Dec 5 01:41:26 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Dec 5 01:41:26 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 5 01:41:26 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 5 01:41:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 5 01:41:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 5 01:41:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 5 01:41:26 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 5 01:41:26 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 5 01:41:26 localhost kernel: signal: max sigframe size: 1776
Dec 5 01:41:26 localhost kernel: BIOS-provided physical RAM map:
Dec 5 01:41:26 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 5 01:41:26 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 5 01:41:26 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 5 01:41:26 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 5 01:41:26 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 5 01:41:26 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 5 01:41:26 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 5 01:41:26 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Dec 5 01:41:26 localhost kernel: NX (Execute Disable) protection: active
Dec 5 01:41:26 localhost kernel: SMBIOS 2.8 present.
Dec 5 01:41:26 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 5 01:41:26 localhost kernel: Hypervisor detected: KVM
Dec 5 01:41:26 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 5 01:41:26 localhost kernel: kvm-clock: using sched offset of 1934389820 cycles
Dec 5 01:41:26 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 5 01:41:26 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 5 01:41:26 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Dec 5 01:41:26 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 5 01:41:26 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 5 01:41:26 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 5 01:41:26 localhost kernel: Using GB pages for direct mapping
Dec 5 01:41:26 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Dec 5 01:41:26 localhost kernel: ACPI: Early table checksum verification disabled
Dec 5 01:41:26 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 5 01:41:26 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 5 01:41:26 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 5 01:41:26 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 5 01:41:26 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 5 01:41:26 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 5 01:41:26 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 5 01:41:26 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 5 01:41:26 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 5 01:41:26 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 5 01:41:26 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 5 01:41:26 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 5 01:41:26 localhost kernel: No NUMA configuration found
Dec 5 01:41:26 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Dec 5 01:41:26 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Dec 5 01:41:26 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Dec 5 01:41:26 localhost kernel: Zone ranges:
Dec 5 01:41:26 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 5 01:41:26 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 5 01:41:26 localhost kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Dec 5 01:41:26 localhost kernel: Device empty
Dec 5 01:41:26 localhost kernel: Movable zone start for each node
Dec 5 01:41:26 localhost kernel: Early memory node ranges
Dec 5 01:41:26 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 5 01:41:26 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 5 01:41:26 localhost kernel: node 0: [mem 0x0000000100000000-0x000000043fffffff]
Dec 5 01:41:26 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Dec 5 01:41:26 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 5 01:41:26 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 5 01:41:26 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 5 01:41:26 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 5 01:41:26 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 5 01:41:26 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 5 01:41:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 5 01:41:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 5 01:41:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 5 01:41:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 5 01:41:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 5 01:41:26 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 5 01:41:26 localhost kernel: TSC deadline timer available
Dec 5 01:41:26 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Dec 5 01:41:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 5 01:41:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 5 01:41:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 5 01:41:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 5 01:41:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 5 01:41:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 5 01:41:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 5 01:41:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 5 01:41:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 5 01:41:26 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 5 01:41:26 localhost kernel: Booting paravirtualized kernel on KVM
Dec 5 01:41:26 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 5 01:41:26 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 5 01:41:26 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Dec 5 01:41:26 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 5 01:41:26 localhost kernel: Fallback order for Node 0: 0
Dec 5 01:41:26 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4128475
Dec 5 01:41:26 localhost kernel: Policy zone: Normal
Dec 5 01:41:26 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 5 01:41:26 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
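The crashkernel= setting recorded above uses range syntax: each comma-separated entry reads <start>-[<end>]:<size>, and the kernel reserves the size attached to the range that brackets the amount of installed RAM. With roughly 16 GiB of system RAM this guest matches 4G-64G:256M, which is exactly the "Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)" line logged earlier. A minimal Python sketch of that selection rule follows; pick_crashkernel is a hypothetical name for illustration, not kernel code.

    # Hypothetical helper (not kernel code) illustrating the documented
    # crashkernel=<range>:<size>[,...] selection rule.
    def pick_crashkernel(spec, ram_bytes):
        units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}
        parse = lambda s: int(s[:-1]) * units[s[-1]]
        for entry in spec.split(","):
            rng, size = entry.split(":")
            start, _, end = rng.partition("-")
            lo = parse(start)
            hi = parse(end) if end else float("inf")   # open-ended range like 64G-
            if lo <= ram_bytes < hi:                   # first matching range wins
                return size
        return None

    # 16 GiB falls inside 4G-64G, hence the 256M reservation logged above.
    assert pick_crashkernel("1G-4G:192M,4G-64G:256M,64G-:512M", 16 << 30) == "256M"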
Dec 5 01:41:26 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 5 01:41:26 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 5 01:41:26 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 5 01:41:26 localhost kernel: software IO TLB: area num 8.
Dec 5 01:41:26 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Dec 5 01:41:26 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Dec 5 01:41:26 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 5 01:41:26 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Dec 5 01:41:26 localhost kernel: ftrace: allocated 176 pages with 3 groups
Dec 5 01:41:26 localhost kernel: Dynamic Preempt: voluntary
Dec 5 01:41:26 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 5 01:41:26 localhost kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 5 01:41:26 localhost kernel: 	Trampoline variant of Tasks RCU enabled.
Dec 5 01:41:26 localhost kernel: 	Rude variant of Tasks RCU enabled.
Dec 5 01:41:26 localhost kernel: 	Tracing variant of Tasks RCU enabled.
Dec 5 01:41:26 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 5 01:41:26 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 5 01:41:26 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 5 01:41:26 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 5 01:41:26 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 5 01:41:26 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Dec 5 01:41:26 localhost kernel: Console: colour VGA+ 80x25
Dec 5 01:41:26 localhost kernel: printk: console [tty0] enabled
Dec 5 01:41:26 localhost kernel: printk: console [ttyS0] enabled
Dec 5 01:41:26 localhost kernel: ACPI: Core revision 20211217
Dec 5 01:41:26 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 5 01:41:26 localhost kernel: x2apic enabled
Dec 5 01:41:26 localhost kernel: Switched APIC routing to physical x2apic.
Dec 5 01:41:26 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 5 01:41:26 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 5 01:41:26 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 5 01:41:26 localhost kernel: LSM: Security Framework initializing
Dec 5 01:41:26 localhost kernel: Yama: becoming mindful.
Dec 5 01:41:26 localhost kernel: SELinux: Initializing.
Dec 5 01:41:26 localhost kernel: LSM support for eBPF active
Dec 5 01:41:26 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 5 01:41:26 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 5 01:41:26 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 5 01:41:26 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 5 01:41:26 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 5 01:41:26 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 5 01:41:26 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 5 01:41:26 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 5 01:41:26 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 5 01:41:26 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 5 01:41:26 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 5 01:41:26 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 5 01:41:26 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 5 01:41:26 localhost kernel: Freeing SMP alternatives memory: 36K
Dec 5 01:41:26 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 5 01:41:26 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Dec 5 01:41:26 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 5 01:41:26 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 5 01:41:26 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 5 01:41:26 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 5 01:41:26 localhost kernel: ... version: 0
Dec 5 01:41:26 localhost kernel: ... bit width: 48
Dec 5 01:41:26 localhost kernel: ... generic registers: 6
Dec 5 01:41:26 localhost kernel: ... value mask: 0000ffffffffffff
Dec 5 01:41:26 localhost kernel: ... max period: 00007fffffffffff
Dec 5 01:41:26 localhost kernel: ... fixed-purpose events: 0
Dec 5 01:41:26 localhost kernel: ... event mask: 000000000000003f
Dec 5 01:41:26 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 5 01:41:26 localhost kernel: rcu: 	Max phase no-delay instances is 400.
Dec 5 01:41:26 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 5 01:41:26 localhost kernel: x86: Booting SMP configuration:
Dec 5 01:41:26 localhost kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Dec 5 01:41:26 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 5 01:41:26 localhost kernel: smpboot: Max logical packages: 8
Dec 5 01:41:26 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 5 01:41:26 localhost kernel: node 0 deferred pages initialised in 24ms
Dec 5 01:41:26 localhost kernel: devtmpfs: initialized
Dec 5 01:41:26 localhost kernel: x86/mm: Memory block size: 128MB
Dec 5 01:41:26 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 5 01:41:26 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Dec 5 01:41:26 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 5 01:41:26 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 5 01:41:26 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Dec 5 01:41:26 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 5 01:41:26 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 5 01:41:26 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 5 01:41:26 localhost kernel: audit: type=2000 audit(1764916884.526:1): state=initialized audit_enabled=0 res=1
Dec 5 01:41:26 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 5 01:41:26 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 5 01:41:26 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 5 01:41:26 localhost kernel: cpuidle: using governor menu
Dec 5 01:41:26 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Dec 5 01:41:26 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 5 01:41:26 localhost kernel: PCI: Using configuration type 1 for base access
Dec 5 01:41:26 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 5 01:41:26 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 5 01:41:26 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Dec 5 01:41:26 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 5 01:41:26 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 5 01:41:26 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 5 01:41:26 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 5 01:41:26 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 5 01:41:26 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 5 01:41:26 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 5 01:41:26 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 5 01:41:26 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 5 01:41:26 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 5 01:41:26 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 5 01:41:26 localhost kernel: ACPI: Interpreter enabled
Dec 5 01:41:26 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 5 01:41:26 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 5 01:41:26 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 5 01:41:26 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 5 01:41:26 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 5 01:41:26 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 5 01:41:26 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [3] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [4] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [5] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [6] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [7] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [8] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [9] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [10] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [11] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [12] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [13] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [14] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [15] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [16] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [17] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [18] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [19] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [20] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [21] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [22] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [23] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [24] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [25] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [26] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [27] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [28] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [29] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [30] registered
Dec 5 01:41:26 localhost kernel: acpiphp: Slot [31] registered
Dec 5 01:41:26 localhost kernel: PCI host bridge to bus 0000:00
Dec 5 01:41:26 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 5 01:41:26 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 5 01:41:26 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 5 01:41:26 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 5 01:41:26 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Dec 5 01:41:26 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 5 01:41:26 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc140-0xc14f]
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc100-0xc11f]
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 5 01:41:26 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 5 01:41:26 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 5 01:41:26 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 5 01:41:26 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 5 01:41:26 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 5 01:41:26 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 5 01:41:26 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 5 01:41:26 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 5 01:41:26 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 5 01:41:26 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 5 01:41:26 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 5 01:41:26 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 5 01:41:26 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 5 01:41:26 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 5 01:41:26 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 5 01:41:26 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 5 01:41:26 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 5 01:41:26 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 5 01:41:26 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 5 01:41:26 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc120-0xc13f]
Dec 5 01:41:26 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 5 01:41:26 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 5 01:41:26 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 5 01:41:26 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 5 01:41:26 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 5 01:41:26 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 5 01:41:26 localhost kernel: iommu: Default domain type: Translated
Dec 5 01:41:26 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 5 01:41:26 localhost kernel: SCSI subsystem initialized
Dec 5 01:41:26 localhost kernel: ACPI: bus type USB registered
Dec 5 01:41:26 localhost kernel: usbcore: registered new interface driver usbfs
Dec 5 01:41:26 localhost kernel: usbcore: registered new interface driver hub
Dec 5 01:41:26 localhost kernel: usbcore: registered new device driver usb
Dec 5 01:41:26 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 5 01:41:26 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 5 01:41:26 localhost kernel: PTP clock support registered
Dec 5 01:41:26 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 5 01:41:26 localhost kernel: NetLabel: Initializing
Dec 5 01:41:26 localhost kernel: NetLabel: domain hash size = 128
Dec 5 01:41:26 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Dec 5 01:41:26 localhost kernel: NetLabel: unlabeled traffic allowed by default
Dec 5 01:41:26 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 5 01:41:26 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 5 01:41:26 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 5 01:41:26 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 5 01:41:26 localhost kernel: vgaarb: loaded
Dec 5 01:41:26 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 5 01:41:26 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 5 01:41:26 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 5 01:41:26 localhost kernel: pnp: PnP ACPI init
Dec 5 01:41:26 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 5 01:41:26 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 5 01:41:26 localhost kernel: NET: Registered PF_INET protocol family
Dec 5 01:41:26 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 5 01:41:26 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Dec 5 01:41:26 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 5 01:41:26 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 5 01:41:26 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 5 01:41:26 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Dec 5 01:41:26 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Dec 5 01:41:26 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 5 01:41:26 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 5 01:41:26 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 5 01:41:26 localhost kernel: NET: Registered PF_XDP protocol family
Dec 5 01:41:26 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 5 01:41:26 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 5 01:41:26 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 5 01:41:26 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 5 01:41:26 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 5 01:41:26 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 5 01:41:26 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 5 01:41:26 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 27030 usecs
Dec 5 01:41:26 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 5 01:41:26 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 5 01:41:26 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 5 01:41:26 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 5 01:41:26 localhost kernel: ACPI: bus type thunderbolt registered
Dec 5 01:41:26 localhost kernel: Initialise system trusted keyrings
Dec 5 01:41:26 localhost kernel: Key type blacklist registered
Dec 5 01:41:26 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Dec 5 01:41:26 localhost kernel: zbud: loaded
Dec 5 01:41:26 localhost kernel: integrity: Platform Keyring initialized
Dec 5 01:41:26 localhost kernel: NET: Registered PF_ALG protocol family
Dec 5 01:41:26 localhost kernel: xor: automatically using best checksumming function avx
Dec 5 01:41:26 localhost kernel: Key type asymmetric registered
Dec 5 01:41:26 localhost kernel: Asymmetric key parser 'x509' registered
Dec 5 01:41:26 localhost kernel: Running certificate verification selftests
Dec 5 01:41:26 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 5 01:41:26 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 5 01:41:26 localhost kernel: io scheduler mq-deadline registered
Dec 5 01:41:26 localhost kernel: io scheduler kyber registered
Dec 5 01:41:26 localhost kernel: io scheduler bfq registered
Dec 5 01:41:26 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 5 01:41:26 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 5 01:41:26 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 5 01:41:26 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 5 01:41:26 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 5 01:41:26 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 5 01:41:26 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 5 01:41:26 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 5 01:41:26 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 5 01:41:26 localhost kernel: Non-volatile memory driver v1.3
Dec 5 01:41:26 localhost kernel: rdac: device handler registered
Dec 5 01:41:26 localhost kernel: hp_sw: device handler registered
Dec 5 01:41:26 localhost kernel: emc: device handler registered
Dec 5 01:41:26 localhost kernel: alua: device handler registered
Dec 5 01:41:26 localhost kernel: libphy: Fixed MDIO Bus: probed
Dec 5 01:41:26 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Dec 5 01:41:26 localhost kernel: ehci-pci: EHCI PCI platform driver
Dec 5 01:41:26 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Dec 5 01:41:26 localhost kernel: ohci-pci: OHCI PCI platform driver
Dec 5 01:41:26 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Dec 5 01:41:26 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 5 01:41:26 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 5 01:41:26 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 5 01:41:26 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 5 01:41:26 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 5 01:41:26 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 5 01:41:26 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 5 01:41:26 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Dec 5 01:41:26 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 5 01:41:26 localhost kernel: hub 1-0:1.0: USB hub found
Dec 5 01:41:26 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 5 01:41:26 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 5 01:41:26 localhost kernel: usbserial: USB Serial support registered for generic
Dec 5 01:41:26 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 5 01:41:26 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 5 01:41:26 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 5 01:41:26 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 5 01:41:26 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 5 01:41:26 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 5 01:41:26 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 5 01:41:26 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-05T06:41:25 UTC (1764916885)
Dec 5 01:41:26 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 5 01:41:26 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 5 01:41:26 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 5 01:41:26 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 5 01:41:26 localhost kernel: usbcore: registered new interface driver usbhid
Dec 5 01:41:26 localhost kernel: usbhid: USB HID core driver
Dec 5 01:41:26 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 5 01:41:26 localhost kernel: Initializing XFRM netlink socket
Dec 5 01:41:26 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 5 01:41:26 localhost kernel: Segment Routing with IPv6
Dec 5 01:41:26 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 5 01:41:26 localhost kernel: mpls_gso: MPLS GSO support
Dec 5 01:41:26 localhost kernel: IPI shorthand broadcast: enabled
Dec 5 01:41:26 localhost kernel: AVX2 version of gcm_enc/dec engaged.
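The rtc_cmos entry above logs the same instant twice, as a UTC wall-clock time and as a Unix epoch value (1764916885). The two agree, and comparing them against the syslog prefixes shows this host stamps its log in local time five hours behind UTC. A quick standard-library cross-check, using only values taken from the log:

    from datetime import datetime, timezone

    # Epoch value logged by rtc_cmos, converted back to UTC wall-clock time.
    stamp = datetime.fromtimestamp(1764916885, tz=timezone.utc)
    assert stamp.isoformat() == "2025-12-05T06:41:25+00:00"
    # The syslog prefixes read 01:41 for the same instant, so the host
    # appears to log in a UTC-5 local timezone.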
Dec 5 01:41:26 localhost kernel: AES CTR mode by8 optimization enabled
Dec 5 01:41:26 localhost kernel: sched_clock: Marking stable (739737063, 178326603)->(1050429415, -132365749)
Dec 5 01:41:26 localhost kernel: registered taskstats version 1
Dec 5 01:41:26 localhost kernel: Loading compiled-in X.509 certificates
Dec 5 01:41:26 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 5 01:41:26 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 5 01:41:26 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 5 01:41:26 localhost kernel: zswap: loaded using pool lzo/zbud
Dec 5 01:41:26 localhost kernel: page_owner is disabled
Dec 5 01:41:26 localhost kernel: Key type big_key registered
Dec 5 01:41:26 localhost kernel: Freeing initrd memory: 74232K
Dec 5 01:41:26 localhost kernel: Key type encrypted registered
Dec 5 01:41:26 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 5 01:41:26 localhost kernel: Loading compiled-in module X.509 certificates
Dec 5 01:41:26 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 5 01:41:26 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 5 01:41:26 localhost kernel: ima: No architecture policies found
Dec 5 01:41:26 localhost kernel: evm: Initialising EVM extended attributes:
Dec 5 01:41:26 localhost kernel: evm: security.selinux
Dec 5 01:41:26 localhost kernel: evm: security.SMACK64 (disabled)
Dec 5 01:41:26 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 5 01:41:26 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 5 01:41:26 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 5 01:41:26 localhost kernel: evm: security.apparmor (disabled)
Dec 5 01:41:26 localhost kernel: evm: security.ima
Dec 5 01:41:26 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 5 01:41:26 localhost kernel: evm: security.capability
Dec 5 01:41:26 localhost kernel: evm: HMAC attrs: 0x1
Dec 5 01:41:26 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 5 01:41:26 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 5 01:41:26 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 5 01:41:26 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 5 01:41:26 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 5 01:41:26 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 5 01:41:26 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 5 01:41:26 localhost kernel: Freeing unused decrypted memory: 2036K
Dec 5 01:41:26 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Dec 5 01:41:26 localhost kernel: Write protecting the kernel read-only data: 26624k
Dec 5 01:41:26 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 5 01:41:26 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Dec 5 01:41:26 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 5 01:41:26 localhost kernel: Run /init as init process
Dec 5 01:41:26 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 5 01:41:26 localhost systemd[1]: Detected virtualization kvm.
Dec 5 01:41:26 localhost systemd[1]: Detected architecture x86-64.
Dec 5 01:41:26 localhost systemd[1]: Running in initrd.
Dec 5 01:41:26 localhost systemd[1]: No hostname configured, using default hostname.
Dec 5 01:41:26 localhost systemd[1]: Hostname set to <localhost>.
Dec 5 01:41:26 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 5 01:41:26 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 5 01:41:26 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 5 01:41:26 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 5 01:41:26 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 5 01:41:26 localhost systemd[1]: Reached target Local File Systems.
Dec 5 01:41:26 localhost systemd[1]: Reached target Path Units.
Dec 5 01:41:26 localhost systemd[1]: Reached target Slice Units.
Dec 5 01:41:26 localhost systemd[1]: Reached target Swaps.
Dec 5 01:41:26 localhost systemd[1]: Reached target Timer Units.
Dec 5 01:41:26 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 5 01:41:26 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 5 01:41:26 localhost systemd[1]: Listening on Journal Socket.
Dec 5 01:41:26 localhost systemd[1]: Listening on udev Control Socket.
Dec 5 01:41:26 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 5 01:41:26 localhost systemd[1]: Reached target Socket Units.
Dec 5 01:41:26 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 5 01:41:26 localhost systemd[1]: Starting Journal Service...
Dec 5 01:41:26 localhost systemd[1]: Starting Load Kernel Modules...
Dec 5 01:41:26 localhost systemd[1]: Starting Create System Users...
Dec 5 01:41:26 localhost systemd[1]: Starting Setup Virtual Console...
Dec 5 01:41:26 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 5 01:41:26 localhost systemd[1]: Finished Load Kernel Modules.
Dec 5 01:41:26 localhost systemd-journald[283]: Journal started
Dec 5 01:41:26 localhost systemd-journald[283]: Runtime Journal (/run/log/journal/2b745fc25ba6425d9fc8d9117ea29cbc) is 8.0M, max 314.7M, 306.7M free.
Dec 5 01:41:26 localhost systemd-modules-load[284]: Module 'msr' is built in
Dec 5 01:41:26 localhost systemd[1]: Started Journal Service.
Dec 5 01:41:26 localhost systemd[1]: Finished Setup Virtual Console.
Dec 5 01:41:26 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 5 01:41:26 localhost systemd[1]: Starting dracut cmdline hook...
Dec 5 01:41:26 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 5 01:41:26 localhost systemd-sysusers[285]: Creating group 'sgx' with GID 997.
Dec 5 01:41:26 localhost systemd-sysusers[285]: Creating group 'users' with GID 100.
Dec 5 01:41:26 localhost systemd-sysusers[285]: Creating group 'dbus' with GID 81.
Dec 5 01:41:26 localhost systemd-sysusers[285]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 5 01:41:26 localhost systemd[1]: Finished Create System Users.
Dec 5 01:41:26 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 5 01:41:26 localhost dracut-cmdline[289]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Dec 5 01:41:26 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 5 01:41:26 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 5 01:41:26 localhost dracut-cmdline[289]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 5 01:41:26 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 5 01:41:26 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 5 01:41:26 localhost systemd[1]: Finished dracut cmdline hook.
Dec 5 01:41:26 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 5 01:41:26 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 5 01:41:26 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 5 01:41:26 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Dec 5 01:41:26 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 5 01:41:26 localhost kernel: RPC: Registered udp transport module.
Dec 5 01:41:26 localhost kernel: RPC: Registered tcp transport module.
Dec 5 01:41:26 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 5 01:41:26 localhost rpc.statd[410]: Version 2.5.4 starting
Dec 5 01:41:26 localhost rpc.statd[410]: Initializing NSM state
Dec 5 01:41:26 localhost rpc.idmapd[415]: Setting log level to 0
Dec 5 01:41:26 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 5 01:41:26 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 5 01:41:26 localhost systemd-udevd[428]: Using default interface naming scheme 'rhel-9.0'.
Dec 5 01:41:26 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 5 01:41:26 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 5 01:41:26 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 5 01:41:26 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 5 01:41:26 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 5 01:41:26 localhost systemd[1]: Reached target System Initialization.
Dec 5 01:41:26 localhost systemd[1]: Reached target Basic System.
Dec 5 01:41:26 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 5 01:41:26 localhost systemd[1]: Reached target Network.
Dec 5 01:41:26 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 5 01:41:26 localhost systemd[1]: Starting dracut initqueue hook...
Dec 5 01:41:26 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Dec 5 01:41:26 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 5 01:41:26 localhost kernel: GPT:20971519 != 838860799
Dec 5 01:41:26 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 5 01:41:26 localhost kernel: GPT:20971519 != 838860799
Dec 5 01:41:26 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 5 01:41:26 localhost kernel: vda: vda1 vda2 vda3 vda4
Dec 5 01:41:26 localhost systemd-udevd[466]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 01:41:26 localhost kernel: scsi host0: ata_piix
Dec 5 01:41:26 localhost kernel: scsi host1: ata_piix
Dec 5 01:41:26 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Dec 5 01:41:26 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Dec 5 01:41:26 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 5 01:41:26 localhost systemd[1]: Reached target Initrd Root Device.
Dec 5 01:41:27 localhost kernel: ata1: found unknown device (class 0)
Dec 5 01:41:27 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 5 01:41:27 localhost kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 5 01:41:27 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 5 01:41:27 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 5 01:41:27 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 5 01:41:27 localhost systemd[1]: Finished dracut initqueue hook.
Dec 5 01:41:27 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 5 01:41:27 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 5 01:41:27 localhost systemd[1]: Reached target Remote File Systems.
Dec 5 01:41:27 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 5 01:41:27 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 5 01:41:27 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Dec 5 01:41:27 localhost systemd-fsck[511]: /usr/sbin/fsck.xfs: XFS file system.
Dec 5 01:41:27 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 5 01:41:27 localhost systemd[1]: Mounting /sysroot...
Dec 5 01:41:27 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 5 01:41:27 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Dec 5 01:41:27 localhost kernel: XFS (vda4): Ending clean mount
Dec 5 01:41:27 localhost systemd[1]: Mounted /sysroot.
Dec 5 01:41:27 localhost systemd[1]: Reached target Initrd Root File System.
Dec 5 01:41:27 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 5 01:41:27 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 5 01:41:27 localhost systemd[1]: Reached target Initrd File Systems.
Dec 5 01:41:27 localhost systemd[1]: Reached target Initrd Default Target.
Dec 5 01:41:27 localhost systemd[1]: Starting dracut mount hook...
Dec 5 01:41:27 localhost systemd[1]: Finished dracut mount hook.
Dec 5 01:41:27 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 5 01:41:27 localhost rpc.idmapd[415]: exiting on signal 15
Dec 5 01:41:27 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 5 01:41:27 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 5 01:41:27 localhost systemd[1]: Stopped target Network.
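The GPT warnings reported while probing vda above are the usual signature of a disk image that was enlarged after its partition table was written: virtio_blk sees 838860800 512-byte sectors (400 GiB), so the backup GPT header should occupy the last LBA, 838860799, yet the primary header still points at LBA 20971519, the last sector of a 10 GiB image. A short Python sketch of the arithmetic, using only numbers from the log:

    SECTOR = 512
    total_sectors = 838860800             # virtio_blk: [vda] 838860800 512-byte logical blocks
    expected_alt_lba = total_sectors - 1  # backup GPT header belongs on the last LBA
    recorded_alt_lba = 20971519           # what the primary header still points at
    assert expected_alt_lba == 838860799  # the kernel's "GPT:20971519 != 838860799"
    original_gib = (recorded_alt_lba + 1) * SECTOR / 2**30
    assert original_gib == 10.0           # the image was created at 10 GiB, then grown

Rewriting the backup header at the true end of the disk, for example with GNU Parted as the kernel message suggests, clears the warning; the boot proceeds regardless because the primary header is intact.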
Dec 5 01:41:27 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Timer Units.
Dec 5 01:41:27 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 5 01:41:27 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Basic System.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Path Units.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Remote File Systems.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Slice Units.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Socket Units.
Dec 5 01:41:27 localhost systemd[1]: Stopped target System Initialization.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Local File Systems.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Swaps.
Dec 5 01:41:27 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped dracut mount hook.
Dec 5 01:41:27 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 5 01:41:27 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 5 01:41:27 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 5 01:41:27 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 5 01:41:27 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 5 01:41:27 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 5 01:41:27 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 5 01:41:27 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 5 01:41:27 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 5 01:41:27 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 5 01:41:27 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 5 01:41:27 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 5 01:41:27 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 5 01:41:27 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Closed udev Control Socket.
Dec 5 01:41:27 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Closed udev Kernel Socket.
Dec 5 01:41:27 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 5 01:41:27 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 5 01:41:27 localhost systemd[1]: Starting Cleanup udev Database...
Dec 5 01:41:27 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 5 01:41:27 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 5 01:41:27 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Stopped Create System Users.
Dec 5 01:41:27 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 5 01:41:27 localhost systemd[1]: Finished Cleanup udev Database.
Dec 5 01:41:27 localhost systemd[1]: Reached target Switch Root.
Dec 5 01:41:27 localhost systemd[1]: Starting Switch Root...
Dec 5 01:41:27 localhost systemd[1]: Switching root.
Dec 5 01:41:27 localhost systemd-journald[283]: Journal stopped
Dec 5 01:41:28 localhost systemd-journald[283]: Received SIGTERM from PID 1 (systemd).
Dec 5 01:41:28 localhost kernel: audit: type=1404 audit(1764916887.940:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 5 01:41:28 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 5 01:41:28 localhost kernel: SELinux: policy capability open_perms=1
Dec 5 01:41:28 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 5 01:41:28 localhost kernel: SELinux: policy capability always_check_network=0
Dec 5 01:41:28 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 5 01:41:28 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 5 01:41:28 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 5 01:41:28 localhost kernel: audit: type=1403 audit(1764916888.023:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 5 01:41:28 localhost systemd[1]: Successfully loaded SELinux policy in 86.113ms.
Dec 5 01:41:28 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.554ms.
Dec 5 01:41:28 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 5 01:41:28 localhost systemd[1]: Detected virtualization kvm.
Dec 5 01:41:28 localhost systemd[1]: Detected architecture x86-64.
Dec 5 01:41:28 localhost systemd-rc-local-generator[582]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 01:41:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 01:41:28 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 5 01:41:28 localhost systemd[1]: Stopped Switch Root.
Dec 5 01:41:28 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 5 01:41:28 localhost systemd[1]: Created slice Slice /system/getty.
Dec 5 01:41:28 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 5 01:41:28 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 5 01:41:28 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 5 01:41:28 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Dec 5 01:41:28 localhost systemd[1]: Created slice User and Session Slice.
Dec 5 01:41:28 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 5 01:41:28 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 5 01:41:28 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 5 01:41:28 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 5 01:41:28 localhost systemd[1]: Stopped target Switch Root.
Dec 5 01:41:28 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 5 01:41:28 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 5 01:41:28 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 5 01:41:28 localhost systemd[1]: Reached target Path Units.
Dec 5 01:41:28 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 5 01:41:28 localhost systemd[1]: Reached target Slice Units.
Dec 5 01:41:28 localhost systemd[1]: Reached target Swaps.
Dec 5 01:41:28 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 5 01:41:28 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 5 01:41:28 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 5 01:41:28 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 5 01:41:28 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 5 01:41:28 localhost systemd[1]: Listening on udev Control Socket.
Dec 5 01:41:28 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 5 01:41:28 localhost systemd[1]: Mounting Huge Pages File System...
Dec 5 01:41:28 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 5 01:41:28 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 5 01:41:28 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 5 01:41:28 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 5 01:41:28 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 5 01:41:28 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 5 01:41:28 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 5 01:41:28 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 5 01:41:28 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 5 01:41:28 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 5 01:41:28 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 5 01:41:28 localhost systemd[1]: Stopped Journal Service.
Dec 5 01:41:28 localhost kernel: fuse: init (API version 7.36)
Dec 5 01:41:28 localhost systemd[1]: Starting Journal Service...
Dec 5 01:41:28 localhost systemd[1]: Starting Load Kernel Modules...
Dec 5 01:41:28 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 5 01:41:28 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 5 01:41:28 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 5 01:41:28 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 5 01:41:28 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 5 01:41:28 localhost systemd[1]: Mounted Huge Pages File System.
Dec 5 01:41:28 localhost systemd-journald[618]: Journal started
Dec 5 01:41:28 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/d70e7573f9252a22999953aab4dc4dc5) is 8.0M, max 314.7M, 306.7M free.
Dec 5 01:41:28 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 5 01:41:28 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 5 01:41:28 localhost systemd-modules-load[619]: Module 'msr' is built in
Dec 5 01:41:28 localhost systemd[1]: Started Journal Service.
Dec 5 01:41:28 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 5 01:41:28 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 5 01:41:28 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 5 01:41:28 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 5 01:41:28 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 5 01:41:28 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 5 01:41:28 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 5 01:41:28 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 5 01:41:28 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 5 01:41:28 localhost systemd[1]: Finished Load Kernel Modules.
Dec 5 01:41:28 localhost kernel: ACPI: bus type drm_connector registered
Dec 5 01:41:28 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 5 01:41:28 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 5 01:41:28 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 5 01:41:28 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 5 01:41:28 localhost systemd[1]: Mounting FUSE Control File System...
Dec 5 01:41:28 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 5 01:41:28 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 5 01:41:28 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 5 01:41:28 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 5 01:41:28 localhost systemd[1]: Starting Load/Save Random Seed...
Dec 5 01:41:28 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/d70e7573f9252a22999953aab4dc4dc5) is 8.0M, max 314.7M, 306.7M free.
Dec 5 01:41:28 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 5 01:41:28 localhost systemd-journald[618]: Received client request to flush runtime journal.
Dec 5 01:41:28 localhost systemd[1]: Starting Create System Users...
Dec 5 01:41:28 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 5 01:41:28 localhost systemd[1]: Mounted FUSE Control File System.
Dec 5 01:41:28 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 5 01:41:28 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 5 01:41:28 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 5 01:41:28 localhost systemd-sysusers[631]: Creating group 'sgx' with GID 989.
Dec 5 01:41:28 localhost systemd-sysusers[631]: Creating group 'systemd-oom' with GID 988.
Dec 5 01:41:28 localhost systemd-sysusers[631]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Dec 5 01:41:28 localhost systemd[1]: Finished Load/Save Random Seed.
Dec 5 01:41:28 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 5 01:41:28 localhost systemd[1]: Finished Create System Users.
Dec 5 01:41:28 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 5 01:41:28 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 5 01:41:28 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 5 01:41:28 localhost systemd[1]: Set up automount EFI System Partition Automount.
Dec 5 01:41:29 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 5 01:41:29 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 5 01:41:29 localhost systemd-udevd[635]: Using default interface naming scheme 'rhel-9.0'.
Dec 5 01:41:29 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 5 01:41:29 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 5 01:41:29 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 5 01:41:29 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 5 01:41:29 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 5 01:41:29 localhost systemd-udevd[645]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 01:41:29 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Dec 5 01:41:29 localhost systemd[1]: Mounting /boot...
Dec 5 01:41:29 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Dec 5 01:41:29 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Dec 5 01:41:29 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Dec 5 01:41:29 localhost kernel: XFS (vda3): Ending clean mount
Dec 5 01:41:29 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Dec 5 01:41:29 localhost systemd[1]: Mounted /boot.
Dec 5 01:41:29 localhost systemd-fsck[686]: fsck.fat 4.2 (2021-01-31)
Dec 5 01:41:29 localhost systemd-fsck[686]: /dev/vda2: 12 files, 1782/51145 clusters
Dec 5 01:41:29 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Dec 5 01:41:29 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 5 01:41:29 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 5 01:41:29 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 5 01:41:29 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 5 01:41:29 localhost kernel: Console: switching to colour dummy device 80x25
Dec 5 01:41:29 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 5 01:41:29 localhost kernel: [drm] features: -context_init
Dec 5 01:41:29 localhost kernel: [drm] number of scanouts: 1
Dec 5 01:41:29 localhost kernel: [drm] number of cap sets: 0
Dec 5 01:41:29 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Dec 5 01:41:29 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Dec 5 01:41:29 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 5 01:41:29 localhost kernel: SVM: TSC scaling supported
Dec 5 01:41:29 localhost kernel: kvm: Nested Virtualization enabled
Dec 5 01:41:29 localhost kernel: SVM: kvm: Nested Paging enabled
Dec 5 01:41:29 localhost kernel: SVM: LBR virtualization supported
Dec 5 01:41:29 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 5 01:41:29 localhost systemd[1]: Mounting /boot/efi...
Dec 5 01:41:29 localhost systemd[1]: Mounted /boot/efi.
Dec 5 01:41:29 localhost systemd[1]: Reached target Local File Systems.
Dec 5 01:41:29 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 5 01:41:29 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 5 01:41:29 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 5 01:41:29 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 5 01:41:29 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 5 01:41:29 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 5 01:41:29 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 5 01:41:29 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 710 (bootctl)
Dec 5 01:41:29 localhost systemd[1]: Starting File System Check on /dev/vda2...
Dec 5 01:41:29 localhost systemd[1]: Finished File System Check on /dev/vda2.
Dec 5 01:41:29 localhost systemd[1]: Mounting EFI System Partition Automount...
Dec 5 01:41:29 localhost systemd[1]: Mounted EFI System Partition Automount.
Dec 5 01:41:29 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 5 01:41:29 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 5 01:41:29 localhost systemd[1]: Starting Security Auditing Service...
Dec 5 01:41:29 localhost systemd[1]: Starting RPC Bind...
Dec 5 01:41:29 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 5 01:41:29 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 5 01:41:29 localhost auditd[725]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Dec 5 01:41:29 localhost auditd[725]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Dec 5 01:41:29 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 5 01:41:29 localhost systemd[1]: Starting Update is Completed...
Dec 5 01:41:29 localhost systemd[1]: Started RPC Bind.
Dec 5 01:41:29 localhost systemd[1]: Finished Update is Completed.
Dec 5 01:41:29 localhost augenrules[730]: /sbin/augenrules: No change
Dec 5 01:41:29 localhost augenrules[741]: No rules
Dec 5 01:41:29 localhost augenrules[741]: enabled 1
Dec 5 01:41:29 localhost augenrules[741]: failure 1
Dec 5 01:41:29 localhost augenrules[741]: pid 725
Dec 5 01:41:29 localhost augenrules[741]: rate_limit 0
Dec 5 01:41:29 localhost augenrules[741]: backlog_limit 8192
Dec 5 01:41:29 localhost augenrules[741]: lost 0
Dec 5 01:41:29 localhost augenrules[741]: backlog 3
Dec 5 01:41:29 localhost augenrules[741]: backlog_wait_time 60000
Dec 5 01:41:29 localhost augenrules[741]: backlog_wait_time_actual 0
Dec 5 01:41:29 localhost augenrules[741]: enabled 1
Dec 5 01:41:29 localhost augenrules[741]: failure 1
Dec 5 01:41:29 localhost augenrules[741]: pid 725
Dec 5 01:41:29 localhost augenrules[741]: rate_limit 0
Dec 5 01:41:29 localhost augenrules[741]: backlog_limit 8192
Dec 5 01:41:29 localhost augenrules[741]: lost 0
Dec 5 01:41:29 localhost augenrules[741]: backlog 4
Dec 5 01:41:29 localhost augenrules[741]: backlog_wait_time 60000
Dec 5 01:41:29 localhost augenrules[741]: backlog_wait_time_actual 0
Dec 5 01:41:29 localhost augenrules[741]: enabled 1
Dec 5 01:41:29 localhost augenrules[741]: failure 1
Dec 5 01:41:29 localhost augenrules[741]: pid 725
Dec 5 01:41:29 localhost augenrules[741]: rate_limit 0
Dec 5 01:41:29 localhost augenrules[741]: backlog_limit 8192
Dec 5 01:41:29 localhost augenrules[741]: lost 0
Dec 5 01:41:29 localhost augenrules[741]: backlog 3
Dec 5 01:41:29 localhost augenrules[741]: backlog_wait_time 60000
Dec 5 01:41:29 localhost augenrules[741]: backlog_wait_time_actual 0
Dec 5 01:41:29 localhost systemd[1]: Started Security Auditing Service.
Dec 5 01:41:29 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 5 01:41:29 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 5 01:41:29 localhost systemd[1]: Reached target System Initialization.
Dec 5 01:41:29 localhost systemd[1]: Started dnf makecache --timer.
Dec 5 01:41:29 localhost systemd[1]: Started Daily rotation of log files.
Dec 5 01:41:29 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 5 01:41:29 localhost systemd[1]: Reached target Timer Units.
Dec 5 01:41:29 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 5 01:41:29 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 5 01:41:29 localhost systemd[1]: Reached target Socket Units.
Dec 5 01:41:29 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Dec 5 01:41:29 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 5 01:41:29 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 5 01:41:29 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 5 01:41:29 localhost systemd[1]: Reached target Basic System.
Dec 5 01:41:29 localhost journal[750]: Ready
Dec 5 01:41:29 localhost systemd[1]: Starting NTP client/server...
Dec 5 01:41:29 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 5 01:41:29 localhost systemd[1]: Started irqbalance daemon.
Dec 5 01:41:29 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 5 01:41:29 localhost systemd[1]: Starting System Logging Service...
Dec 5 01:41:29 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 5 01:41:29 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 5 01:41:29 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 5 01:41:29 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 5 01:41:29 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 5 01:41:29 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 5 01:41:29 localhost systemd[1]: Starting User Login Management...
Dec 5 01:41:29 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 5 01:41:29 localhost rsyslogd[758]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="758" x-info="https://www.rsyslog.com"] start
Dec 5 01:41:29 localhost systemd[1]: Started System Logging Service.
Dec 5 01:41:29 localhost rsyslogd[758]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Dec 5 01:41:29 localhost systemd-logind[760]: New seat seat0.
Dec 5 01:41:29 localhost chronyd[765]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 5 01:41:29 localhost chronyd[765]: Using right/UTC timezone to obtain leap second data
Dec 5 01:41:29 localhost chronyd[765]: Loaded seccomp filter (level 2)
Dec 5 01:41:29 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 5 01:41:29 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 5 01:41:29 localhost systemd[1]: Started NTP client/server.
Dec 5 01:41:29 localhost systemd[1]: Started User Login Management.
Dec 5 01:41:29 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 5 01:41:30 localhost cloud-init[769]: Cloud-init v. 22.1-9.el9 running 'init-local' at Fri, 05 Dec 2025 06:41:30 +0000. Up 5.32 seconds.
Dec 5 01:41:30 localhost systemd[1]: Starting Hostname Service...
Dec 5 01:41:30 localhost systemd[1]: Started Hostname Service.
Dec 5 01:41:30 localhost systemd-hostnamed[784]: Hostname set to (static)
Dec 5 01:41:30 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp_ymelawh.mount: Deactivated successfully.
Dec 5 01:41:30 localhost systemd[1]: Finished Initial cloud-init job (pre-networking).
Dec 5 01:41:30 localhost systemd[1]: Reached target Preparation for Network.
Dec 5 01:41:30 localhost systemd[1]: Starting Network Manager...
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.6713] NetworkManager (version 1.42.2-1.el9) is starting... (boot:3e2b16c7-3283-4f81-b3d8-c26351ffef2c)
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.6719] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Dec 5 01:41:30 localhost systemd[1]: Started Network Manager.
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.6753] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 5 01:41:30 localhost systemd[1]: Reached target Network.
Dec 5 01:41:30 localhost systemd[1]: Starting Network Manager Wait Online...
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.6839] manager[0x55e078d51020]: monitoring kernel firmware directory '/lib/firmware'.
Dec 5 01:41:30 localhost systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.6915] hostname: hostname: using hostnamed
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.6915] hostname: static hostname changed from (none) to "np0005546419.novalocal"
Dec 5 01:41:30 localhost systemd[1]: Starting Enable periodic update of entitlement certificates....
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.6922] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 5 01:41:30 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 5 01:41:30 localhost systemd[1]: Started Enable periodic update of entitlement certificates..
Dec 5 01:41:30 localhost systemd[1]: Started GSSAPI Proxy Daemon.
Dec 5 01:41:30 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 5 01:41:30 localhost systemd[1]: Reached target NFS client services.
Dec 5 01:41:30 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7112] manager[0x55e078d51020]: rfkill: Wi-Fi hardware radio set enabled
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7116] manager[0x55e078d51020]: rfkill: WWAN hardware radio set enabled
Dec 5 01:41:30 localhost systemd[1]: Reached target Remote File Systems.
Dec 5 01:41:30 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7163] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7163] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7166] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7168] manager: Networking is enabled by state file
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7184] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7184] settings: Loaded settings plugin: keyfile (internal)
Dec 5 01:41:30 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7210] dhcp: init: Using DHCP client 'internal'
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7213] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7227] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7233] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7240] device (lo): Activation: starting connection 'lo' (17dc3b4d-4509-4175-9eed-28194dbd11b8)
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7249] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7253] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 5 01:41:30 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7287] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7289] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7290] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7295] device (eth0): carrier: link connected
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7298] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7316] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7340] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7345] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7346] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7349] manager: NetworkManager state is now CONNECTING
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7350] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7356] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7359] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 5 01:41:30 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7414] dhcp4 (eth0): state changed new lease, address=38.102.83.210
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7417] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7439] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7454] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7457] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7462] device (lo): Activation: successful, device activated.
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7467] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7469] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7473] manager: NetworkManager state is now CONNECTED_SITE
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7476] device (eth0): Activation: successful, device activated.
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7480] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 5 01:41:30 localhost NetworkManager[789]: [1764916890.7484] manager: startup complete
Dec 5 01:41:30 localhost systemd[1]: Finished Network Manager Wait Online.
Dec 5 01:41:30 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Dec 5 01:41:31 localhost cloud-init[978]: Cloud-init v. 22.1-9.el9 running 'init' at Fri, 05 Dec 2025 06:41:30 +0000. Up 6.16 seconds.
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: | eth0 | True | 38.102.83.210 | 255.255.255.0 | global | fa:16:3e:98:23:65 |
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: | eth0 | True | fe80::f816:3eff:fe98:2365/64 | . | link | fa:16:3e:98:23:65 |
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: | lo | True | ::1/128 | . | host | . |
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: | 0 | 0.0.0.0 | 38.102.83.1 | 0.0.0.0 | eth0 | UG |
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: | 1 | 38.102.83.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: | 2 | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 | eth0 | UGH |
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: | 1 | fe80::/64 | :: | eth0 | U |
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: | 3 | multicast | :: | eth0 | U |
Dec 5 01:41:31 localhost cloud-init[978]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 5 01:41:31 localhost systemd[1]: Starting Authorization Manager...
Dec 5 01:41:31 localhost polkitd[1036]: Started polkitd version 0.117
Dec 5 01:41:31 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Dec 5 01:41:31 localhost systemd[1]: Started Authorization Manager.
Dec 5 01:41:33 localhost cloud-init[978]: Generating public/private rsa key pair.
Dec 5 01:41:33 localhost cloud-init[978]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 5 01:41:33 localhost cloud-init[978]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 5 01:41:33 localhost cloud-init[978]: The key fingerprint is:
Dec 5 01:41:33 localhost cloud-init[978]: SHA256:ct3cAPbOukukTItA56bSCG1+a5w6D9DigsluZvLijWE root@np0005546419.novalocal
Dec 5 01:41:33 localhost cloud-init[978]: The key's randomart image is:
Dec 5 01:41:33 localhost cloud-init[978]: +---[RSA 3072]----+
Dec 5 01:41:33 localhost cloud-init[978]: | o |
Dec 5 01:41:33 localhost cloud-init[978]: | . o |
Dec 5 01:41:33 localhost cloud-init[978]: | . . o |
Dec 5 01:41:33 localhost cloud-init[978]: | o . o . = o |
Dec 5 01:41:33 localhost cloud-init[978]: |+ + . + S o = . |
Dec 5 01:41:33 localhost cloud-init[978]: |+B o + * + . |
Dec 5 01:41:33 localhost cloud-init[978]: |+E=.+.. + o |
Dec 5 01:41:33 localhost cloud-init[978]: |*+=++. . . |
Dec 5 01:41:33 localhost cloud-init[978]: |**o*o o. |
Dec 5 01:41:33 localhost cloud-init[978]: +----[SHA256]-----+
Dec 5 01:41:33 localhost cloud-init[978]: Generating public/private ecdsa key pair.
Dec 5 01:41:33 localhost cloud-init[978]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 5 01:41:33 localhost cloud-init[978]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 5 01:41:33 localhost cloud-init[978]: The key fingerprint is:
Dec 5 01:41:33 localhost cloud-init[978]: SHA256:Mz5Xzzg6UVDd2nh73DBq18EpyZYSwKPA/wScCfu7O6k root@np0005546419.novalocal
Dec 5 01:41:33 localhost cloud-init[978]: The key's randomart image is:
Dec 5 01:41:33 localhost cloud-init[978]: +---[ECDSA 256]---+
Dec 5 01:41:33 localhost cloud-init[978]: | ..o +..... . |
Dec 5 01:41:33 localhost cloud-init[978]: | o.= o.. . .|
Dec 5 01:41:33 localhost cloud-init[978]: | .o o ..o ++.|
Dec 5 01:41:33 localhost cloud-init[978]: | .o . ..*=+o|
Dec 5 01:41:33 localhost cloud-init[978]: | .S .+..*+|
Dec 5 01:41:33 localhost cloud-init[978]: | ..+..o+..=|
Dec 5 01:41:33 localhost cloud-init[978]: | .+ .oo.o .|
Dec 5 01:41:33 localhost cloud-init[978]: | o.o.. . |
Dec 5 01:41:33 localhost cloud-init[978]: | E.oo .. |
Dec 5 01:41:33 localhost cloud-init[978]: +----[SHA256]-----+
Dec 5 01:41:33 localhost cloud-init[978]: Generating public/private ed25519 key pair.
Dec 5 01:41:33 localhost cloud-init[978]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 5 01:41:33 localhost cloud-init[978]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 5 01:41:33 localhost cloud-init[978]: The key fingerprint is:
Dec 5 01:41:33 localhost cloud-init[978]: SHA256:39+GTYtkXiwrrmMmQF+0SqOwlBxXb7sFYYZEIbWf+XU root@np0005546419.novalocal
Dec 5 01:41:33 localhost cloud-init[978]: The key's randomart image is:
Dec 5 01:41:33 localhost cloud-init[978]: +--[ED25519 256]--+
Dec 5 01:41:33 localhost cloud-init[978]: | .=*o+ |
Dec 5 01:41:33 localhost cloud-init[978]: | . ...=.. |
Dec 5 01:41:33 localhost cloud-init[978]: | . + ..+. |
Dec 5 01:41:33 localhost cloud-init[978]: | = . ooo= |
Dec 5 01:41:33 localhost cloud-init[978]: | . + + S= . ..E |
Dec 5 01:41:33 localhost cloud-init[978]: | . o o .+..+.o.|
Dec 5 01:41:33 localhost cloud-init[978]: | . ...= =+.|
Dec 5 01:41:33 localhost cloud-init[978]: | . + . =.oo|
Dec 5 01:41:33 localhost cloud-init[978]: | +.+.. ...|
Dec 5 01:41:33 localhost cloud-init[978]: +----[SHA256]-----+
Dec 5 01:41:33 localhost sm-notify[1128]: Version 2.5.4 starting
Dec 5 01:41:33 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Dec 5 01:41:33 localhost systemd[1]: Reached target Cloud-config availability.
Dec 5 01:41:33 localhost systemd[1]: Reached target Network is Online.
Dec 5 01:41:33 localhost systemd[1]: Starting Apply the settings specified in cloud-config...
Dec 5 01:41:33 localhost sshd[1129]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 01:41:33 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Dec 5 01:41:33 localhost systemd[1]: Starting Crash recovery kernel arming...
Dec 5 01:41:33 localhost systemd[1]: Starting Notify NFS peers of a restart...
Dec 5 01:41:33 localhost systemd[1]: Starting OpenSSH server daemon...
Dec 5 01:41:33 localhost systemd[1]: Starting Permit User Sessions...
Dec 5 01:41:33 localhost systemd[1]: Started Notify NFS peers of a restart.
Dec 5 01:41:33 localhost systemd[1]: Finished Permit User Sessions.
Dec 5 01:41:33 localhost systemd[1]: Started Command Scheduler.
Dec 5 01:41:33 localhost systemd[1]: Started Getty on tty1.
Dec 5 01:41:33 localhost systemd[1]: Started Serial Getty on ttyS0.
Dec 5 01:41:33 localhost systemd[1]: Reached target Login Prompts.
Dec 5 01:41:33 localhost systemd[1]: Started OpenSSH server daemon.
Dec 5 01:41:33 localhost systemd[1]: Reached target Multi-User System.
Dec 5 01:41:33 localhost systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 5 01:41:33 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 5 01:41:33 localhost systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 5 01:41:33 localhost kdumpctl[1133]: kdump: No kdump initial ramdisk found.
Dec 5 01:41:33 localhost kdumpctl[1133]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Dec 5 01:41:33 localhost cloud-init[1239]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Fri, 05 Dec 2025 06:41:33 +0000. Up 8.62 seconds.
Dec 5 01:41:33 localhost systemd[1]: Finished Apply the settings specified in cloud-config.
Dec 5 01:41:33 localhost systemd[1]: Starting Execute cloud user/final scripts...
Dec 5 01:41:33 localhost cloud-init[1413]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Fri, 05 Dec 2025 06:41:33 +0000. Up 8.97 seconds.
Dec 5 01:41:33 localhost dracut[1416]: dracut-057-21.git20230214.el9
Dec 5 01:41:33 localhost cloud-init[1433]: #############################################################
Dec 5 01:41:33 localhost cloud-init[1434]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 5 01:41:33 localhost cloud-init[1436]: 256 SHA256:Mz5Xzzg6UVDd2nh73DBq18EpyZYSwKPA/wScCfu7O6k root@np0005546419.novalocal (ECDSA)
Dec 5 01:41:33 localhost cloud-init[1438]: 256 SHA256:39+GTYtkXiwrrmMmQF+0SqOwlBxXb7sFYYZEIbWf+XU root@np0005546419.novalocal (ED25519)
Dec 5 01:41:33 localhost cloud-init[1443]: 3072 SHA256:ct3cAPbOukukTItA56bSCG1+a5w6D9DigsluZvLijWE root@np0005546419.novalocal (RSA)
Dec 5 01:41:33 localhost cloud-init[1445]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 5 01:41:33 localhost cloud-init[1447]: #############################################################
Dec 5 01:41:33 localhost dracut[1418]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Dec 5 01:41:34 localhost cloud-init[1413]: Cloud-init v. 22.1-9.el9 finished at Fri, 05 Dec 2025 06:41:34 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 9.20 seconds
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 5 01:41:34 localhost systemd[1]: Reloading Network Manager...
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 5 01:41:34 localhost NetworkManager[789]: [1764916894.1620] audit: op="reload" arg="0" pid=1572 uid=0 result="success"
Dec 5 01:41:34 localhost NetworkManager[789]: [1764916894.1628] config: signal: SIGHUP (no changes from disk)
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 5 01:41:34 localhost systemd[1]: Reloaded Network Manager.
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 5 01:41:34 localhost systemd[1]: Finished Execute cloud user/final scripts.
Dec 5 01:41:34 localhost systemd[1]: Reached target Cloud-init target.
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: memstrack is not available
Dec 5 01:41:34 localhost dracut[1418]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 5 01:41:34 localhost dracut[1418]: memstrack is not available
Dec 5 01:41:34 localhost dracut[1418]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 5 01:41:34 localhost dracut[1418]: *** Including module: systemd ***
Dec 5 01:41:35 localhost dracut[1418]: *** Including module: systemd-initrd ***
Dec 5 01:41:35 localhost dracut[1418]: *** Including module: i18n ***
Dec 5 01:41:35 localhost dracut[1418]: No KEYMAP configured.
Dec 5 01:41:35 localhost dracut[1418]: *** Including module: drm ***
Dec 5 01:41:35 localhost dracut[1418]: *** Including module: prefixdevname ***
Dec 5 01:41:35 localhost dracut[1418]: *** Including module: kernel-modules ***
Dec 5 01:41:35 localhost chronyd[765]: Selected source 138.197.164.54 (2.rhel.pool.ntp.org)
Dec 5 01:41:35 localhost chronyd[765]: System clock TAI offset set to 37 seconds
Dec 5 01:41:36 localhost dracut[1418]: *** Including module: kernel-modules-extra ***
Dec 5 01:41:36 localhost dracut[1418]: *** Including module: qemu ***
Dec 5 01:41:36 localhost dracut[1418]: *** Including module: fstab-sys ***
Dec 5 01:41:36 localhost dracut[1418]: *** Including module: rootfs-block ***
Dec 5 01:41:36 localhost dracut[1418]: *** Including module: terminfo ***
Dec 5 01:41:36 localhost dracut[1418]: *** Including module: udev-rules ***
Dec 5 01:41:36 localhost dracut[1418]: Skipping udev rule: 91-permissions.rules
Dec 5 01:41:36 localhost dracut[1418]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 5 01:41:36 localhost dracut[1418]: *** Including module: virtiofs ***
Dec 5 01:41:36 localhost dracut[1418]: *** Including module: dracut-systemd ***
Dec 5 01:41:36 localhost dracut[1418]: *** Including module: usrmount ***
Dec 5 01:41:36 localhost dracut[1418]: *** Including module: base ***
Dec 5 01:41:36 localhost dracut[1418]: *** Including module: fs-lib ***
Dec 5 01:41:37 localhost dracut[1418]: *** Including module: kdumpbase ***
Dec 5 01:41:37 localhost dracut[1418]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl module: mangling fw_dir
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 5 01:41:37 localhost sshd[2794]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: configuration "intel" is ignored
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 5 01:41:37 localhost sshd[2810]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 01:41:37 localhost sshd[2825]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 5 01:41:37 localhost sshd[2837]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 01:41:37 localhost sshd[2848]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 5 01:41:37 localhost sshd[2859]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 01:41:37 localhost sshd[2876]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 5 01:41:37 localhost sshd[2916]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 5 01:41:37 localhost sshd[2936]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 5 01:41:37 localhost dracut[1418]: microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Dec 5 01:41:37 localhost dracut[1418]: *** Including module: shutdown ***
Dec 5 01:41:37 localhost dracut[1418]: *** Including module: squash ***
Dec 5 01:41:37 localhost dracut[1418]: *** Including modules done ***
Dec 5 01:41:37 localhost dracut[1418]: *** Installing kernel module dependencies ***
Dec 5 01:41:38 localhost dracut[1418]: *** Installing kernel module dependencies done ***
Dec 5 01:41:38 localhost dracut[1418]: *** Resolving executable dependencies ***
Dec 5 01:41:39 localhost dracut[1418]: *** Resolving executable dependencies done ***
Dec 5 01:41:39 localhost dracut[1418]: *** Hardlinking files ***
Dec 5 01:41:39 localhost dracut[1418]: Mode: real
Dec 5 01:41:39 localhost dracut[1418]: Files: 1099
Dec 5 01:41:39 localhost dracut[1418]: Linked: 3 files
Dec 5 01:41:39 localhost dracut[1418]: Compared: 0 xattrs
Dec 5 01:41:39 localhost dracut[1418]: Compared: 373 files
Dec 5 01:41:39 localhost dracut[1418]: Saved: 61.04 KiB
Dec 5 01:41:39 localhost dracut[1418]: Duration: 0.027647 seconds
Dec 5 01:41:39 localhost dracut[1418]: *** Hardlinking files done ***
Dec 5 01:41:39 localhost dracut[1418]: Could not find 'strip'. Not stripping the initramfs.
Dec 5 01:41:39 localhost dracut[1418]: *** Generating early-microcode cpio image ***
Dec 5 01:41:39 localhost dracut[1418]: *** Constructing AuthenticAMD.bin ***
Dec 5 01:41:39 localhost dracut[1418]: *** Store current command line parameters ***
Dec 5 01:41:39 localhost dracut[1418]: Stored kernel commandline:
Dec 5 01:41:39 localhost dracut[1418]: No dracut internal kernel commandline stored in the initramfs
Dec 5 01:41:39 localhost dracut[1418]: *** Install squash loader ***
Dec 5 01:41:40 localhost dracut[1418]: *** Squashing the files inside the initramfs ***
Dec 5 01:41:40 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 5 01:41:40 localhost dracut[1418]: *** Squashing the files inside the initramfs done ***
Dec 5 01:41:40 localhost dracut[1418]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Dec 5 01:41:41 localhost dracut[1418]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Dec 5 01:41:41 localhost kdumpctl[1133]: kdump: kexec: loaded kdump kernel
Dec 5 01:41:41 localhost kdumpctl[1133]: kdump: Starting kdump: [OK]
Dec 5 01:41:41 localhost systemd[1]: Finished Crash recovery kernel arming.
Dec 5 01:41:41 localhost systemd[1]: Startup finished in 1.182s (kernel) + 1.930s (initrd) + 13.671s (userspace) = 16.784s.
Dec 5 01:42:00 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 5 01:44:10 localhost systemd[1]: Unmounting EFI System Partition Automount...
Dec 5 01:44:10 localhost systemd[1]: efi.mount: Deactivated successfully.
Dec 5 01:44:10 localhost systemd[1]: Unmounted EFI System Partition Automount.
Dec 5 01:45:53 localhost sshd[4176]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 01:46:21 localhost sshd[4177]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 01:46:21 localhost systemd[1]: Created slice User Slice of UID 1000.
Dec 5 01:46:21 localhost systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 5 01:46:21 localhost systemd-logind[760]: New session 1 of user zuul.
Dec 5 01:46:21 localhost systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 5 01:46:21 localhost systemd[1]: Starting User Manager for UID 1000...
Dec 5 01:46:21 localhost systemd[4181]: Queued start job for default target Main User Target.
Dec 5 01:46:21 localhost systemd[4181]: Created slice User Application Slice.
Dec 5 01:46:21 localhost systemd[4181]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 5 01:46:21 localhost systemd[4181]: Started Daily Cleanup of User's Temporary Directories.
Dec 5 01:46:21 localhost systemd[4181]: Reached target Paths.
Dec 5 01:46:21 localhost systemd[4181]: Reached target Timers.
Dec 5 01:46:21 localhost systemd[4181]: Starting D-Bus User Message Bus Socket...
Dec 5 01:46:21 localhost systemd[4181]: Starting Create User's Volatile Files and Directories...
Dec 5 01:46:21 localhost systemd[4181]: Finished Create User's Volatile Files and Directories.
Dec 5 01:46:21 localhost systemd[4181]: Listening on D-Bus User Message Bus Socket.
Dec 5 01:46:21 localhost systemd[4181]: Reached target Sockets.
Dec 5 01:46:21 localhost systemd[4181]: Reached target Basic System.
Dec 5 01:46:21 localhost systemd[4181]: Reached target Main User Target.
Dec 5 01:46:21 localhost systemd[4181]: Startup finished in 107ms.
Dec 5 01:46:22 localhost systemd[1]: Started User Manager for UID 1000.
Dec 5 01:46:22 localhost systemd[1]: Started Session 1 of User zuul.
Dec 5 01:46:22 localhost python3[4233]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 01:46:30 localhost python3[4251]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 01:46:39 localhost python3[4303]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 01:46:40 localhost python3[4333]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 5 01:46:42 localhost python3[4349]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 01:46:43 localhost python3[4363]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 01:46:44 localhost python3[4422]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 01:46:45 localhost python3[4463]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764917204.4415638-390-267865725939231/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=cb8438a2b38642fd8d5aa4c34b846ebc_id_rsa follow=False checksum=279822aa185303a1622fd64f0c6305b91ca04c54 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 01:46:46 localhost python3[4536]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 01:46:46 localhost python3[4577]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764917206.0069811-486-80467609326734/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=cb8438a2b38642fd8d5aa4c34b846ebc_id_rsa.pub follow=False checksum=a810c4b730db53312f48868578c3039315af7db6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 01:46:48 localhost python3[4605]: ansible-ping Invoked with data=pong
Dec 5 01:46:50 localhost python3[4619]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 01:46:53 localhost python3[4671]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 5 01:46:56 localhost python3[4693]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 01:46:56 localhost python3[4707]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 01:46:56 localhost python3[4721]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 01:46:57 localhost python3[4735]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 01:46:57 localhost python3[4749]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 01:46:58 localhost python3[4764]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 01:47:00 localhost python3[4780]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 01:47:02 localhost python3[4828]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 01:47:02 localhost python3[4871]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764917222.1134434-98-125089660372541/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 01:47:10 localhost python3[4899]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 01:47:10 localhost python3[4913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 01:47:10 localhost python3[4927]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 01:47:11 localhost python3[4941]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 01:47:11 localhost python3[4955]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 01:47:11 localhost python3[4969]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 01:47:11 localhost python3[4983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 01:47:12 localhost python3[4997]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 01:47:12 localhost python3[5011]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None
key_options=None comment=None Dec 5 01:47:12 localhost python3[5025]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:13 localhost python3[5039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:13 localhost python3[5053]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:13 localhost python3[5067]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:13 localhost python3[5081]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:14 localhost python3[5095]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:14 localhost python3[5109]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:14 localhost python3[5123]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:15 localhost python3[5137]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:15 localhost python3[5151]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:15 localhost python3[5165]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:15 localhost python3[5179]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:16 localhost python3[5193]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:16 localhost python3[5207]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:16 localhost python3[5221]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:16 localhost python3[5235]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:17 localhost python3[5249]: ansible-authorized_key Invoked with 
user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 01:47:19 localhost python3[5265]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Dec 5 01:47:19 localhost systemd[1]: Starting Time & Date Service... Dec 5 01:47:19 localhost systemd[1]: Started Time & Date Service. Dec 5 01:47:19 localhost systemd-timedated[5267]: Changed time zone to 'UTC' (UTC). Dec 5 01:47:21 localhost python3[5286]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 01:47:22 localhost python3[5332]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 01:47:22 localhost python3[5373]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764917242.2799768-493-87572732624871/source _original_basename=tmp0b69ggu4 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 01:47:24 localhost python3[5433]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 01:47:24 localhost python3[5474]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764917243.9194486-582-26315927236915/source _original_basename=tmpw2ro7s9n follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 01:47:26 localhost python3[5536]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 01:47:26 localhost python3[5579]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764917246.042049-729-274093327966003/source _original_basename=tmpys8gr5zz follow=False checksum=675da38221554070fad736c9d717667e6ac7d120 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 01:47:27 localhost python3[5607]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None 
stdin=None Dec 5 01:47:28 localhost python3[5623]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 01:47:29 localhost python3[5673]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 01:47:29 localhost python3[5716]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764917249.2936637-852-173813446756414/source _original_basename=tmppy451oh0 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 01:47:31 localhost python3[5747]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-1075-137b-000000000023-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 01:47:42 localhost python3[5765]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-1075-137b-000000000024-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None Dec 5 01:47:49 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Dec 5 01:47:54 localhost python3[5786]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 01:48:12 localhost python3[5804]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 01:48:29 localhost systemd[4181]: Starting Mark boot as successful... Dec 5 01:48:29 localhost systemd[4181]: Finished Mark boot as successful. Dec 5 01:49:12 localhost systemd-logind[760]: Session 1 logged out. Waiting for processes to exit. Dec 5 01:51:29 localhost systemd[4181]: Created slice User Background Tasks Slice. Dec 5 01:51:29 localhost systemd[4181]: Starting Cleanup of User's Temporary Files and Directories... Dec 5 01:51:29 localhost systemd[4181]: Finished Cleanup of User's Temporary Files and Directories. 
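A note on the Ansible entries above: the file/copy modules log `mode` as a plain decimal integer rather than the usual octal string, so mode=448 on /home/zuul/.ssh is 0700, mode=384 on id_rsa is 0600, mode=420 on id_rsa.pub is 0644, mode=493 on the zuul-output directories is 0755, mode=511 on /etc/nodepool is 0777, and mode=288 on the sudoers drop-in is 0440. Likewise, checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 recorded for the /etc/nodepool/sub_nodes and sub_nodes_private copies is the SHA-1 of empty input, i.e. those files were deployed empty (no subnodes). A quick Python sanity check of both facts:

```python
import hashlib

# Decimal modes as logged by the Ansible file/copy modules -> octal permissions.
for mode in (448, 384, 420, 493, 511, 288):
    print(mode, oct(mode))  # 448 0o700, 384 0o600, 420 0o644, 493 0o755, ...

# SHA-1 of empty input matches the checksum logged for the sub_nodes copies,
# confirming those files were written with no content.
print(hashlib.sha1(b"").hexdigest())  # da39a3ee5e6b4b0d3255bfef95601890afd80709
```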
Dec 5 01:51:29 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 Dec 5 01:51:29 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0x0000-0x003f] Dec 5 01:51:29 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff] Dec 5 01:51:29 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref] Dec 5 01:51:29 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] Dec 5 01:51:29 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref] Dec 5 01:51:29 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref] Dec 5 01:51:29 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff] Dec 5 01:51:29 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io 0x1000-0x103f] Dec 5 01:51:29 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003) Dec 5 01:51:29 localhost NetworkManager[789]: [1764917489.8576] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Dec 5 01:51:29 localhost systemd-udevd[5807]: Network interface NamePolicy= disabled on kernel command line. Dec 5 01:51:29 localhost NetworkManager[789]: [1764917489.8678] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Dec 5 01:51:29 localhost NetworkManager[789]: [1764917489.8706] settings: (eth1): created default wired connection 'Wired connection 1' Dec 5 01:51:29 localhost NetworkManager[789]: [1764917489.8710] device (eth1): carrier: link connected Dec 5 01:51:29 localhost NetworkManager[789]: [1764917489.8713] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Dec 5 01:51:29 localhost NetworkManager[789]: [1764917489.8718] policy: auto-activating connection 'Wired connection 1' (c23ab6f1-f5f7-3703-98f4-3dea3c2f7709) Dec 5 01:51:29 localhost NetworkManager[789]: [1764917489.8724] device (eth1): Activation: starting connection 'Wired connection 1' (c23ab6f1-f5f7-3703-98f4-3dea3c2f7709) Dec 5 01:51:29 localhost NetworkManager[789]: [1764917489.8725] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Dec 5 01:51:29 localhost NetworkManager[789]: [1764917489.8729] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Dec 5 01:51:29 localhost NetworkManager[789]: [1764917489.8735] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Dec 5 01:51:29 localhost NetworkManager[789]: [1764917489.8739] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Dec 5 01:51:30 localhost sshd[5810]: main: sshd: ssh-rsa algorithm is disabled Dec 5 01:51:30 localhost systemd-logind[760]: New session 3 of user zuul. Dec 5 01:51:30 localhost systemd[1]: Started Session 3 of User zuul. 
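The kernel entries above record a virtio NIC (1af4:1000 at 00:07.0) being hot-plugged, which NetworkManager picks up as eth1; the very next task runs `ip -j link` to inventory the interfaces. A minimal sketch of consuming that command's output, assuming iproute2's standard JSON field names:

```python
import json
import subprocess

# `ip -j link` emits the link table as JSON; list each interface and its
# state so a playbook can spot the freshly attached eth1.
links = json.loads(subprocess.check_output(["ip", "-j", "link"], text=True))
for link in links:
    print(link["ifname"], link.get("operstate"), link.get("address"))
```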
Dec 5 01:51:30 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready Dec 5 01:51:30 localhost python3[5827]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-be73-2a20-00000000039b-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 01:51:44 localhost python3[5877]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 01:51:44 localhost python3[5920]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764917503.8418117-435-64393736617642/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=33a85e1ab27255d2610bea5b8490ed8d57d28cfd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 01:51:45 localhost python3[5950]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 01:51:45 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully. Dec 5 01:51:45 localhost systemd[1]: Stopped Network Manager Wait Online. Dec 5 01:51:45 localhost systemd[1]: Stopping Network Manager Wait Online... Dec 5 01:51:45 localhost systemd[1]: Stopping Network Manager... Dec 5 01:51:45 localhost NetworkManager[789]: [1764917505.1871] caught SIGTERM, shutting down normally. Dec 5 01:51:45 localhost NetworkManager[789]: [1764917505.1932] dhcp4 (eth0): canceled DHCP transaction Dec 5 01:51:45 localhost NetworkManager[789]: [1764917505.1933] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Dec 5 01:51:45 localhost NetworkManager[789]: [1764917505.1933] dhcp4 (eth0): state changed no lease Dec 5 01:51:45 localhost NetworkManager[789]: [1764917505.1936] manager: NetworkManager state is now CONNECTING Dec 5 01:51:45 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Dec 5 01:51:45 localhost NetworkManager[789]: [1764917505.2023] dhcp4 (eth1): canceled DHCP transaction Dec 5 01:51:45 localhost NetworkManager[789]: [1764917505.2024] dhcp4 (eth1): state changed no lease Dec 5 01:51:45 localhost NetworkManager[789]: [1764917505.2076] exiting (success) Dec 5 01:51:45 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Dec 5 01:51:45 localhost systemd[1]: NetworkManager.service: Deactivated successfully. Dec 5 01:51:45 localhost systemd[1]: Stopped Network Manager. Dec 5 01:51:45 localhost systemd[1]: NetworkManager.service: Consumed 3.676s CPU time. Dec 5 01:51:45 localhost systemd[1]: Starting Network Manager... Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.2504] NetworkManager (version 1.42.2-1.el9) is starting... 
(after a restart, boot:3e2b16c7-3283-4f81-b3d8-c26351ffef2c) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.2507] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.2524] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Dec 5 01:51:45 localhost systemd[1]: Started Network Manager. Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.2567] manager[0x55dc9b628090]: monitoring kernel firmware directory '/lib/firmware'. Dec 5 01:51:45 localhost systemd[1]: Starting Network Manager Wait Online... Dec 5 01:51:45 localhost systemd[1]: Starting Hostname Service... Dec 5 01:51:45 localhost systemd[1]: Started Hostname Service. Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3421] hostname: hostname: using hostnamed Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3422] hostname: static hostname changed from (none) to "np0005546419.novalocal" Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3426] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3431] manager[0x55dc9b628090]: rfkill: Wi-Fi hardware radio set enabled Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3431] manager[0x55dc9b628090]: rfkill: WWAN hardware radio set enabled Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3484] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3485] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3488] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3491] manager: Networking is enabled by state file Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3500] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so") Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3501] settings: Loaded settings plugin: keyfile (internal) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3579] dhcp: init: Using DHCP client 'internal' Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3584] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3603] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3620] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3643] device (lo): Activation: starting connection 'lo' (17dc3b4d-4509-4175-9eed-28194dbd11b8) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3659] device (eth0): carrier: link connected Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3666] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3683] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3683] device (eth0): state change: 
unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3702] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3730] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3748] device (eth1): carrier: link connected Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3755] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3774] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (c23ab6f1-f5f7-3703-98f4-3dea3c2f7709) (indicated) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3774] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3795] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3814] device (eth1): Activation: starting connection 'Wired connection 1' (c23ab6f1-f5f7-3703-98f4-3dea3c2f7709) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3868] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3871] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3874] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3878] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3885] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3890] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3894] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3978] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3985] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3989] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.3998] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.4001] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.4018] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.4025] device (lo): state change: secondaries -> activated (reason 'none', 
sys-iface-state: 'external') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.4031] device (lo): Activation: successful, device activated. Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.4039] dhcp4 (eth0): state changed new lease, address=38.102.83.210 Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.4043] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.4148] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.4189] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.4191] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.4198] manager: NetworkManager state is now CONNECTED_SITE Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.4205] device (eth0): Activation: successful, device activated. Dec 5 01:51:45 localhost NetworkManager[5960]: [1764917505.4210] manager: NetworkManager state is now CONNECTED_GLOBAL Dec 5 01:51:45 localhost python3[6018]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-be73-2a20-000000000120-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 01:51:55 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Dec 5 01:52:15 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 5 01:52:30 localhost NetworkManager[5960]: [1764917550.8261] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Dec 5 01:52:30 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Dec 5 01:52:30 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Dec 5 01:52:30 localhost NetworkManager[5960]: [1764917550.8489] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Dec 5 01:52:30 localhost NetworkManager[5960]: [1764917550.8494] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Dec 5 01:52:30 localhost NetworkManager[5960]: [1764917550.8515] device (eth1): Activation: successful, device activated. Dec 5 01:52:30 localhost NetworkManager[5960]: [1764917550.8526] manager: startup complete Dec 5 01:52:30 localhost systemd[1]: Finished Network Manager Wait Online. Dec 5 01:52:40 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Dec 5 01:52:45 localhost systemd[1]: session-3.scope: Deactivated successfully. Dec 5 01:52:45 localhost systemd[1]: session-3.scope: Consumed 1.445s CPU time. Dec 5 01:52:45 localhost systemd-logind[760]: Session 3 logged out. Waiting for processes to exit. Dec 5 01:52:45 localhost systemd-logind[760]: Removed session 3. Dec 5 01:54:04 localhost sshd[6050]: main: sshd: ssh-rsa algorithm is disabled Dec 5 01:54:04 localhost systemd-logind[760]: New session 4 of user zuul. Dec 5 01:54:04 localhost systemd[1]: Started Session 4 of User zuul. 
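The private-network bring-up above follows the usual keyfile pattern: render a .nmconnection into /etc/NetworkManager/system-connections with mode 0600 (NetworkManager ignores keyfiles readable by group or other), then restart NetworkManager so it re-reads its profiles and assumes the connection, after which eth1 eventually activates and Wait Online finishes. A minimal sketch; the real file is templated from bootstrap-ci-network-nm-connection.nmconnection.j2 and its contents are not visible in the log, so the profile body here is hypothetical:

```python
import subprocess
from pathlib import Path

# Hypothetical minimal keyfile for the "ci-private-network" profile.
keyfile = """[connection]
id=ci-private-network
type=ethernet
interface-name=eth1

[ipv4]
method=auto
"""

dest = Path("/etc/NetworkManager/system-connections/ci-private-network.nmconnection")
dest.write_text(keyfile)
dest.chmod(0o600)  # matches mode=0600 in the log; NM refuses laxer keyfiles
subprocess.run(["systemctl", "restart", "NetworkManager"], check=True)
```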
Dec 5 01:54:04 localhost python3[6101]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 01:54:04 localhost python3[6144]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764917644.2490191-628-64690131639938/source _original_basename=tmptxfpq2mr follow=False checksum=e3566e5142abb120e69cfbdd458d4460af3b26ce backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 01:54:09 localhost systemd[1]: session-4.scope: Deactivated successfully. Dec 5 01:54:09 localhost systemd-logind[760]: Session 4 logged out. Waiting for processes to exit. Dec 5 01:54:09 localhost systemd-logind[760]: Removed session 4. Dec 5 01:54:40 localhost sshd[6160]: main: sshd: ssh-rsa algorithm is disabled Dec 5 01:54:42 localhost sshd[6162]: main: sshd: ssh-rsa algorithm is disabled Dec 5 01:54:44 localhost sshd[6164]: main: sshd: ssh-rsa algorithm is disabled Dec 5 01:54:46 localhost sshd[6166]: main: sshd: ssh-rsa algorithm is disabled Dec 5 01:54:48 localhost sshd[6168]: main: sshd: ssh-rsa algorithm is disabled Dec 5 01:56:29 localhost systemd[1]: Starting Cleanup of Temporary Directories... Dec 5 01:56:29 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Dec 5 01:56:29 localhost systemd[1]: Finished Cleanup of Temporary Directories. Dec 5 01:56:29 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Dec 5 02:01:50 localhost sshd[6190]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:01:50 localhost systemd-logind[760]: New session 5 of user zuul. Dec 5 02:01:50 localhost systemd[1]: Started Session 5 of User zuul. 
Dec 5 02:01:50 localhost python3[6209]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-ac58-428a-000000001d10-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:02:02 localhost python3[6228]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:02:02 localhost python3[6244]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:02:02 localhost python3[6260]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:02:02 localhost python3[6276]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:02:03 localhost python3[6292]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:02:05 localhost python3[6340]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 02:02:05 localhost python3[6383]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764918124.7423964-642-172914205328837/source _original_basename=tmpiovknory follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:02:07 localhost python3[6413]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False 
name=None state=None enabled=None force=None masked=None Dec 5 02:02:07 localhost systemd[1]: Reloading. Dec 5 02:02:07 localhost systemd-rc-local-generator[6434]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 02:02:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 02:02:08 localhost python3[6461]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None Dec 5 02:02:09 localhost python3[6477]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:02:10 localhost python3[6495]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:02:10 localhost python3[6513]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:02:10 localhost python3[6531]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:02:11 localhost python3[6548]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init"; cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system"; cat /sys/fs/cgroup/system.slice/io.max; echo "user"; cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-ac58-428a-000000001d17-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:02:22 localhost python3[6568]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 02:02:25 localhost systemd[1]: session-5.scope: Deactivated successfully. Dec 5 02:02:25 localhost systemd[1]: session-5.scope: Consumed 4.109s CPU time. Dec 5 02:02:25 localhost systemd-logind[760]: Session 5 logged out. Waiting for processes to exit. Dec 5 02:02:25 localhost systemd-logind[760]: Removed session 5. 
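The cgroup work in this session is self-contained: `lsblk -nd -o MAJ:MIN /dev/vda` resolves the virtio disk to 252:0, a system.conf.d override plus daemon-reload is applied (the override's contents are not logged), wait_for confirms io.max exists under system.slice, and the same limit line is then written to io.max in each top-level slice: 18000 read/write IOPS and 262144000 B/s (250 MiB/s) in each direction, followed by a readback. A sketch of the write/readback step, assuming cgroup v2 mounted at /sys/fs/cgroup and root privileges:

```python
import subprocess
from pathlib import Path

# Resolve the disk the limits apply to; on this node /dev/vda is "252:0".
majmin = subprocess.check_output(
    ["lsblk", "-nd", "-o", "MAJ:MIN", "/dev/vda"], text=True).strip()

# 18000 IOPS and 262144000 B/s (250 MiB/s) per direction, as in the log.
limit = f"{majmin} riops=18000 wiops=18000 rbps=262144000 wbps=262144000"

for unit in ("init.scope", "machine.slice", "system.slice", "user.slice"):
    path = Path("/sys/fs/cgroup", unit, "io.max")
    path.write_text(limit + "\n")
    print(unit, "->", path.read_text().strip())  # readback, as the job does
```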
Dec 5 02:03:49 localhost sshd[6575]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:03:49 localhost systemd-logind[760]: New session 6 of user zuul. Dec 5 02:03:49 localhost systemd[1]: Started Session 6 of User zuul. Dec 5 02:03:50 localhost systemd[1]: Starting RHSM dbus service... Dec 5 02:03:50 localhost systemd[1]: Started RHSM dbus service. Dec 5 02:03:50 localhost rhsm-service[6599]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm' Dec 5 02:03:50 localhost rhsm-service[6599]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm' Dec 5 02:03:50 localhost rhsm-service[6599]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm' Dec 5 02:03:50 localhost rhsm-service[6599]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm' Dec 5 02:03:54 localhost rhsm-service[6599]: INFO [subscription_manager.managerlib:90] Consumer created: np0005546419.novalocal (dce74b25-fc83-49c9-a74a-3da4f3fcff46) Dec 5 02:03:54 localhost subscription-manager[6599]: Registered system with identity: dce74b25-fc83-49c9-a74a-3da4f3fcff46 Dec 5 02:03:56 localhost rhsm-service[6599]: INFO [subscription_manager.entcertlib:131] certs updated: Dec 5 02:03:56 localhost rhsm-service[6599]: Total updates: 1 Dec 5 02:03:56 localhost rhsm-service[6599]: Found (local) serial# [] Dec 5 02:03:56 localhost rhsm-service[6599]: Expected (UEP) serial# [4934762706319481090] Dec 5 02:03:56 localhost rhsm-service[6599]: Added (new) Dec 5 02:03:56 localhost rhsm-service[6599]: [sn:4934762706319481090 ( Content Access,) @ /etc/pki/entitlement/4934762706319481090.pem] Dec 5 02:03:56 localhost rhsm-service[6599]: Deleted (rogue): Dec 5 02:03:56 localhost rhsm-service[6599]: Dec 5 02:03:56 localhost subscription-manager[6599]: Added subscription for 'Content Access' contract 'None' Dec 5 02:03:56 localhost subscription-manager[6599]: Added subscription for product ' Content Access' Dec 5 02:03:57 localhost rhsm-service[6599]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm' Dec 5 02:03:57 localhost rhsm-service[6599]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm' Dec 5 02:03:57 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 5 02:03:57 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 5 02:03:57 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 5 02:03:58 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 5 02:03:58 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. 
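In this session the node registers with RHSM over the rhsm D-Bus service: a consumer identity is created (dce74b25-fc83-49c9-a74a-3da4f3fcff46), a single Content Access entitlement certificate (serial 4934762706319481090) is installed, and the recurring cert_sorter warnings only indicate that installed product 479 is absent from the server's response. The log does not show how registration was driven, so the CLI below is just a rough equivalent with placeholder credentials:

```python
import subprocess

# Placeholder org/activation key; the real values are not in the log.
subprocess.run(
    ["subscription-manager", "register",
     "--org", "EXAMPLE_ORG", "--activationkey", "EXAMPLE_KEY"],
    check=True,
)
```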
Dec 5 02:04:06 localhost python3[6690]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163e3b-3c83-91be-d6a9-00000000000d-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:04:07 localhost python3[6709]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 5 02:04:39 localhost setsebool[6784]: The virt_use_nfs policy boolean was changed to 1 by root Dec 5 02:04:39 localhost setsebool[6784]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root Dec 5 02:04:49 localhost kernel: SELinux: Converting 407 SID table entries... Dec 5 02:04:49 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 02:04:49 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 02:04:49 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 02:04:49 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 02:04:49 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 02:04:49 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 02:04:49 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 02:05:02 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=3 res=1 Dec 5 02:05:02 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 5 02:05:02 localhost systemd[1]: Starting man-db-cache-update.service... Dec 5 02:05:02 localhost systemd[1]: Reloading. Dec 5 02:05:02 localhost systemd-rc-local-generator[7630]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 02:05:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 02:05:02 localhost systemd[1]: Queuing reload/restart jobs for marked units… Dec 5 02:05:04 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 5 02:05:10 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 5 02:05:10 localhost systemd[1]: Finished man-db-cache-update.service. Dec 5 02:05:10 localhost systemd[1]: man-db-cache-update.service: Consumed 9.628s CPU time. Dec 5 02:05:10 localhost systemd[1]: run-rb1c575f3b62f486c8b0d6d0f3c86573d.service: Deactivated successfully. Dec 5 02:05:54 localhost podman[18379]: 2025-12-05 07:05:54.491310308 +0000 UTC m=+0.093459909 system refresh Dec 5 02:05:55 localhost systemd[4181]: Starting D-Bus User Message Bus... 
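Installing podman drags in the container SELinux policy, and the setsebool runs logged above happen inside that dnf transaction (presumably a container-selinux scriptlet), which also triggers the SID-table conversion and man-db cache rebuild. The direct equivalent of what the scriptlet did, persisted across reboots:

```python
import subprocess

# Persistently (-P) enable the two virt booleans recorded in the log.
for boolean in ("virt_use_nfs", "virt_sandbox_use_all_caps"):
    subprocess.run(["setsebool", "-P", boolean, "1"], check=True)
```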
Dec 5 02:05:55 localhost dbus-broker-launch[18436]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Dec 5 02:05:55 localhost dbus-broker-launch[18436]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Dec 5 02:05:55 localhost systemd[4181]: Started D-Bus User Message Bus. Dec 5 02:05:55 localhost journal[18436]: Ready Dec 5 02:05:55 localhost systemd[4181]: selinux: avc: op=load_policy lsm=selinux seqno=3 res=1 Dec 5 02:05:55 localhost systemd[4181]: Created slice Slice /user. Dec 5 02:05:55 localhost systemd[4181]: podman-18420.scope: unit configures an IP firewall, but not running as root. Dec 5 02:05:55 localhost systemd[4181]: (This warning is only shown for the first unit using IP firewalling.) Dec 5 02:05:55 localhost systemd[4181]: Started podman-18420.scope. Dec 5 02:05:55 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:05:55 localhost systemd[4181]: Started podman-pause-f63905dc.scope. Dec 5 02:05:59 localhost systemd[1]: session-6.scope: Deactivated successfully. Dec 5 02:05:59 localhost systemd[1]: session-6.scope: Consumed 52.667s CPU time. Dec 5 02:05:59 localhost systemd-logind[760]: Session 6 logged out. Waiting for processes to exit. Dec 5 02:05:59 localhost systemd-logind[760]: Removed session 6. Dec 5 02:06:15 localhost sshd[18440]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:06:15 localhost sshd[18441]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:06:15 localhost sshd[18442]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:06:15 localhost sshd[18444]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:06:15 localhost sshd[18443]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:06:20 localhost sshd[18450]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:06:20 localhost systemd-logind[760]: New session 7 of user zuul. Dec 5 02:06:20 localhost systemd[1]: Started Session 7 of User zuul. Dec 5 02:06:21 localhost python3[18467]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMItrNJD3Qo5RZ9GVEvrDsRHCNoqv/QCdFAerIbUnRZyqMIrTCHiUzK01hguMY3G31c8ICa3d4AuOJ+Y8G23vfU= zuul@np0005546412.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 02:06:22 localhost python3[18483]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMItrNJD3Qo5RZ9GVEvrDsRHCNoqv/QCdFAerIbUnRZyqMIrTCHiUzK01hguMY3G31c8ICa3d4AuOJ+Y8G23vfU= zuul@np0005546412.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 5 02:06:23 localhost systemd[1]: session-7.scope: Deactivated successfully. Dec 5 02:06:23 localhost systemd-logind[760]: Session 7 logged out. Waiting for processes to exit. Dec 5 02:06:23 localhost systemd-logind[760]: Removed session 7. Dec 5 02:08:06 localhost sshd[18485]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:08:06 localhost systemd-logind[760]: New session 8 of user zuul. Dec 5 02:08:06 localhost systemd[1]: Started Session 8 of User zuul. 
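The ansible.posix.authorized_key calls above distribute the controller's ECDSA key to both the zuul and root accounts on this node. At its core the module is an idempotent append: create ~/.ssh if needed and add the key line only when an identical one is absent. A minimal sketch of that behaviour (not the module's actual implementation):

```python
from pathlib import Path

def add_authorized_key(home: str, key_line: str) -> None:
    # Create ~/.ssh (0700) if missing, append the key only if absent,
    # and keep authorized_keys at 0600 -- the module's essential contract.
    ak = Path(home, ".ssh", "authorized_keys")
    ak.parent.mkdir(mode=0o700, exist_ok=True)
    lines = ak.read_text().splitlines() if ak.exists() else []
    if key_line not in lines:
        lines.append(key_line)
        ak.write_text("\n".join(lines) + "\n")
    ak.chmod(0o600)
```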
Dec 5 02:08:07 localhost python3[18504]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 02:08:08 localhost python3[18520]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546419.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 5 02:08:09 localhost python3[18570]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:08:10 localhost python3[18613]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764918489.5248158-132-227057140710790/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=cb8438a2b38642fd8d5aa4c34b846ebc_id_rsa follow=False checksum=279822aa185303a1622fd64f0c6305b91ca04c54 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:08:11 localhost python3[18675]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:08:11 localhost python3[18718]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764918491.1923869-220-42236492749932/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=cb8438a2b38642fd8d5aa4c34b846ebc_id_rsa.pub follow=False checksum=a810c4b730db53312f48868578c3039315af7db6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:08:13 localhost python3[18748]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:08:15 localhost python3[18794]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:08:15 localhost python3[18810]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmpud9o0uuv recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:08:16 localhost python3[18870]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:08:16 localhost python3[18886]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmpiobebz90 recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:08:18 localhost python3[18946]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:08:18 localhost python3[18962]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmpfyof2m2k recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:08:19 localhost systemd[1]: session-8.scope: Deactivated successfully.
Dec 5 02:08:19 localhost systemd[1]: session-8.scope: Consumed 3.484s CPU time.
Dec 5 02:08:19 localhost systemd-logind[760]: Session 8 logged out. Waiting for processes to exit.
Dec 5 02:08:19 localhost systemd-logind[760]: Removed session 8.
Dec 5 02:10:38 localhost sshd[18978]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:10:38 localhost systemd-logind[760]: New session 9 of user zuul.
Dec 5 02:10:38 localhost systemd[1]: Started Session 9 of User zuul.
Dec 5 02:10:38 localhost python3[19024]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:14:29 localhost systemd[1]: Starting dnf makecache...
Dec 5 02:14:29 localhost dnf[19027]: Updating Subscription Management repositories.
Dec 5 02:14:30 localhost dnf[19027]: Failed determining last makecache time.
Dec 5 02:14:31 localhost dnf[19027]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 41 kB/s | 4.1 kB 00:00
Dec 5 02:14:31 localhost dnf[19027]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 53 kB/s | 4.5 kB 00:00
Dec 5 02:14:31 localhost dnf[19027]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 52 kB/s | 4.1 kB 00:00
Dec 5 02:14:31 localhost dnf[19027]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 24 kB/s | 4.5 kB 00:00
Dec 5 02:14:32 localhost dnf[19027]: Metadata cache created.
Dec 5 02:14:32 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 5 02:14:32 localhost systemd[1]: Finished dnf makecache.
Dec 5 02:14:32 localhost systemd[1]: dnf-makecache.service: Consumed 2.585s CPU time.
Dec 5 02:15:38 localhost systemd[1]: session-9.scope: Deactivated successfully.
Dec 5 02:15:38 localhost systemd-logind[760]: Session 9 logged out. Waiting for processes to exit.
Dec 5 02:15:38 localhost systemd-logind[760]: Removed session 9.
Dec 5 02:18:55 localhost sshd[19034]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:18:57 localhost sshd[19036]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:19:00 localhost sshd[19038]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:19:02 localhost sshd[19040]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:19:04 localhost sshd[19042]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:23:22 localhost sshd[19047]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:23:22 localhost systemd-logind[760]: New session 10 of user zuul.
Dec 5 02:23:22 localhost systemd[1]: Started Session 10 of User zuul.
Dec 5 02:23:22 localhost python3[19064]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163e3b-3c83-6324-9d4a-00000000000c-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:23:24 localhost python3[19084]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163e3b-3c83-6324-9d4a-00000000000d-1-overcloudnovacompute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:23:29 localhost python3[19103]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Dec 5 02:23:32 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:23:32 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:24:26 localhost python3[19260]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Dec 5 02:24:29 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:24:29 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:24:38 localhost python3[19460]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Dec 5 02:24:41 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:24:41 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:24:46 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:24:46 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:25:09 localhost python3[19795]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 5 02:25:12 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:25:12 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:25:17 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:25:18 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:25:40 localhost python3[20192]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 5 02:25:42 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:25:43 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:25:48 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:25:48 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:26:12 localhost python3[20531]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-6324-9d4a-000000000013-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:26:18 localhost python3[20550]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 5 02:26:40 localhost kernel: SELinux: Converting 488 SID table entries...
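
[annotation] The five community.general.rhsm_repository tasks and the ansible.legacy.dnf task above reduce to subscription-manager and dnf calls. A hedged equivalent (repository names and the package list are the ones recorded in the log; whether the module batches the enables like this is an assumption):

    subscription-manager repos \
      --enable=rhel-9-for-x86_64-baseos-eus-rpms \
      --enable=rhel-9-for-x86_64-appstream-eus-rpms \
      --enable=rhel-9-for-x86_64-highavailability-eus-rpms \
      --enable=fast-datapath-for-rhel-9-x86_64-rpms \
      --enable=openstack-17.1-for-rhel-9-x86_64-rpms
    dnf -y makecache                                  # update_cache=True
    dnf -y install openvswitch os-net-config ansible-core   # state=present
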
Dec 5 02:26:40 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 5 02:26:40 localhost kernel: SELinux: policy capability open_perms=1
Dec 5 02:26:40 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 5 02:26:40 localhost kernel: SELinux: policy capability always_check_network=0
Dec 5 02:26:40 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 5 02:26:40 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 5 02:26:40 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 5 02:26:40 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=4 res=1
Dec 5 02:26:40 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 5 02:26:43 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 5 02:26:43 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 5 02:26:43 localhost systemd[1]: Reloading.
Dec 5 02:26:43 localhost systemd-rc-local-generator[21212]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:26:43 localhost systemd-sysv-generator[21216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:26:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:26:43 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 5 02:26:44 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 5 02:26:44 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 5 02:26:44 localhost systemd[1]: run-r6fd3ad51e66f4f7caa5c607beff3400a.service: Deactivated successfully.
Dec 5 02:26:45 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 02:26:45 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
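
[annotation] systemd warns above that insights-client-boot.service still uses the deprecated MemoryLimit= directive. A sketch of the usual local workaround, a drop-in carrying the replacement directive until the package ships a fixed unit (the 512M cap is illustrative; the unit's actual value is not shown in this log, and the warning itself persists as long as the packaged file keeps the old directive):

    # Drop-in override for the packaged unit (standard systemd convention).
    mkdir -p /etc/systemd/system/insights-client-boot.service.d
    cat > /etc/systemd/system/insights-client-boot.service.d/memory.conf <<'EOF'
    [Service]
    MemoryMax=512M
    EOF
    systemctl daemon-reload
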
Dec 5 02:27:12 localhost python3[21759]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-6324-9d4a-000000000015-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:27:45 localhost python3[21779]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:27:46 localhost python3[21827]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:27:46 localhost python3[21870]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764919665.8746347-290-256383282043538/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=3358dfc6c6ce646155135d0cad900026cb34ba08 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:27:48 localhost python3[21900]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 5 02:27:49 localhost systemd-journald[618]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation.
Dec 5 02:27:49 localhost systemd-journald[618]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 5 02:27:49 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 5 02:27:49 localhost python3[21921]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 5 02:27:49 localhost python3[21941]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 5 02:27:49 localhost python3[21961]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 5 02:27:50 localhost python3[21981]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 5 02:27:51 localhost python3[22001]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 5 02:27:51 localhost systemd[1]: Starting LSB: Bring up/down networking...
Dec 5 02:27:51 localhost network[22004]: WARN : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 5 02:27:51 localhost network[22015]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 5 02:27:51 localhost network[22004]: WARN : [network] 'network-scripts' will be removed from distribution in near future.
Dec 5 02:27:51 localhost network[22016]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:27:51 localhost network[22004]: WARN : [network] It is advised to switch to 'NetworkManager' instead for network management.
Dec 5 02:27:51 localhost network[22017]: It is advised to switch to 'NetworkManager' instead for network management.
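
[annotation] The five ansible-community.general.nmcli tasks above only ensure the CI NetworkManager profiles are gone (state=absent); the long parameter dumps are the module's defaults. A shell equivalent (connection names from the log; the `|| true` is illustrative, since deleting an already-absent profile fails harmlessly):

    for conn in ci-private-network ci-private-network-{20..23}; do
      nmcli connection delete "$conn" || true
    done
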
Dec 5 02:27:51 localhost NetworkManager[5960]: [1764919671.6005] audit: op="connections-reload" pid=22045 uid=0 result="success"
Dec 5 02:27:51 localhost network[22004]: Bringing up loopback interface: [ OK ]
Dec 5 02:27:51 localhost NetworkManager[5960]: [1764919671.8062] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22133 uid=0 result="success"
Dec 5 02:27:51 localhost network[22004]: Bringing up interface eth0: [ OK ]
Dec 5 02:27:51 localhost systemd[1]: Started LSB: Bring up/down networking.
Dec 5 02:27:52 localhost python3[22174]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 5 02:27:52 localhost systemd[1]: Starting Open vSwitch Database Unit...
Dec 5 02:27:52 localhost chown[22178]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 5 02:27:52 localhost ovs-ctl[22183]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 5 02:27:52 localhost ovs-ctl[22183]: Creating empty database /etc/openvswitch/conf.db [ OK ]
Dec 5 02:27:52 localhost ovs-ctl[22183]: Starting ovsdb-server [ OK ]
Dec 5 02:27:52 localhost ovs-vsctl[22233]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 5 02:27:52 localhost ovs-vsctl[22253]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"22ecc443-b9ab-4c88-a730-5598bd07d403\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\""
Dec 5 02:27:52 localhost ovs-ctl[22183]: Configuring Open vSwitch system IDs [ OK ]
Dec 5 02:27:52 localhost ovs-ctl[22183]: Enabling remote OVSDB managers [ OK ]
Dec 5 02:27:52 localhost ovs-vsctl[22259]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005546419.novalocal
Dec 5 02:27:52 localhost systemd[1]: Started Open vSwitch Database Unit.
Dec 5 02:27:52 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 5 02:27:52 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 5 02:27:52 localhost systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 5 02:27:52 localhost kernel: openvswitch: Open vSwitch switching datapath
Dec 5 02:27:52 localhost ovs-ctl[22303]: Inserting openvswitch module [ OK ]
Dec 5 02:27:52 localhost ovs-ctl[22272]: Starting ovs-vswitchd [ OK ]
Dec 5 02:27:52 localhost ovs-vsctl[22321]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005546419.novalocal
Dec 5 02:27:52 localhost ovs-ctl[22272]: Enabling remote OVSDB managers [ OK ]
Dec 5 02:27:52 localhost systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 5 02:27:52 localhost systemd[1]: Starting Open vSwitch...
Dec 5 02:27:52 localhost systemd[1]: Finished Open vSwitch.
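
[annotation] At this point ovsdb-server and ovs-vswitchd are both running with a freshly created conf.db. A quick way to confirm the state the log reports, using standard OVS and systemd commands (expected values are the ones logged above):

    systemctl is-active ovsdb-server ovs-vswitchd openvswitch
    ovs-vsctl show              # bridges/ports; still empty until os-net-config runs below
    ovs-vsctl list Open_vSwitch # ovs_version 3.3.6, external_ids with system-id and hostname
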
Dec 5 02:27:55 localhost python3[22339]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-6324-9d4a-00000000001a-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:27:56 localhost NetworkManager[5960]: [1764919676.3667] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22497 uid=0 result="success"
Dec 5 02:27:56 localhost ifup[22498]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:27:56 localhost ifup[22499]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:27:56 localhost ifup[22500]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:27:56 localhost NetworkManager[5960]: [1764919676.3900] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22506 uid=0 result="success"
Dec 5 02:27:56 localhost ovs-vsctl[22508]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:c7:7c:3e -- set bridge br-ex fail_mode=standalone -- del-controller br-ex
Dec 5 02:27:56 localhost kernel: device ovs-system entered promiscuous mode
Dec 5 02:27:56 localhost NetworkManager[5960]: [1764919676.4115] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4)
Dec 5 02:27:56 localhost systemd-udevd[22510]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 02:27:56 localhost kernel: Timeout policy base is empty
Dec 5 02:27:56 localhost kernel: Failed to associated timeout policy `ovs_test_tp'
Dec 5 02:27:56 localhost systemd-udevd[22521]: Network interface NamePolicy= disabled on kernel command line.
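
[annotation] os-net-config reads the YAML copied earlier to /etc/os-net-config/tripleo_config.yaml, renders ifcfg files under /etc/sysconfig/network-scripts/, and drives ovs-vsctl; the per-bridge and per-VLAN calls it issues are all visible in the entries below. The same layout expressed as standalone commands (bridge name, MAC and VLAN tags are exactly those recorded in this log; running them by hand instead of via os-net-config is the assumption here):

    ovs-vsctl -t 10 -- --may-exist add-br br-ex \
      -- set bridge br-ex other-config:mac-table-size=50000 \
      -- set bridge br-ex other-config:hwaddr=fa:16:3e:c7:7c:3e \
      -- set bridge br-ex fail_mode=standalone -- del-controller br-ex
    ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
    for tag in 20 21 22 23 44; do         # internal VLAN ports on br-ex
      ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan$tag \
        -- add-port br-ex vlan$tag tag=$tag -- set Interface vlan$tag type=internal
    done
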
Dec 5 02:27:56 localhost kernel: device br-ex entered promiscuous mode
Dec 5 02:27:56 localhost NetworkManager[5960]: [1764919676.4562] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Dec 5 02:27:56 localhost NetworkManager[5960]: [1764919676.4839] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22535 uid=0 result="success"
Dec 5 02:27:56 localhost NetworkManager[5960]: [1764919676.5038] device (br-ex): carrier: link connected
Dec 5 02:27:59 localhost NetworkManager[5960]: [1764919679.5559] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22564 uid=0 result="success"
Dec 5 02:27:59 localhost NetworkManager[5960]: [1764919679.6027] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22579 uid=0 result="success"
Dec 5 02:27:59 localhost NET[22604]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Dec 5 02:27:59 localhost NetworkManager[5960]: [1764919679.6912] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed')
Dec 5 02:27:59 localhost NetworkManager[5960]: [1764919679.7022] dhcp4 (eth1): canceled DHCP transaction
Dec 5 02:27:59 localhost NetworkManager[5960]: [1764919679.7023] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 5 02:27:59 localhost NetworkManager[5960]: [1764919679.7023] dhcp4 (eth1): state changed no lease
Dec 5 02:27:59 localhost NetworkManager[5960]: [1764919679.7088] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22613 uid=0 result="success"
Dec 5 02:27:59 localhost ifup[22614]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:27:59 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 5 02:27:59 localhost ifup[22616]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:27:59 localhost ifup[22617]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:27:59 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 5 02:27:59 localhost NetworkManager[5960]: [1764919679.7495] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22631 uid=0 result="success"
Dec 5 02:27:59 localhost NetworkManager[5960]: [1764919679.7970] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22641 uid=0 result="success"
Dec 5 02:27:59 localhost NetworkManager[5960]: [1764919679.8045] device (eth1): carrier: link connected
Dec 5 02:27:59 localhost NetworkManager[5960]: [1764919679.8271] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22650 uid=0 result="success"
Dec 5 02:27:59 localhost ipv6_wait_tentative[22662]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Dec 5 02:28:00 localhost ipv6_wait_tentative[22667]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Dec 5 02:28:01 localhost NetworkManager[5960]: [1764919681.8977] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22676 uid=0 result="success"
Dec 5 02:28:01 localhost ovs-vsctl[22691]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
Dec 5 02:28:01 localhost kernel: device eth1 entered promiscuous mode
Dec 5 02:28:01 localhost NetworkManager[5960]: [1764919681.9688] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22699 uid=0 result="success"
Dec 5 02:28:01 localhost ifup[22700]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:28:01 localhost ifup[22701]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:28:01 localhost ifup[22702]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:28:02 localhost NetworkManager[5960]: [1764919682.0001] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22708 uid=0 result="success"
Dec 5 02:28:02 localhost NetworkManager[5960]: [1764919682.0450] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22718 uid=0 result="success"
Dec 5 02:28:02 localhost ifup[22719]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:28:02 localhost ifup[22720]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:28:02 localhost ifup[22721]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:28:02 localhost NetworkManager[5960]: [1764919682.0782] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22727 uid=0 result="success"
Dec 5 02:28:02 localhost ovs-vsctl[22730]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Dec 5 02:28:02 localhost kernel: device vlan44 entered promiscuous mode
Dec 5 02:28:02 localhost NetworkManager[5960]: [1764919682.1187] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/6)
Dec 5 02:28:02 localhost systemd-udevd[22732]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 02:28:02 localhost NetworkManager[5960]: [1764919682.1450] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22741 uid=0 result="success"
Dec 5 02:28:02 localhost NetworkManager[5960]: [1764919682.1677] device (vlan44): carrier: link connected
Dec 5 02:28:05 localhost NetworkManager[5960]: [1764919685.2202] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22770 uid=0 result="success"
Dec 5 02:28:05 localhost NetworkManager[5960]: [1764919685.2681] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22785 uid=0 result="success"
Dec 5 02:28:05 localhost NetworkManager[5960]: [1764919685.3244] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22806 uid=0 result="success"
Dec 5 02:28:05 localhost ifup[22807]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:28:05 localhost ifup[22808]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:28:05 localhost ifup[22809]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:28:05 localhost NetworkManager[5960]: [1764919685.3490] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22815 uid=0 result="success"
Dec 5 02:28:05 localhost ovs-vsctl[22818]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Dec 5 02:28:05 localhost systemd-udevd[22820]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 02:28:05 localhost kernel: device vlan23 entered promiscuous mode
Dec 5 02:28:05 localhost NetworkManager[5960]: [1764919685.3869] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/7)
Dec 5 02:28:05 localhost NetworkManager[5960]: [1764919685.4103] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22830 uid=0 result="success"
Dec 5 02:28:05 localhost NetworkManager[5960]: [1764919685.4303] device (vlan23): carrier: link connected
Dec 5 02:28:08 localhost NetworkManager[5960]: [1764919688.4824] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22860 uid=0 result="success"
Dec 5 02:28:08 localhost NetworkManager[5960]: [1764919688.5295] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22875 uid=0 result="success"
Dec 5 02:28:08 localhost NetworkManager[5960]: [1764919688.5902] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22896 uid=0 result="success"
Dec 5 02:28:08 localhost ifup[22897]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:28:08 localhost ifup[22898]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:28:08 localhost ifup[22899]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:28:08 localhost NetworkManager[5960]: [1764919688.6210] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22905 uid=0 result="success"
Dec 5 02:28:08 localhost ovs-vsctl[22908]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Dec 5 02:28:08 localhost kernel: device vlan20 entered promiscuous mode
Dec 5 02:28:08 localhost systemd-udevd[22910]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 02:28:08 localhost NetworkManager[5960]: [1764919688.6594] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/8)
Dec 5 02:28:08 localhost NetworkManager[5960]: [1764919688.6847] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22920 uid=0 result="success"
Dec 5 02:28:08 localhost NetworkManager[5960]: [1764919688.7048] device (vlan20): carrier: link connected
Dec 5 02:28:09 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 5 02:28:11 localhost NetworkManager[5960]: [1764919691.7645] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22950 uid=0 result="success"
Dec 5 02:28:11 localhost NetworkManager[5960]: [1764919691.8159] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22965 uid=0 result="success"
Dec 5 02:28:11 localhost NetworkManager[5960]: [1764919691.8806] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22986 uid=0 result="success"
Dec 5 02:28:11 localhost ifup[22987]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:28:11 localhost ifup[22988]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:28:11 localhost ifup[22989]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:28:11 localhost NetworkManager[5960]: [1764919691.9148] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22995 uid=0 result="success"
Dec 5 02:28:11 localhost ovs-vsctl[22998]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Dec 5 02:28:11 localhost kernel: device vlan22 entered promiscuous mode
Dec 5 02:28:11 localhost NetworkManager[5960]: [1764919691.9528] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/9)
Dec 5 02:28:11 localhost systemd-udevd[23000]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 02:28:11 localhost NetworkManager[5960]: [1764919691.9794] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23010 uid=0 result="success"
Dec 5 02:28:12 localhost NetworkManager[5960]: [1764919692.0014] device (vlan22): carrier: link connected
Dec 5 02:28:15 localhost NetworkManager[5960]: [1764919695.0591] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23040 uid=0 result="success"
Dec 5 02:28:15 localhost NetworkManager[5960]: [1764919695.1058] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23055 uid=0 result="success"
Dec 5 02:28:15 localhost NetworkManager[5960]: [1764919695.1639] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23076 uid=0 result="success"
Dec 5 02:28:15 localhost ifup[23077]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:28:15 localhost ifup[23078]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:28:15 localhost ifup[23079]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:28:15 localhost NetworkManager[5960]: [1764919695.1956] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23085 uid=0 result="success"
Dec 5 02:28:15 localhost ovs-vsctl[23088]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Dec 5 02:28:15 localhost NetworkManager[5960]: [1764919695.2391] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/10)
Dec 5 02:28:15 localhost kernel: device vlan21 entered promiscuous mode
Dec 5 02:28:15 localhost systemd-udevd[23090]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 02:28:15 localhost NetworkManager[5960]: [1764919695.2639] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23100 uid=0 result="success"
Dec 5 02:28:15 localhost NetworkManager[5960]: [1764919695.2835] device (vlan21): carrier: link connected
Dec 5 02:28:18 localhost NetworkManager[5960]: [1764919698.3348] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23130 uid=0 result="success"
Dec 5 02:28:18 localhost NetworkManager[5960]: [1764919698.3796] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23145 uid=0 result="success"
Dec 5 02:28:18 localhost NetworkManager[5960]: [1764919698.4420] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23166 uid=0 result="success"
Dec 5 02:28:18 localhost ifup[23167]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:28:18 localhost ifup[23168]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:28:18 localhost ifup[23169]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:28:18 localhost NetworkManager[5960]: [1764919698.4771] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23175 uid=0 result="success"
Dec 5 02:28:18 localhost ovs-vsctl[23178]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Dec 5 02:28:18 localhost NetworkManager[5960]: [1764919698.5321] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23185 uid=0 result="success"
Dec 5 02:28:19 localhost NetworkManager[5960]: [1764919699.5899] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23212 uid=0 result="success"
Dec 5 02:28:19 localhost NetworkManager[5960]: [1764919699.6353] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23227 uid=0 result="success"
Dec 5 02:28:19 localhost NetworkManager[5960]: [1764919699.6953] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23248 uid=0 result="success"
Dec 5 02:28:19 localhost ifup[23249]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:28:19 localhost ifup[23250]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:28:19 localhost ifup[23251]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:28:19 localhost NetworkManager[5960]: [1764919699.7278] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23257 uid=0 result="success"
Dec 5 02:28:19 localhost ovs-vsctl[23260]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Dec 5 02:28:19 localhost NetworkManager[5960]: [1764919699.7851] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23267 uid=0 result="success"
Dec 5 02:28:20 localhost NetworkManager[5960]: [1764919700.8482] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23295 uid=0 result="success"
Dec 5 02:28:20 localhost NetworkManager[5960]: [1764919700.8930] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23310 uid=0 result="success"
Dec 5 02:28:20 localhost NetworkManager[5960]: [1764919700.9450] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23331 uid=0 result="success"
Dec 5 02:28:20 localhost ifup[23332]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:28:20 localhost ifup[23333]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:28:20 localhost ifup[23334]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:28:20 localhost NetworkManager[5960]: [1764919700.9777] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23340 uid=0 result="success"
Dec 5 02:28:21 localhost ovs-vsctl[23343]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Dec 5 02:28:21 localhost NetworkManager[5960]: [1764919701.0343] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23350 uid=0 result="success"
Dec 5 02:28:22 localhost NetworkManager[5960]: [1764919702.0977] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23378 uid=0 result="success"
Dec 5 02:28:22 localhost NetworkManager[5960]: [1764919702.1446] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23393 uid=0 result="success"
Dec 5 02:28:22 localhost NetworkManager[5960]: [1764919702.1957] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23414 uid=0 result="success"
Dec 5 02:28:22 localhost ifup[23415]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:28:22 localhost ifup[23416]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:28:22 localhost ifup[23417]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:28:22 localhost NetworkManager[5960]: [1764919702.2277] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23423 uid=0 result="success"
Dec 5 02:28:22 localhost ovs-vsctl[23426]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Dec 5 02:28:22 localhost NetworkManager[5960]: [1764919702.2814] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23433 uid=0 result="success"
Dec 5 02:28:23 localhost NetworkManager[5960]: [1764919703.3379] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23461 uid=0 result="success"
Dec 5 02:28:23 localhost NetworkManager[5960]: [1764919703.3867] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23476 uid=0 result="success"
Dec 5 02:28:23 localhost NetworkManager[5960]: [1764919703.4451] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23497 uid=0 result="success"
Dec 5 02:28:23 localhost ifup[23498]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 5 02:28:23 localhost ifup[23499]: 'network-scripts' will be removed from distribution in near future.
Dec 5 02:28:23 localhost ifup[23500]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 5 02:28:23 localhost NetworkManager[5960]: [1764919703.4768] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23506 uid=0 result="success"
Dec 5 02:28:23 localhost ovs-vsctl[23509]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Dec 5 02:28:23 localhost NetworkManager[5960]: [1764919703.5386] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23516 uid=0 result="success"
Dec 5 02:28:24 localhost NetworkManager[5960]: [1764919704.5969] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23544 uid=0 result="success"
Dec 5 02:28:24 localhost NetworkManager[5960]: [1764919704.6449] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23559 uid=0 result="success"
Dec 5 02:29:17 localhost python3[23591]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-6324-9d4a-00000000001b-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:29:23 localhost python3[23610]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 02:29:24 localhost python3[23626]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 02:29:25 localhost python3[23640]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 02:29:26 localhost python3[23656]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 5 02:29:27 localhost python3[23670]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname
Dec 5 02:29:27 localhost python3[23685]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005546419.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-6324-9d4a-000000000022-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:29:28 localhost python3[23705]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-6324-9d4a-000000000023-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:29:28 localhost systemd[1]: Starting Hostname Service...
Dec 5 02:29:28 localhost systemd[1]: Started Hostname Service.
Dec 5 02:29:28 localhost systemd-hostnamed[23709]: Hostname set to (static)
Dec 5 02:29:28 localhost NetworkManager[5960]: [1764919768.8533] hostname: static hostname changed from "np0005546419.novalocal" to "np0005546419.localdomain"
Dec 5 02:29:28 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 5 02:29:28 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 5 02:29:30 localhost systemd[1]: session-10.scope: Deactivated successfully.
Dec 5 02:29:30 localhost systemd[1]: session-10.scope: Consumed 1min 46.049s CPU time.
Dec 5 02:29:30 localhost systemd-logind[760]: Session 10 logged out. Waiting for processes to exit.
Dec 5 02:29:30 localhost systemd-logind[760]: Removed session 10.
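
[annotation] The two shell snippets logged just above (with #012 standing for the newlines Ansible collapsed into the log line) derive the short hostname and re-set it under .localdomain. Reassembled as a script:

    hostname="np0005546419.novalocal"
    hostname_str_array=(${hostname//./ })     # split on dots: (np0005546419 novalocal)
    echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname
    hostname=$(cat /home/zuul/ansible_hostname)
    hostnamectl hostname "$hostname.localdomain"   # yields np0005546419.localdomain, as NetworkManager logs
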
Dec 5 02:29:32 localhost sshd[23720]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:29:32 localhost systemd-logind[760]: New session 11 of user zuul.
Dec 5 02:29:32 localhost systemd[1]: Started Session 11 of User zuul.
Dec 5 02:29:33 localhost python3[23737]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Dec 5 02:29:35 localhost systemd[1]: session-11.scope: Deactivated successfully.
Dec 5 02:29:35 localhost systemd-logind[760]: Session 11 logged out. Waiting for processes to exit.
Dec 5 02:29:35 localhost systemd-logind[760]: Removed session 11.
Dec 5 02:29:38 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 5 02:29:58 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 5 02:30:13 localhost sshd[23741]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:30:13 localhost systemd-logind[760]: New session 12 of user zuul.
Dec 5 02:30:13 localhost systemd[1]: Started Session 12 of User zuul.
Dec 5 02:30:13 localhost python3[23760]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 5 02:30:17 localhost systemd[1]: Reloading.
Dec 5 02:30:17 localhost systemd-rc-local-generator[23799]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:30:17 localhost systemd-sysv-generator[23805]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:30:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:30:17 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 5 02:30:17 localhost systemd[1]: Reloading.
Dec 5 02:30:17 localhost systemd-rc-local-generator[23839]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:30:17 localhost systemd-sysv-generator[23842]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:30:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:30:17 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 5 02:30:18 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 5 02:30:18 localhost systemd[1]: Reloading.
Dec 5 02:30:18 localhost systemd-sysv-generator[23882]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:30:18 localhost systemd-rc-local-generator[23878]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:30:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:30:18 localhost systemd[1]: Listening on LVM2 poll daemon socket.
Dec 5 02:30:18 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 5 02:30:18 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 5 02:30:18 localhost systemd[1]: Reloading.
Dec 5 02:30:18 localhost systemd-sysv-generator[23944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:30:18 localhost systemd-rc-local-generator[23940]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:30:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:30:18 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 5 02:30:18 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 5 02:30:19 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 5 02:30:19 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 5 02:30:19 localhost systemd[1]: run-r134526be365a4f0cbcd873169ba3acb0.service: Deactivated successfully.
Dec 5 02:30:19 localhost systemd[1]: run-r0b5d89f0f75648d1888870be35764a68.service: Deactivated successfully.
Dec 5 02:31:10 localhost sshd[24532]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:31:19 localhost systemd[1]: session-12.scope: Deactivated successfully.
Dec 5 02:31:19 localhost systemd[1]: session-12.scope: Consumed 4.657s CPU time.
Dec 5 02:31:19 localhost systemd-logind[760]: Session 12 logged out. Waiting for processes to exit.
Dec 5 02:31:19 localhost systemd-logind[760]: Removed session 12.
Dec 5 02:43:08 localhost sshd[24539]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:43:10 localhost sshd[24541]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:43:12 localhost sshd[24543]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:43:14 localhost sshd[24545]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:43:16 localhost sshd[24547]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:47:18 localhost sshd[24552]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:47:18 localhost systemd-logind[760]: New session 13 of user zuul.
Dec 5 02:47:18 localhost systemd[1]: Started Session 13 of User zuul.
Dec 5 02:47:18 localhost python3[24600]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 02:47:20 localhost python3[24687]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 5 02:47:24 localhost python3[24704]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 02:47:24 localhost python3[24720]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:47:24 localhost kernel: loop: module loaded Dec 5 02:47:24 localhost kernel: loop3: detected capacity change from 0 to 14680064 Dec 5 02:47:25 localhost python3[24745]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:47:25 localhost lvm[24748]: PV /dev/loop3 not used. Dec 5 02:47:25 localhost lvm[24750]: PV /dev/loop3 online, VG ceph_vg0 is complete. Dec 5 02:47:25 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0. Dec 5 02:47:25 localhost lvm[24759]: 1 logical volume(s) in volume group "ceph_vg0" now active Dec 5 02:47:25 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully. Dec 5 02:47:26 localhost python3[24807]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 02:47:26 localhost python3[24850]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764920845.8767185-55038-261293106046638/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:47:27 localhost python3[24880]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 02:47:27 localhost systemd[1]: Reloading. Dec 5 02:47:27 localhost systemd-rc-local-generator[24907]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 5 02:47:27 localhost systemd-sysv-generator[24912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 02:47:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 02:47:27 localhost systemd[1]: Starting Ceph OSD losetup... Dec 5 02:47:27 localhost bash[24921]: /dev/loop3: [64516]:8400144 (/var/lib/ceph-osd-0.img) Dec 5 02:47:27 localhost systemd[1]: Finished Ceph OSD losetup. Dec 5 02:47:27 localhost lvm[24922]: PV /dev/loop3 online, VG ceph_vg0 is complete. Dec 5 02:47:27 localhost lvm[24922]: VG ceph_vg0 finished Dec 5 02:47:28 localhost python3[24939]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 5 02:47:31 localhost python3[24956]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 02:47:32 localhost python3[24972]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:47:32 localhost kernel: loop4: detected capacity change from 0 to 14680064 Dec 5 02:47:32 localhost python3[24994]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:47:32 localhost lvm[24997]: PV /dev/loop4 not used. Dec 5 02:47:32 localhost lvm[24999]: PV /dev/loop4 online, VG ceph_vg1 is complete. Dec 5 02:47:32 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1. Dec 5 02:47:32 localhost lvm[25006]: 1 logical volume(s) in volume group "ceph_vg1" now active Dec 5 02:47:32 localhost lvm[25010]: PV /dev/loop4 online, VG ceph_vg1 is complete. Dec 5 02:47:32 localhost lvm[25010]: VG ceph_vg1 finished Dec 5 02:47:32 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully. 
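The two OSD "disks" prepared above are loop devices over sparse files, each turned into one LVM volume group and logical volume. Expanding the "#012" escapes in the logged commands gives the sketch below for the first device (the /dev/loop4 and ceph_vg1 sequence is identical with the indices advanced); the kernel's "detected capacity change from 0 to 14680064" is 14680064 512-byte sectors, i.e. the 7 GiB requested by seek=7G:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G   # allocate a sparse 7 GiB image
    losetup /dev/loop3 /var/lib/ceph-osd-0.img                        # expose it as a block device
    lsblk
    pvcreate /dev/loop3                            # LVM physical volume on the loop device
    vgcreate ceph_vg0 /dev/loop3                   # one volume group per OSD
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0     # a single LV spanning the whole VG
    lvs

The ceph-osd-losetup-0.service unit installed immediately afterwards presumably re-runs the losetup step at boot so the VG reappears after a reboot; its template is not shown in the log.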
Dec 5 02:47:33 localhost python3[25059]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 02:47:33 localhost python3[25102]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764920853.1367583-55225-193491035737457/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:47:34 localhost python3[25132]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 02:47:34 localhost systemd[1]: Reloading. Dec 5 02:47:34 localhost systemd-rc-local-generator[25162]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 02:47:34 localhost systemd-sysv-generator[25165]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 02:47:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 02:47:34 localhost systemd[1]: Starting Ceph OSD losetup... Dec 5 02:47:34 localhost bash[25173]: /dev/loop4: [64516]:8399529 (/var/lib/ceph-osd-1.img) Dec 5 02:47:34 localhost systemd[1]: Finished Ceph OSD losetup. Dec 5 02:47:34 localhost lvm[25174]: PV /dev/loop4 online, VG ceph_vg1 is complete. Dec 5 02:47:34 localhost lvm[25174]: VG ceph_vg1 finished Dec 5 02:47:43 localhost python3[25219]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d Dec 5 02:47:44 localhost python3[25239]: ansible-hostname Invoked with name=np0005546419.localdomain use=None Dec 5 02:47:44 localhost systemd[1]: Starting Hostname Service... Dec 5 02:47:44 localhost systemd[1]: Started Hostname Service. Dec 5 02:47:47 localhost python3[25262]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None Dec 5 02:47:47 localhost python3[25310]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.tdwoi73ftmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:47:48 localhost python3[25340]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.tdwoi73ftmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! 
marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:47:48 localhost python3[25356]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.tdwoi73ftmphosts insertbefore=BOF block=192.168.122.106 np0005546419.localdomain np0005546419#012192.168.122.106 np0005546419.ctlplane.localdomain np0005546419.ctlplane#012192.168.122.107 np0005546420.localdomain np0005546420#012192.168.122.107 np0005546420.ctlplane.localdomain np0005546420.ctlplane#012192.168.122.108 np0005546421.localdomain np0005546421#012192.168.122.108 np0005546421.ctlplane.localdomain np0005546421.ctlplane#012192.168.122.103 np0005546415.localdomain np0005546415#012192.168.122.103 np0005546415.ctlplane.localdomain np0005546415.ctlplane#012192.168.122.104 np0005546416.localdomain np0005546416#012192.168.122.104 np0005546416.ctlplane.localdomain np0005546416.ctlplane#012192.168.122.105 np0005546418.localdomain np0005546418#012192.168.122.105 np0005546418.ctlplane.localdomain np0005546418.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:47:49 localhost python3[25372]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.tdwoi73ftmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:47:49 localhost python3[25389]: ansible-file Invoked with path=/tmp/ansible.tdwoi73ftmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:47:52 localhost python3[25405]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:47:52 localhost python3[25423]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 5 02:47:56 localhost python3[25472]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 02:47:57 localhost python3[25517]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764920876.3105242-56049-244926600832028/source dest=/etc/chrony.conf owner=root group=root 
mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:47:58 localhost python3[25547]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 02:48:00 localhost python3[25565]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 02:48:00 localhost chronyd[765]: chronyd exiting Dec 5 02:48:00 localhost systemd[1]: Stopping NTP client/server... Dec 5 02:48:00 localhost systemd[1]: chronyd.service: Deactivated successfully. Dec 5 02:48:00 localhost systemd[1]: Stopped NTP client/server. Dec 5 02:48:00 localhost systemd[1]: chronyd.service: Consumed 122ms CPU time, read 1.9M from disk, written 4.0K to disk. Dec 5 02:48:00 localhost systemd[1]: Starting NTP client/server... Dec 5 02:48:00 localhost chronyd[25572]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Dec 5 02:48:00 localhost chronyd[25572]: Frequency -30.154 +/- 0.143 ppm read from /var/lib/chrony/drift Dec 5 02:48:00 localhost chronyd[25572]: Loaded seccomp filter (level 2) Dec 5 02:48:00 localhost systemd[1]: Started NTP client/server. Dec 5 02:48:01 localhost python3[25621]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 02:48:01 localhost python3[25664]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764920880.7895174-56193-249344681230490/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 02:48:02 localhost python3[25694]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 02:48:02 localhost systemd[1]: Reloading. Dec 5 02:48:02 localhost systemd-sysv-generator[25723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 02:48:02 localhost systemd-rc-local-generator[25719]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 02:48:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 02:48:02 localhost systemd[1]: Reloading. Dec 5 02:48:02 localhost systemd-rc-local-generator[25756]: /etc/rc.d/rc.local is not marked executable, skipping. 
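Stepping back to the /etc/hosts rewrite logged at 02:47:48 above: ansible's blockinfile module brackets managed content with marker comments (here marker=# {mark} plus the marker_begin/marker_end strings), so once the temp file is copied back over /etc/hosts the managed section should look roughly like this reconstruction from the logged block, with the "#012" escapes expanded to newlines:

    # START_HOST_ENTRIES_FOR_STACK: overcloud
    192.168.122.106 np0005546419.localdomain np0005546419
    192.168.122.106 np0005546419.ctlplane.localdomain np0005546419.ctlplane
    # ... one pair of lines per overcloud node, exactly as in the logged block ...
    192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
    # END_HOST_ENTRIES_FOR_STACK: overcloud

The earlier blockinfile run with state=absent and the HEAT_HOSTS_START/HEAT_HOSTS_END markers clears any stale section left by a previous deployment before the new one is inserted.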
Dec 5 02:48:02 localhost systemd-sysv-generator[25761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 02:48:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 02:48:02 localhost systemd[1]: Starting chronyd online sources service... Dec 5 02:48:02 localhost chronyc[25771]: 200 OK Dec 5 02:48:02 localhost systemd[1]: chrony-online.service: Deactivated successfully. Dec 5 02:48:02 localhost systemd[1]: Finished chronyd online sources service. Dec 5 02:48:03 localhost python3[25787]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:48:03 localhost chronyd[25572]: System clock was stepped by 0.000000 seconds Dec 5 02:48:03 localhost python3[25804]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 02:48:04 localhost chronyd[25572]: Selected source 206.108.0.132 (pool.ntp.org) Dec 5 02:48:14 localhost python3[25821]: ansible-timezone Invoked with name=UTC hwclock=None Dec 5 02:48:14 localhost systemd[1]: Starting Time & Date Service... Dec 5 02:48:14 localhost systemd[1]: Started Time & Date Service. Dec 5 02:48:15 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 5 02:48:15 localhost python3[25843]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 02:48:16 localhost chronyd[25572]: chronyd exiting Dec 5 02:48:16 localhost systemd[1]: Stopping NTP client/server... Dec 5 02:48:16 localhost systemd[1]: chronyd.service: Deactivated successfully. Dec 5 02:48:16 localhost systemd[1]: Stopped NTP client/server. Dec 5 02:48:16 localhost systemd[1]: Starting NTP client/server... Dec 5 02:48:16 localhost chronyd[25851]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Dec 5 02:48:16 localhost chronyd[25851]: Frequency -30.154 +/- 0.172 ppm read from /var/lib/chrony/drift Dec 5 02:48:16 localhost chronyd[25851]: Loaded seccomp filter (level 2) Dec 5 02:48:16 localhost systemd[1]: Started NTP client/server. Dec 5 02:48:20 localhost chronyd[25851]: Selected source 206.108.0.132 (pool.ntp.org) Dec 5 02:48:44 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Dec 5 02:50:13 localhost sshd[26048]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:50:13 localhost systemd[1]: Created slice User Slice of UID 1002. Dec 5 02:50:13 localhost systemd[1]: Starting User Runtime Directory /run/user/1002... Dec 5 02:50:13 localhost systemd-logind[760]: New session 14 of user ceph-admin. Dec 5 02:50:13 localhost systemd[1]: Finished User Runtime Directory /run/user/1002. Dec 5 02:50:13 localhost systemd[1]: Starting User Manager for UID 1002... Dec 5 02:50:13 localhost systemd[26052]: Queued start job for default target Main User Target. 
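The time-sync check above is a two-step idiom: force an immediate clock step, then block until chrony reports synchronisation. As logged (the last command is an addition, not in the log):

    chronyc makestep       # step the clock now instead of slewing slowly
    chronyc waitsync 30    # poll up to 30 times for a synchronised state
    chronyc tracking       # (not in the log) would show the selected source and current offset

Here the step was 0.000000 seconds and chronyd selected 206.108.0.132 from pool.ntp.org a second later, so waitsync returned promptly.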
Dec 5 02:50:13 localhost systemd[26052]: Created slice User Application Slice. Dec 5 02:50:13 localhost systemd[26052]: Started Mark boot as successful after the user session has run 2 minutes. Dec 5 02:50:13 localhost systemd[26052]: Started Daily Cleanup of User's Temporary Directories. Dec 5 02:50:13 localhost systemd[26052]: Reached target Paths. Dec 5 02:50:13 localhost systemd[26052]: Reached target Timers. Dec 5 02:50:13 localhost systemd[26052]: Starting D-Bus User Message Bus Socket... Dec 5 02:50:13 localhost systemd[26052]: Starting Create User's Volatile Files and Directories... Dec 5 02:50:13 localhost systemd[26052]: Listening on D-Bus User Message Bus Socket. Dec 5 02:50:13 localhost systemd[26052]: Reached target Sockets. Dec 5 02:50:13 localhost systemd[26052]: Finished Create User's Volatile Files and Directories. Dec 5 02:50:13 localhost systemd[26052]: Reached target Basic System. Dec 5 02:50:13 localhost systemd[26052]: Reached target Main User Target. Dec 5 02:50:13 localhost systemd[26052]: Startup finished in 91ms. Dec 5 02:50:13 localhost systemd[1]: Started User Manager for UID 1002. Dec 5 02:50:13 localhost systemd[1]: Started Session 14 of User ceph-admin. Dec 5 02:50:13 localhost sshd[26067]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:50:13 localhost systemd-logind[760]: New session 16 of user ceph-admin. Dec 5 02:50:13 localhost systemd[1]: Started Session 16 of User ceph-admin. Dec 5 02:50:13 localhost sshd[26087]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:50:13 localhost systemd-logind[760]: New session 17 of user ceph-admin. Dec 5 02:50:13 localhost systemd[1]: Started Session 17 of User ceph-admin. Dec 5 02:50:14 localhost sshd[26106]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:50:14 localhost systemd-logind[760]: New session 18 of user ceph-admin. Dec 5 02:50:14 localhost systemd[1]: Started Session 18 of User ceph-admin. Dec 5 02:50:14 localhost sshd[26125]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:50:14 localhost systemd-logind[760]: New session 19 of user ceph-admin. Dec 5 02:50:14 localhost systemd[1]: Started Session 19 of User ceph-admin. Dec 5 02:50:15 localhost sshd[26144]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:50:15 localhost systemd-logind[760]: New session 20 of user ceph-admin. Dec 5 02:50:15 localhost systemd[1]: Started Session 20 of User ceph-admin. Dec 5 02:50:15 localhost sshd[26163]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:50:15 localhost systemd-logind[760]: New session 21 of user ceph-admin. Dec 5 02:50:15 localhost systemd[1]: Started Session 21 of User ceph-admin. Dec 5 02:50:15 localhost sshd[26182]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:50:15 localhost systemd-logind[760]: New session 22 of user ceph-admin. Dec 5 02:50:15 localhost systemd[1]: Started Session 22 of User ceph-admin. Dec 5 02:50:16 localhost sshd[26201]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:50:16 localhost systemd-logind[760]: New session 23 of user ceph-admin. Dec 5 02:50:16 localhost systemd[1]: Started Session 23 of User ceph-admin. Dec 5 02:50:16 localhost sshd[26220]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:50:16 localhost systemd-logind[760]: New session 24 of user ceph-admin. Dec 5 02:50:16 localhost systemd[1]: Started Session 24 of User ceph-admin. Dec 5 02:50:17 localhost sshd[26237]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:50:17 localhost systemd-logind[760]: New session 25 of user ceph-admin. 
Dec 5 02:50:17 localhost systemd[1]: Started Session 25 of User ceph-admin. Dec 5 02:50:17 localhost sshd[26256]: main: sshd: ssh-rsa algorithm is disabled Dec 5 02:50:17 localhost systemd-logind[760]: New session 26 of user ceph-admin. Dec 5 02:50:17 localhost systemd[1]: Started Session 26 of User ceph-admin. Dec 5 02:50:18 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:50:44 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:50:44 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:50:45 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:50:45 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:50:45 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26473 (sysctl) Dec 5 02:50:45 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System... Dec 5 02:50:45 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System. Dec 5 02:50:46 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:50:46 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:50:49 localhost kernel: VFS: idmapped mount is not enabled. Dec 5 02:51:09 localhost podman[26611]: Dec 5 02:51:09 localhost podman[26611]: 2025-12-05 07:51:09.400553638 +0000 UTC m=+22.701892581 container create 45eb0f7c6e825393281de556f2a1148b7153f96da9afe596ec9ff1ab12950d6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_williams, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph) Dec 5 02:51:09 localhost podman[26611]: 2025-12-05 07:50:46.739327863 +0000 UTC m=+0.040666836 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:51:09 localhost systemd[1]: Created slice Slice /machine. Dec 5 02:51:09 localhost systemd[1]: Started libpod-conmon-45eb0f7c6e825393281de556f2a1148b7153f96da9afe596ec9ff1ab12950d6c.scope. Dec 5 02:51:09 localhost systemd[1]: Started libcrun container. 
Dec 5 02:51:09 localhost podman[26611]: 2025-12-05 07:51:09.476368612 +0000 UTC m=+22.777707545 container init 45eb0f7c6e825393281de556f2a1148b7153f96da9afe596ec9ff1ab12950d6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_williams, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7) Dec 5 02:51:09 localhost podman[26611]: 2025-12-05 07:51:09.486936054 +0000 UTC m=+22.788274987 container start 45eb0f7c6e825393281de556f2a1148b7153f96da9afe596ec9ff1ab12950d6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_williams, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, version=7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=1763362218, ceph=True, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4) Dec 5 02:51:09 localhost podman[26611]: 2025-12-05 07:51:09.487113149 +0000 UTC m=+22.788452082 container attach 45eb0f7c6e825393281de556f2a1148b7153f96da9afe596ec9ff1ab12950d6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_williams, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_CLEAN=True, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, name=rhceph, build-date=2025-11-26T19:44:28Z, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.buildah.version=1.41.4, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, 
com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main) Dec 5 02:51:09 localhost cranky_williams[26724]: 167 167 Dec 5 02:51:09 localhost systemd[1]: libpod-45eb0f7c6e825393281de556f2a1148b7153f96da9afe596ec9ff1ab12950d6c.scope: Deactivated successfully. Dec 5 02:51:09 localhost podman[26611]: 2025-12-05 07:51:09.489631561 +0000 UTC m=+22.790970524 container died 45eb0f7c6e825393281de556f2a1148b7153f96da9afe596ec9ff1ab12950d6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_williams, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_BRANCH=main, com.redhat.component=rhceph-container) Dec 5 02:51:09 localhost podman[26729]: 2025-12-05 07:51:09.545705496 +0000 UTC m=+0.046861397 container remove 45eb0f7c6e825393281de556f2a1148b7153f96da9afe596ec9ff1ab12950d6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_williams, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, release=1763362218, architecture=x86_64, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=) Dec 5 02:51:09 localhost systemd[1]: libpod-conmon-45eb0f7c6e825393281de556f2a1148b7153f96da9afe596ec9ff1ab12950d6c.scope: Deactivated successfully. 
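The container's only output, "167 167", is consistent with a uid/gid probe: 167 is the uid and gid of the "ceph" user in Red Hat Ceph Storage images, and cephadm runs a throwaway container to discover those values before chown-ing host directories. The exact entrypoint is not logged; a hypothetical equivalent would be:

    # Hypothetical reconstruction of the probe (the real command line is not in the log):
    podman run --rm registry.redhat.io/rhceph/rhceph-7-rhel9:latest \
        stat -c '%u %g' /var/lib/ceph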
Dec 5 02:51:09 localhost podman[26751]: Dec 5 02:51:09 localhost podman[26751]: 2025-12-05 07:51:09.767016938 +0000 UTC m=+0.053665688 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:51:10 localhost systemd[1]: var-lib-containers-storage-overlay-a160da8b18ed4fdbc3a1ef89402fd571ce6138fa7ae00981f2e50409ca8f2310-merged.mount: Deactivated successfully. Dec 5 02:51:13 localhost podman[26751]: 2025-12-05 07:51:13.680098773 +0000 UTC m=+3.966747473 container create ecc4936f93e86b721199ac911f18868b27108da4cdba4882eee648d7c62d316a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_germain, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, name=rhceph, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main) Dec 5 02:51:13 localhost systemd[1]: Started libpod-conmon-ecc4936f93e86b721199ac911f18868b27108da4cdba4882eee648d7c62d316a.scope. Dec 5 02:51:13 localhost systemd[1]: Started libcrun container. 
Dec 5 02:51:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484b976148231c5538654a724333570939afc7fa5510b3125210d69cb0b85b42/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484b976148231c5538654a724333570939afc7fa5510b3125210d69cb0b85b42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:13 localhost podman[26751]: 2025-12-05 07:51:13.77548419 +0000 UTC m=+4.062132900 container init ecc4936f93e86b721199ac911f18868b27108da4cdba4882eee648d7c62d316a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_germain, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True) Dec 5 02:51:13 localhost podman[26751]: 2025-12-05 07:51:13.788803001 +0000 UTC m=+4.075451701 container start ecc4936f93e86b721199ac911f18868b27108da4cdba4882eee648d7c62d316a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_germain, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, name=rhceph, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., version=7, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-type=git) Dec 5 02:51:13 localhost podman[26751]: 2025-12-05 07:51:13.78906437 +0000 UTC m=+4.075713120 container attach ecc4936f93e86b721199ac911f18868b27108da4cdba4882eee648d7c62d316a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_germain, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
com.redhat.component=rhceph-container, distribution-scope=public, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, version=7, io.buildah.version=1.41.4) Dec 5 02:51:14 localhost optimistic_germain[26872]: [ Dec 5 02:51:14 localhost optimistic_germain[26872]: { Dec 5 02:51:14 localhost optimistic_germain[26872]: "available": false, Dec 5 02:51:14 localhost optimistic_germain[26872]: "ceph_device": false, Dec 5 02:51:14 localhost optimistic_germain[26872]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 5 02:51:14 localhost optimistic_germain[26872]: "lsm_data": {}, Dec 5 02:51:14 localhost optimistic_germain[26872]: "lvs": [], Dec 5 02:51:14 localhost optimistic_germain[26872]: "path": "/dev/sr0", Dec 5 02:51:14 localhost optimistic_germain[26872]: "rejected_reasons": [ Dec 5 02:51:14 localhost optimistic_germain[26872]: "Insufficient space (<5GB)", Dec 5 02:51:14 localhost optimistic_germain[26872]: "Has a FileSystem" Dec 5 02:51:14 localhost optimistic_germain[26872]: ], Dec 5 02:51:14 localhost optimistic_germain[26872]: "sys_api": { Dec 5 02:51:14 localhost optimistic_germain[26872]: "actuators": null, Dec 5 02:51:14 localhost optimistic_germain[26872]: "device_nodes": "sr0", Dec 5 02:51:14 localhost optimistic_germain[26872]: "human_readable_size": "482.00 KB", Dec 5 02:51:14 localhost optimistic_germain[26872]: "id_bus": "ata", Dec 5 02:51:14 localhost optimistic_germain[26872]: "model": "QEMU DVD-ROM", Dec 5 02:51:14 localhost optimistic_germain[26872]: "nr_requests": "2", Dec 5 02:51:14 localhost optimistic_germain[26872]: "partitions": {}, Dec 5 02:51:14 localhost optimistic_germain[26872]: "path": "/dev/sr0", Dec 5 02:51:14 localhost optimistic_germain[26872]: "removable": "1", Dec 5 02:51:14 localhost optimistic_germain[26872]: "rev": "2.5+", Dec 5 02:51:14 localhost optimistic_germain[26872]: "ro": "0", Dec 5 02:51:14 localhost optimistic_germain[26872]: "rotational": "1", Dec 5 02:51:14 localhost optimistic_germain[26872]: "sas_address": "", Dec 5 02:51:14 localhost optimistic_germain[26872]: "sas_device_handle": "", Dec 5 02:51:14 localhost optimistic_germain[26872]: "scheduler_mode": "mq-deadline", Dec 5 02:51:14 localhost optimistic_germain[26872]: "sectors": 0, Dec 5 02:51:14 localhost optimistic_germain[26872]: "sectorsize": "2048", Dec 5 02:51:14 localhost optimistic_germain[26872]: "size": 493568.0, Dec 5 02:51:14 localhost optimistic_germain[26872]: "support_discard": "0", Dec 5 02:51:14 localhost optimistic_germain[26872]: "type": "disk", Dec 5 02:51:14 localhost optimistic_germain[26872]: "vendor": "QEMU" Dec 5 02:51:14 localhost optimistic_germain[26872]: } Dec 5 02:51:14 localhost optimistic_germain[26872]: } Dec 5 02:51:14 localhost optimistic_germain[26872]: ] Dec 5 02:51:14 
localhost systemd[1]: libpod-ecc4936f93e86b721199ac911f18868b27108da4cdba4882eee648d7c62d316a.scope: Deactivated successfully. Dec 5 02:51:14 localhost podman[26751]: 2025-12-05 07:51:14.598820525 +0000 UTC m=+4.885469225 container died ecc4936f93e86b721199ac911f18868b27108da4cdba4882eee648d7c62d316a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_germain, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, name=rhceph, GIT_CLEAN=True, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 02:51:14 localhost systemd[1]: var-lib-containers-storage-overlay-484b976148231c5538654a724333570939afc7fa5510b3125210d69cb0b85b42-merged.mount: Deactivated successfully. Dec 5 02:51:14 localhost podman[28177]: 2025-12-05 07:51:14.676731807 +0000 UTC m=+0.068974624 container remove ecc4936f93e86b721199ac911f18868b27108da4cdba4882eee648d7c62d316a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_germain, GIT_BRANCH=main, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.41.4, ceph=True, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=) Dec 5 02:51:14 localhost systemd[1]: libpod-conmon-ecc4936f93e86b721199ac911f18868b27108da4cdba4882eee648d7c62d316a.scope: Deactivated successfully. Dec 5 02:51:14 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:51:15 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:51:15 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully. Dec 5 02:51:15 localhost systemd[1]: Closed Process Core Dump Socket. 
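The JSON emitted by the optimistic_germain container above has the shape of a ceph-volume inventory report: each device dict carries "available", "rejected_reasons" and "path", and /dev/sr0 is rejected for being under 5 GB and carrying a filesystem. The jq installed earlier in this log is enough to filter such a report; a minimal sketch, assuming the JSON has been saved to inventory.json (a hypothetical file name):

    # Print the paths of devices ceph-volume considers usable for OSDs.
    jq -r '.[] | select(.available == true) | .path' inventory.json

    # Or tabulate each device with its rejection reasons.
    jq -r '.[] | "\(.path)\t\(.rejected_reasons | join(", "))"' inventory.json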
Dec 5 02:51:15 localhost systemd[1]: Stopping Process Core Dump Socket... Dec 5 02:51:15 localhost systemd[1]: Listening on Process Core Dump Socket. Dec 5 02:51:15 localhost systemd[1]: Reloading. Dec 5 02:51:15 localhost systemd-rc-local-generator[28265]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 02:51:15 localhost systemd-sysv-generator[28268]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 02:51:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 02:51:15 localhost systemd[1]: Reloading. Dec 5 02:51:15 localhost systemd-sysv-generator[28305]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 02:51:15 localhost systemd-rc-local-generator[28299]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 02:51:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 02:51:38 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:51:38 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:51:39 localhost podman[28381]: Dec 5 02:51:39 localhost podman[28381]: 2025-12-05 07:51:39.052684784 +0000 UTC m=+0.071312749 container create f9a3ec7b1c048b1cc04272117fb0471d35d3c64e1540cec66093317b36d9058f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_cannon, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 5 02:51:39 localhost systemd[1]: Started libpod-conmon-f9a3ec7b1c048b1cc04272117fb0471d35d3c64e1540cec66093317b36d9058f.scope. Dec 5 02:51:39 localhost systemd[1]: Started libcrun container. 
Dec 5 02:51:39 localhost podman[28381]: 2025-12-05 07:51:39.023973897 +0000 UTC m=+0.042601862 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:51:39 localhost podman[28381]: 2025-12-05 07:51:39.125419787 +0000 UTC m=+0.144047762 container init f9a3ec7b1c048b1cc04272117fb0471d35d3c64e1540cec66093317b36d9058f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_cannon, build-date=2025-11-26T19:44:28Z, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, version=7, release=1763362218) Dec 5 02:51:39 localhost podman[28381]: 2025-12-05 07:51:39.132912206 +0000 UTC m=+0.151540201 container start f9a3ec7b1c048b1cc04272117fb0471d35d3c64e1540cec66093317b36d9058f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_cannon, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, version=7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7) Dec 5 02:51:39 localhost podman[28381]: 2025-12-05 07:51:39.133196365 +0000 UTC m=+0.151824340 container attach f9a3ec7b1c048b1cc04272117fb0471d35d3c64e1540cec66093317b36d9058f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_cannon, ceph=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , RELEASE=main, CEPH_POINT_RELEASE=, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1763362218, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, version=7) Dec 5 02:51:39 localhost friendly_cannon[28397]: 167 167 Dec 5 02:51:39 localhost systemd[1]: libpod-f9a3ec7b1c048b1cc04272117fb0471d35d3c64e1540cec66093317b36d9058f.scope: Deactivated successfully. Dec 5 02:51:39 localhost podman[28381]: 2025-12-05 07:51:39.138543128 +0000 UTC m=+0.157171123 container died f9a3ec7b1c048b1cc04272117fb0471d35d3c64e1540cec66093317b36d9058f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_cannon, release=1763362218, name=rhceph, CEPH_POINT_RELEASE=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph) Dec 5 02:51:39 localhost podman[28402]: 2025-12-05 07:51:39.222456113 +0000 UTC m=+0.074338963 container remove f9a3ec7b1c048b1cc04272117fb0471d35d3c64e1540cec66093317b36d9058f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_cannon, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=7, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux ) Dec 5 02:51:39 localhost systemd[1]: 
libpod-conmon-f9a3ec7b1c048b1cc04272117fb0471d35d3c64e1540cec66093317b36d9058f.scope: Deactivated successfully.
Dec 5 02:51:39 localhost systemd[1]: Reloading.
Dec 5 02:51:39 localhost systemd-rc-local-generator[28441]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:51:39 localhost systemd-sysv-generator[28446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:51:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:51:39 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 5 02:51:39 localhost systemd[1]: Reloading.
Dec 5 02:51:39 localhost systemd-rc-local-generator[28479]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:51:39 localhost systemd-sysv-generator[28484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:51:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:51:39 localhost systemd[1]: Reached target All Ceph clusters and services.
Dec 5 02:51:39 localhost systemd[1]: Reloading.
Dec 5 02:51:39 localhost systemd-rc-local-generator[28518]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:51:39 localhost systemd-sysv-generator[28521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:51:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:51:40 localhost systemd[1]: Reached target Ceph cluster 79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
Dec 5 02:51:40 localhost systemd[1]: Reloading.
Dec 5 02:51:40 localhost systemd-sysv-generator[28562]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:51:40 localhost systemd-rc-local-generator[28558]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:51:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:51:40 localhost systemd[1]: Reloading.
Dec 5 02:51:40 localhost systemd-sysv-generator[28603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:51:40 localhost systemd-rc-local-generator[28597]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:51:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 02:51:40 localhost systemd[1]: Created slice Slice /system/ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b. Dec 5 02:51:40 localhost systemd[1]: Reached target System Time Set. Dec 5 02:51:40 localhost systemd[1]: Reached target System Time Synchronized. Dec 5 02:51:40 localhost systemd[1]: Starting Ceph crash.np0005546419 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b... Dec 5 02:51:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:51:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 5 02:51:40 localhost podman[28661]: Dec 5 02:51:40 localhost podman[28661]: 2025-12-05 07:51:40.887508905 +0000 UTC m=+0.104264147 container create fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, io.buildah.version=1.41.4, ceph=True, vcs-type=git, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 5 02:51:40 localhost podman[28661]: 2025-12-05 07:51:40.82648562 +0000 UTC m=+0.043240832 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:51:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cf3b51d1d8c352fb10a362eaff5f38ea47dc3f21e53a81946e2544d970e8d2/merged/etc/ceph/ceph.client.crash.np0005546419.keyring supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cf3b51d1d8c352fb10a362eaff5f38ea47dc3f21e53a81946e2544d970e8d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cf3b51d1d8c352fb10a362eaff5f38ea47dc3f21e53a81946e2544d970e8d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:40 localhost podman[28661]: 2025-12-05 07:51:40.977580348 +0000 UTC m=+0.194335590 container init fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.expose-services=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1763362218, io.openshift.tags=rhceph ceph) Dec 5 02:51:40 localhost podman[28661]: 2025-12-05 07:51:40.989131371 +0000 UTC m=+0.205886603 container start fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, release=1763362218, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, vcs-type=git, RELEASE=main) Dec 5 02:51:40 localhost bash[28661]: fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e Dec 5 02:51:40 localhost systemd[1]: Started Ceph crash.np0005546419 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b. 
Dec 5 02:51:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419[28674]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 5 02:51:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419[28674]: 2025-12-05T07:51:41.166+0000 7f11c1994640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 5 02:51:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419[28674]: 2025-12-05T07:51:41.166+0000 7f11c1994640 -1 AuthRegistry(0x7f11bc0680d0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 5 02:51:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419[28674]: 2025-12-05T07:51:41.167+0000 7f11c1994640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 5 02:51:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419[28674]: 2025-12-05T07:51:41.167+0000 7f11c1994640 -1 AuthRegistry(0x7f11c1993000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 5 02:51:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419[28674]: 2025-12-05T07:51:41.174+0000 7f11baffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 5 02:51:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419[28674]: 2025-12-05T07:51:41.174+0000 7f11ba7fc640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 5 02:51:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419[28674]: 2025-12-05T07:51:41.180+0000 7f11bb7fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 5 02:51:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419[28674]: 2025-12-05T07:51:41.180+0000 7f11c1994640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 5 02:51:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419[28674]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 5 02:51:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419[28674]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 5 02:51:41 localhost systemd[1]: tmp-crun.DbHuzK.mount: Deactivated successfully.
Dec 5 02:51:49 localhost podman[28760]: Dec 5 02:51:49 localhost podman[28760]: 2025-12-05 07:51:49.821951555 +0000 UTC m=+0.074946650 container create 3dbeddfda49337f732929d46387b412016000283c00cb482f5e3181ae89194ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_snyder, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, architecture=x86_64, name=rhceph, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 5 02:51:49 localhost systemd[1]: Started libpod-conmon-3dbeddfda49337f732929d46387b412016000283c00cb482f5e3181ae89194ea.scope. Dec 5 02:51:49 localhost systemd[1]: Started libcrun container. Dec 5 02:51:49 localhost podman[28760]: 2025-12-05 07:51:49.791210486 +0000 UTC m=+0.044205571 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:51:49 localhost podman[28760]: 2025-12-05 07:51:49.892985117 +0000 UTC m=+0.145980212 container init 3dbeddfda49337f732929d46387b412016000283c00cb482f5e3181ae89194ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_snyder, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 02:51:49 localhost systemd[1]: tmp-crun.ViyobE.mount: Deactivated successfully. 
Dec 5 02:51:49 localhost podman[28760]: 2025-12-05 07:51:49.906264843 +0000 UTC m=+0.159259978 container start 3dbeddfda49337f732929d46387b412016000283c00cb482f5e3181ae89194ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_snyder, com.redhat.component=rhceph-container, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 5 02:51:49 localhost podman[28760]: 2025-12-05 07:51:49.90652279 +0000 UTC m=+0.159517875 container attach 3dbeddfda49337f732929d46387b412016000283c00cb482f5e3181ae89194ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_snyder, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, name=rhceph, release=1763362218, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7) Dec 5 02:51:49 localhost great_snyder[28775]: 167 167 Dec 5 02:51:49 localhost systemd[1]: libpod-3dbeddfda49337f732929d46387b412016000283c00cb482f5e3181ae89194ea.scope: Deactivated successfully. 
Dec 5 02:51:49 localhost podman[28760]: 2025-12-05 07:51:49.910313926 +0000 UTC m=+0.163309061 container died 3dbeddfda49337f732929d46387b412016000283c00cb482f5e3181ae89194ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_snyder, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main) Dec 5 02:51:49 localhost podman[28780]: 2025-12-05 07:51:49.999160981 +0000 UTC m=+0.078969094 container remove 3dbeddfda49337f732929d46387b412016000283c00cb482f5e3181ae89194ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_snyder, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux , ceph=True, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, version=7, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_CLEAN=True) Dec 5 02:51:50 localhost systemd[1]: libpod-conmon-3dbeddfda49337f732929d46387b412016000283c00cb482f5e3181ae89194ea.scope: Deactivated successfully. 
Dec 5 02:51:50 localhost podman[28800]: Dec 5 02:51:50 localhost podman[28800]: 2025-12-05 07:51:50.194286444 +0000 UTC m=+0.062995226 container create 1c63039ee05951c19711d0395d0d410da0d7e8a8e5f478783f84970e325767f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_stonebraker, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, release=1763362218, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main) Dec 5 02:51:50 localhost systemd[1]: Started libpod-conmon-1c63039ee05951c19711d0395d0d410da0d7e8a8e5f478783f84970e325767f4.scope. Dec 5 02:51:50 localhost systemd[1]: Started libcrun container. Dec 5 02:51:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016dce934e8679c28407782acaf345daf916420d648a9892ea01b4ae48d9deb5/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016dce934e8679c28407782acaf345daf916420d648a9892ea01b4ae48d9deb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:50 localhost podman[28800]: 2025-12-05 07:51:50.164299208 +0000 UTC m=+0.033008060 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:51:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016dce934e8679c28407782acaf345daf916420d648a9892ea01b4ae48d9deb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016dce934e8679c28407782acaf345daf916420d648a9892ea01b4ae48d9deb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/016dce934e8679c28407782acaf345daf916420d648a9892ea01b4ae48d9deb5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:50 localhost podman[28800]: 2025-12-05 07:51:50.288472192 +0000 UTC m=+0.157181014 container init 1c63039ee05951c19711d0395d0d410da0d7e8a8e5f478783f84970e325767f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_stonebraker, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, RELEASE=main, io.buildah.version=1.41.4, GIT_BRANCH=main, 
GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, release=1763362218, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git) Dec 5 02:51:50 localhost podman[28800]: 2025-12-05 07:51:50.296039044 +0000 UTC m=+0.164747826 container start 1c63039ee05951c19711d0395d0d410da0d7e8a8e5f478783f84970e325767f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_stonebraker, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=) Dec 5 02:51:50 localhost podman[28800]: 2025-12-05 07:51:50.29624215 +0000 UTC m=+0.164950932 container attach 1c63039ee05951c19711d0395d0d410da0d7e8a8e5f478783f84970e325767f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_stonebraker, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, ceph=True, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7) Dec 5 02:51:50 localhost tender_stonebraker[28815]: --> passed data devices: 0 physical, 2 LVM Dec 5 02:51:50 localhost tender_stonebraker[28815]: --> 
relative data size: 1.0
Dec 5 02:51:50 localhost systemd[1]: tmp-crun.n9w7DT.mount: Deactivated successfully.
Dec 5 02:51:50 localhost systemd[1]: var-lib-containers-storage-overlay-96c9fec0f763ab058b858e7c5d1828c5397d2c4c9514219cd8ea2df5b413e9dd-merged.mount: Deactivated successfully.
Dec 5 02:51:50 localhost tender_stonebraker[28815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 5 02:51:50 localhost tender_stonebraker[28815]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 5bd712dc-95e5-4c48-be82-844b93762be9
Dec 5 02:51:51 localhost lvm[28869]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 5 02:51:51 localhost lvm[28869]: VG ceph_vg0 finished
Dec 5 02:51:51 localhost tender_stonebraker[28815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 5 02:51:51 localhost tender_stonebraker[28815]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 5 02:51:51 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 5 02:51:51 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 5 02:51:51 localhost tender_stonebraker[28815]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 5 02:51:51 localhost tender_stonebraker[28815]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 5 02:51:51 localhost tender_stonebraker[28815]: stderr: got monmap epoch 3
Dec 5 02:51:51 localhost tender_stonebraker[28815]: --> Creating keyring file for osd.0
Dec 5 02:51:51 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 5 02:51:51 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 5 02:51:51 localhost tender_stonebraker[28815]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 5bd712dc-95e5-4c48-be82-844b93762be9 --setuser ceph --setgroup ceph
Dec 5 02:51:54 localhost tender_stonebraker[28815]: stderr: 2025-12-05T07:51:51.948+0000 7f75d4df2a80 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 5 02:51:54 localhost tender_stonebraker[28815]: stderr: 2025-12-05T07:51:51.948+0000 7f75d4df2a80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 5 02:51:54 localhost tender_stonebraker[28815]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 5 02:51:54 localhost tender_stonebraker[28815]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 5 02:51:54 localhost tender_stonebraker[28815]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 19ea0b05-8172-42a5-84ef-029f92f127e6
Dec 5 02:51:54 localhost lvm[29805]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 5 02:51:54 localhost lvm[29805]: VG ceph_vg1 finished
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-3/block
Dec 5 02:51:54 localhost tender_stonebraker[28815]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
Dec 5 02:51:55 localhost tender_stonebraker[28815]: stderr: got monmap epoch 3
Dec 5 02:51:55 localhost tender_stonebraker[28815]: --> Creating keyring file for osd.3
Dec 5 02:51:55 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
Dec 5 02:51:55 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
Dec 5 02:51:55 localhost tender_stonebraker[28815]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 19ea0b05-8172-42a5-84ef-029f92f127e6 --setuser ceph --setgroup ceph
Dec 5 02:51:57 localhost tender_stonebraker[28815]: stderr: 2025-12-05T07:51:55.472+0000 7fc1c1c50a80 -1 bluestore(/var/lib/ceph/osd/ceph-3//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 5 02:51:57 localhost tender_stonebraker[28815]: stderr: 2025-12-05T07:51:55.472+0000 7fc1c1c50a80 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
Dec 5 02:51:57 localhost tender_stonebraker[28815]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 5 02:51:57 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Dec 5 02:51:57 localhost tender_stonebraker[28815]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
Dec 5 02:51:58 localhost tender_stonebraker[28815]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-3/block
Dec 5 02:51:58 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
Dec 5 02:51:58 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 5 02:51:58 localhost tender_stonebraker[28815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Dec 5 02:51:58 localhost tender_stonebraker[28815]: --> ceph-volume lvm activate successful for osd ID: 3
Dec 5 02:51:58 localhost tender_stonebraker[28815]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 5 02:51:58 localhost systemd[1]: libpod-1c63039ee05951c19711d0395d0d410da0d7e8a8e5f478783f84970e325767f4.scope: Deactivated successfully.
Dec 5 02:51:58 localhost systemd[1]: libpod-1c63039ee05951c19711d0395d0d410da0d7e8a8e5f478783f84970e325767f4.scope: Consumed 3.639s CPU time.
Dec 5 02:51:58 localhost podman[30709]: 2025-12-05 07:51:58.116561993 +0000 UTC m=+0.041929993 container died 1c63039ee05951c19711d0395d0d410da0d7e8a8e5f478783f84970e325767f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_stonebraker, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, release=1763362218, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, maintainer=Guillaume Abrioux , GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container)
Dec 5 02:51:58 localhost systemd[1]: tmp-crun.ZWVqQR.mount: Deactivated successfully.
Dec 5 02:51:58 localhost systemd[1]: var-lib-containers-storage-overlay-016dce934e8679c28407782acaf345daf916420d648a9892ea01b4ae48d9deb5-merged.mount: Deactivated successfully.
Dec 5 02:51:58 localhost podman[30709]: 2025-12-05 07:51:58.174803393 +0000 UTC m=+0.100171353 container remove 1c63039ee05951c19711d0395d0d410da0d7e8a8e5f478783f84970e325767f4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_stonebraker, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True) Dec 5 02:51:58 localhost systemd[1]: libpod-conmon-1c63039ee05951c19711d0395d0d410da0d7e8a8e5f478783f84970e325767f4.scope: Deactivated successfully. Dec 5 02:51:58 localhost podman[30790]: Dec 5 02:51:58 localhost podman[30790]: 2025-12-05 07:51:58.904332767 +0000 UTC m=+0.073979462 container create 1f31607d197bba00fd13afaf07a9459458986403b87f4e4a940c14618f168afc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_curie, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_CLEAN=True, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux ) Dec 5 02:51:58 localhost systemd[1]: Started libpod-conmon-1f31607d197bba00fd13afaf07a9459458986403b87f4e4a940c14618f168afc.scope. Dec 5 02:51:58 localhost systemd[1]: Started libcrun container. 
Dec 5 02:51:58 localhost podman[30790]: 2025-12-05 07:51:58.972388677 +0000 UTC m=+0.142035302 container init 1f31607d197bba00fd13afaf07a9459458986403b87f4e4a940c14618f168afc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_curie, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, RELEASE=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_CLEAN=True, release=1763362218, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=) Dec 5 02:51:58 localhost podman[30790]: 2025-12-05 07:51:58.877385463 +0000 UTC m=+0.047032128 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:51:58 localhost podman[30790]: 2025-12-05 07:51:58.981777263 +0000 UTC m=+0.151423918 container start 1f31607d197bba00fd13afaf07a9459458986403b87f4e4a940c14618f168afc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_curie, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, name=rhceph, version=7, ceph=True, build-date=2025-11-26T19:44:28Z, distribution-scope=public, release=1763362218, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 5 02:51:58 localhost podman[30790]: 2025-12-05 07:51:58.982018741 +0000 UTC m=+0.151665366 container attach 1f31607d197bba00fd13afaf07a9459458986403b87f4e4a940c14618f168afc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_curie, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.expose-services=, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4) Dec 5 02:51:58 localhost festive_curie[30805]: 167 167 Dec 5 02:51:58 localhost systemd[1]: libpod-1f31607d197bba00fd13afaf07a9459458986403b87f4e4a940c14618f168afc.scope: Deactivated successfully. Dec 5 02:51:58 localhost podman[30790]: 2025-12-05 07:51:58.985123945 +0000 UTC m=+0.154770580 container died 1f31607d197bba00fd13afaf07a9459458986403b87f4e4a940c14618f168afc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_curie, version=7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, ceph=True, architecture=x86_64, build-date=2025-11-26T19:44:28Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True) Dec 5 02:51:59 localhost podman[30810]: 2025-12-05 07:51:59.051316189 +0000 UTC m=+0.055804937 container remove 1f31607d197bba00fd13afaf07a9459458986403b87f4e4a940c14618f168afc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=festive_curie, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhceph ceph) Dec 5 02:51:59 localhost systemd[1]: 
libpod-conmon-1f31607d197bba00fd13afaf07a9459458986403b87f4e4a940c14618f168afc.scope: Deactivated successfully. Dec 5 02:51:59 localhost systemd[1]: var-lib-containers-storage-overlay-f9fb26ea440cba85673c5ecf568f57305c73cf4e2cb785f6aaf963085faea742-merged.mount: Deactivated successfully. Dec 5 02:51:59 localhost podman[30829]: Dec 5 02:51:59 localhost podman[30829]: 2025-12-05 07:51:59.236334023 +0000 UTC m=+0.053420014 container create e120c828908a4d59bc6bbeb3d20060c920b4a48cbfa12b6d4ee3a6f712aa41ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_cohen, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_CLEAN=True) Dec 5 02:51:59 localhost systemd[1]: Started libpod-conmon-e120c828908a4d59bc6bbeb3d20060c920b4a48cbfa12b6d4ee3a6f712aa41ec.scope. Dec 5 02:51:59 localhost systemd[1]: Started libcrun container. 
Dec 5 02:51:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcbe29d12d72d84b50d5d07467f2bd71f2c447cefe22fd2b8df03081dbaf76a/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:59 localhost podman[30829]: 2025-12-05 07:51:59.211932977 +0000 UTC m=+0.029019008 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:51:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcbe29d12d72d84b50d5d07467f2bd71f2c447cefe22fd2b8df03081dbaf76a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcbe29d12d72d84b50d5d07467f2bd71f2c447cefe22fd2b8df03081dbaf76a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 02:51:59 localhost podman[30829]: 2025-12-05 07:51:59.333590415 +0000 UTC m=+0.150676426 container init e120c828908a4d59bc6bbeb3d20060c920b4a48cbfa12b6d4ee3a6f712aa41ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_cohen, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, RELEASE=main, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, vcs-type=git, release=1763362218) Dec 5 02:51:59 localhost podman[30829]: 2025-12-05 07:51:59.343536538 +0000 UTC m=+0.160622539 container start e120c828908a4d59bc6bbeb3d20060c920b4a48cbfa12b6d4ee3a6f712aa41ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_cohen, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , version=7, CEPH_POINT_RELEASE=, release=1763362218, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc.) 
Dec 5 02:51:59 localhost podman[30829]: 2025-12-05 07:51:59.343793966 +0000 UTC m=+0.160879947 container attach e120c828908a4d59bc6bbeb3d20060c920b4a48cbfa12b6d4ee3a6f712aa41ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_cohen, ceph=True, GIT_BRANCH=main, release=1763362218, GIT_CLEAN=True, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 5 02:51:59 localhost distracted_cohen[30844]: {
Dec 5 02:51:59 localhost distracted_cohen[30844]: "0": [
Dec 5 02:51:59 localhost distracted_cohen[30844]: {
Dec 5 02:51:59 localhost distracted_cohen[30844]: "devices": [
Dec 5 02:51:59 localhost distracted_cohen[30844]: "/dev/loop3"
Dec 5 02:51:59 localhost distracted_cohen[30844]: ],
Dec 5 02:51:59 localhost distracted_cohen[30844]: "lv_name": "ceph_lv0",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "lv_size": "7511998464",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=wCcrU9-3QME-mb1s-h5wE-FIVw-0xgG-JJxB8U,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=79feddb1-4bfc-557f-83b9-0d57c9f66c1b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5bd712dc-95e5-4c48-be82-844b93762be9,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "lv_uuid": "wCcrU9-3QME-mb1s-h5wE-FIVw-0xgG-JJxB8U",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "name": "ceph_lv0",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "path": "/dev/ceph_vg0/ceph_lv0",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "tags": {
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.block_uuid": "wCcrU9-3QME-mb1s-h5wE-FIVw-0xgG-JJxB8U",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.cephx_lockbox_secret": "",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.cluster_fsid": "79feddb1-4bfc-557f-83b9-0d57c9f66c1b",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.cluster_name": "ceph",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.crush_device_class": "",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.encrypted": "0",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.osd_fsid": "5bd712dc-95e5-4c48-be82-844b93762be9",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.osd_id": "0",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.osdspec_affinity": "default_drive_group",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.type": "block",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.vdo": "0"
Dec 5 02:51:59 localhost distracted_cohen[30844]: },
Dec 5 02:51:59 localhost distracted_cohen[30844]: "type": "block",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "vg_name": "ceph_vg0"
Dec 5 02:51:59 localhost distracted_cohen[30844]: }
Dec 5 02:51:59 localhost distracted_cohen[30844]: ],
Dec 5 02:51:59 localhost distracted_cohen[30844]: "3": [
Dec 5 02:51:59 localhost distracted_cohen[30844]: {
Dec 5 02:51:59 localhost distracted_cohen[30844]: "devices": [
Dec 5 02:51:59 localhost distracted_cohen[30844]: "/dev/loop4"
Dec 5 02:51:59 localhost distracted_cohen[30844]: ],
Dec 5 02:51:59 localhost distracted_cohen[30844]: "lv_name": "ceph_lv1",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "lv_size": "7511998464",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=I0ZJdX-wgwB-VUpm-kEbY-Jexg-4Pfk-TrdIXh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=79feddb1-4bfc-557f-83b9-0d57c9f66c1b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19ea0b05-8172-42a5-84ef-029f92f127e6,ceph.osd_id=3,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "lv_uuid": "I0ZJdX-wgwB-VUpm-kEbY-Jexg-4Pfk-TrdIXh",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "name": "ceph_lv1",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "path": "/dev/ceph_vg1/ceph_lv1",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "tags": {
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.block_uuid": "I0ZJdX-wgwB-VUpm-kEbY-Jexg-4Pfk-TrdIXh",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.cephx_lockbox_secret": "",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.cluster_fsid": "79feddb1-4bfc-557f-83b9-0d57c9f66c1b",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.cluster_name": "ceph",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.crush_device_class": "",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.encrypted": "0",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.osd_fsid": "19ea0b05-8172-42a5-84ef-029f92f127e6",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.osd_id": "3",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.osdspec_affinity": "default_drive_group",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.type": "block",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "ceph.vdo": "0"
Dec 5 02:51:59 localhost distracted_cohen[30844]: },
Dec 5 02:51:59 localhost distracted_cohen[30844]: "type": "block",
Dec 5 02:51:59 localhost distracted_cohen[30844]: "vg_name": "ceph_vg1"
Dec 5 02:51:59 localhost distracted_cohen[30844]: }
Dec 5 02:51:59 localhost distracted_cohen[30844]: ]
Dec 5 02:51:59 localhost distracted_cohen[30844]: }
Dec 5 02:51:59 localhost systemd[1]: libpod-e120c828908a4d59bc6bbeb3d20060c920b4a48cbfa12b6d4ee3a6f712aa41ec.scope: Deactivated successfully.
Dec 5 02:51:59 localhost podman[30829]: 2025-12-05 07:51:59.673724779 +0000 UTC m=+0.490810760 container died e120c828908a4d59bc6bbeb3d20060c920b4a48cbfa12b6d4ee3a6f712aa41ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_cohen, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=1763362218, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 02:51:59 localhost podman[30853]: 2025-12-05 07:51:59.758624294 +0000 UTC m=+0.074896630 container remove e120c828908a4d59bc6bbeb3d20060c920b4a48cbfa12b6d4ee3a6f712aa41ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_cohen, name=rhceph, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, release=1763362218, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, version=7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public)
Dec 5 02:51:59 localhost systemd[1]: libpod-conmon-e120c828908a4d59bc6bbeb3d20060c920b4a48cbfa12b6d4ee3a6f712aa41ec.scope: Deactivated successfully.
Dec 5 02:52:00 localhost systemd[1]: var-lib-containers-storage-overlay-fdcbe29d12d72d84b50d5d07467f2bd71f2c447cefe22fd2b8df03081dbaf76a-merged.mount: Deactivated successfully.
Dec 5 02:52:00 localhost podman[30939]:
Dec 5 02:52:00 localhost podman[30939]: 2025-12-05 07:52:00.46892985 +0000 UTC m=+0.057501348 container create 87923a97d8616a9ceeab2888c5e92be0c5ca3a3855d9ba69e3fbec202d04b50f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_lewin, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.buildah.version=1.41.4, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 02:52:00 localhost systemd[1]: Started libpod-conmon-87923a97d8616a9ceeab2888c5e92be0c5ca3a3855d9ba69e3fbec202d04b50f.scope.
Dec 5 02:52:00 localhost systemd[1]: Started libcrun container.
Dec 5 02:52:00 localhost podman[30939]: 2025-12-05 07:52:00.529232553 +0000 UTC m=+0.117804051 container init 87923a97d8616a9ceeab2888c5e92be0c5ca3a3855d9ba69e3fbec202d04b50f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_lewin, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4)
Dec 5 02:52:00 localhost podman[30939]: 2025-12-05 07:52:00.538766624 +0000 UTC m=+0.127338122 container start 87923a97d8616a9ceeab2888c5e92be0c5ca3a3855d9ba69e3fbec202d04b50f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_lewin, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, CEPH_POINT_RELEASE=)
Dec 5 02:52:00 localhost brave_lewin[30954]: 167 167
Dec 5 02:52:00 localhost podman[30939]: 2025-12-05 07:52:00.539115705 +0000 UTC m=+0.127687203 container attach 87923a97d8616a9ceeab2888c5e92be0c5ca3a3855d9ba69e3fbec202d04b50f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_lewin, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container)
Dec 5 02:52:00 localhost systemd[1]: libpod-87923a97d8616a9ceeab2888c5e92be0c5ca3a3855d9ba69e3fbec202d04b50f.scope: Deactivated successfully.
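The throwaway brave_lewin container prints only "167 167": the numeric uid/gid pair that later appears in the OSD log as "set uid:gid to 167:167 (ceph:ceph)". A quick host-side check that the same mapping exists locally, assuming the standard RHEL ceph uid/gid of 167:

    # Verify that uid/gid 167 map to the ceph user/group on this host;
    # raises KeyError if the ceph packages' account is absent.
    import grp
    import pwd

    print(pwd.getpwuid(167).pw_name)   # expected: "ceph"
    print(grp.getgrgid(167).gr_name)   # expected: "ceph"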
Dec 5 02:52:00 localhost podman[30939]: 2025-12-05 07:52:00.442824742 +0000 UTC m=+0.031396240 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 02:52:00 localhost podman[30939]: 2025-12-05 07:52:00.542370334 +0000 UTC m=+0.130941832 container died 87923a97d8616a9ceeab2888c5e92be0c5ca3a3855d9ba69e3fbec202d04b50f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_lewin, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, version=7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=)
Dec 5 02:52:00 localhost podman[30959]: 2025-12-05 07:52:00.627133825 +0000 UTC m=+0.077110238 container remove 87923a97d8616a9ceeab2888c5e92be0c5ca3a3855d9ba69e3fbec202d04b50f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_lewin, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64)
Dec 5 02:52:00 localhost systemd[1]: libpod-conmon-87923a97d8616a9ceeab2888c5e92be0c5ca3a3855d9ba69e3fbec202d04b50f.scope: Deactivated successfully.
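Each helper runs through the same podman event sequence recorded above: create, init, start, attach, died, remove, bracketed by systemd scope activation and deactivation. A sketch that reconstructs per-container lifecycles from journal lines of this shape; the input file name is hypothetical:

    # Group podman "container <event> <id>" records by container id.
    import re
    from collections import defaultdict

    EVENT = re.compile(r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")

    lifecycle = defaultdict(list)
    with open("messages.log") as f:
        for line in f:
            m = EVENT.search(line)
            if m:
                event, cid = m.groups()
                lifecycle[cid[:12]].append(event)

    for cid, events in lifecycle.items():
        print(cid, "->", " ".join(events))
    # A short-lived helper shows: create init start attach died remove.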
Dec 5 02:52:00 localhost podman[30987]:
Dec 5 02:52:00 localhost podman[30987]: 2025-12-05 07:52:00.944644367 +0000 UTC m=+0.072933219 container create bbe12ab84adabe97250aa234d6216959ee6ac3733e9438194be443a6e9ecc62f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate-test, name=rhceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=)
Dec 5 02:52:00 localhost systemd[1]: Started libpod-conmon-bbe12ab84adabe97250aa234d6216959ee6ac3733e9438194be443a6e9ecc62f.scope.
Dec 5 02:52:01 localhost systemd[1]: Started libcrun container.
Dec 5 02:52:01 localhost podman[30987]: 2025-12-05 07:52:00.91463874 +0000 UTC m=+0.042927592 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 02:52:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f00838d625b8cd19d793ff2db93160b0cf874f23a54cfb5c7554f61c18644d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f00838d625b8cd19d793ff2db93160b0cf874f23a54cfb5c7554f61c18644d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f00838d625b8cd19d793ff2db93160b0cf874f23a54cfb5c7554f61c18644d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f00838d625b8cd19d793ff2db93160b0cf874f23a54cfb5c7554f61c18644d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f00838d625b8cd19d793ff2db93160b0cf874f23a54cfb5c7554f61c18644d7/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:01 localhost podman[30987]: 2025-12-05 07:52:01.070849124 +0000 UTC m=+0.199137986 container init bbe12ab84adabe97250aa234d6216959ee6ac3733e9438194be443a6e9ecc62f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate-test, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, release=1763362218, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4)
Dec 5 02:52:01 localhost podman[30987]: 2025-12-05 07:52:01.080464898 +0000 UTC m=+0.208753750 container start bbe12ab84adabe97250aa234d6216959ee6ac3733e9438194be443a6e9ecc62f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate-test, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, release=1763362218, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vendor=Red Hat, Inc.)
Dec 5 02:52:01 localhost podman[30987]: 2025-12-05 07:52:01.080754116 +0000 UTC m=+0.209043018 container attach bbe12ab84adabe97250aa234d6216959ee6ac3733e9438194be443a6e9ecc62f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate-test, com.redhat.component=rhceph-container, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , ceph=True, version=7, name=rhceph, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 02:52:01 localhost systemd[1]: tmp-crun.0nnX82.mount: Deactivated successfully.
Dec 5 02:52:01 localhost systemd[1]: var-lib-containers-storage-overlay-1c603a41398b2ebba7278a1cd3d34d0e76ee7d58e47beeb2101a2fbe7da06a3a-merged.mount: Deactivated successfully.
Dec 5 02:52:01 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate-test[31002]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 5 02:52:01 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate-test[31002]: [--no-systemd] [--no-tmpfs]
Dec 5 02:52:01 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate-test[31002]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 5 02:52:01 localhost systemd[1]: libpod-bbe12ab84adabe97250aa234d6216959ee6ac3733e9438194be443a6e9ecc62f.scope: Deactivated successfully.
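The osd-0-activate-test container passes --bad-option on purpose and exits on argparse's "unrecognized arguments" error; the useful side effect is that the usage text above lists exactly which flags this ceph-volume build accepts (--osd-id, --osd-uuid, --no-systemd, --no-tmpfs). One plausible reading is a capability probe run before the real activation. A hedged sketch of that pattern, an illustration rather than cephadm's actual code:

    # Probe which flags a command supports by feeding it a bogus one and
    # scraping the usage text argparse prints to stderr.
    import re
    import subprocess

    def probe_flags(cmd):
        proc = subprocess.run(cmd + ["--bad-option"], capture_output=True, text=True)
        return set(re.findall(r"--[\w-]+", proc.stderr))

    # Hypothetical direct invocation; the log above runs the equivalent
    # inside a rhceph container.
    print(probe_flags(["ceph-volume", "activate"]))
    # Note: the set also picks up --bad-option itself from the error line.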
Dec 5 02:52:01 localhost podman[30987]: 2025-12-05 07:52:01.34363078 +0000 UTC m=+0.471919592 container died bbe12ab84adabe97250aa234d6216959ee6ac3733e9438194be443a6e9ecc62f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate-test, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , version=7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-type=git, release=1763362218)
Dec 5 02:52:01 localhost systemd[1]: var-lib-containers-storage-overlay-0f00838d625b8cd19d793ff2db93160b0cf874f23a54cfb5c7554f61c18644d7-merged.mount: Deactivated successfully.
Dec 5 02:52:01 localhost systemd-journald[618]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation.
Dec 5 02:52:01 localhost systemd-journald[618]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 5 02:52:01 localhost podman[31007]: 2025-12-05 07:52:01.429839655 +0000 UTC m=+0.079694496 container remove bbe12ab84adabe97250aa234d6216959ee6ac3733e9438194be443a6e9ecc62f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate-test, io.openshift.tags=rhceph ceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.buildah.version=1.41.4, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, ceph=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7)
Dec 5 02:52:01 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 5 02:52:01 localhost systemd[1]: libpod-conmon-bbe12ab84adabe97250aa234d6216959ee6ac3733e9438194be443a6e9ecc62f.scope: Deactivated successfully.
Dec 5 02:52:01 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 5 02:52:01 localhost systemd[1]: Reloading.
Dec 5 02:52:01 localhost systemd-rc-local-generator[31059]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:52:01 localhost systemd-sysv-generator[31064]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:52:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:52:01 localhost systemd[1]: Reloading.
Dec 5 02:52:02 localhost systemd-rc-local-generator[31106]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:52:02 localhost systemd-sysv-generator[31110]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:52:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:52:02 localhost systemd[1]: Starting Ceph osd.0 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b...
Dec 5 02:52:02 localhost podman[31166]:
Dec 5 02:52:02 localhost podman[31166]: 2025-12-05 07:52:02.472295331 +0000 UTC m=+0.052258347 container create 43400f260b927a9f3974561d548c95ae6ff30ee3db69e970a2a31ebddd3b84f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True)
Dec 5 02:52:02 localhost systemd[1]: Started libcrun container.
Dec 5 02:52:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9d705cc0e2f8b4e12098e37b3750c3fa3cb7e08f637a92e32f7a9720023455/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9d705cc0e2f8b4e12098e37b3750c3fa3cb7e08f637a92e32f7a9720023455/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9d705cc0e2f8b4e12098e37b3750c3fa3cb7e08f637a92e32f7a9720023455/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9d705cc0e2f8b4e12098e37b3750c3fa3cb7e08f637a92e32f7a9720023455/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:02 localhost podman[31166]: 2025-12-05 07:52:02.45261592 +0000 UTC m=+0.032578946 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 02:52:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9d705cc0e2f8b4e12098e37b3750c3fa3cb7e08f637a92e32f7a9720023455/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:02 localhost podman[31166]: 2025-12-05 07:52:02.570977697 +0000 UTC m=+0.150940743 container init 43400f260b927a9f3974561d548c95ae6ff30ee3db69e970a2a31ebddd3b84f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate, io.buildah.version=1.41.4, RELEASE=main, io.openshift.expose-services=, name=rhceph, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-type=git, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 02:52:02 localhost podman[31166]: 2025-12-05 07:52:02.580019784 +0000 UTC m=+0.159982820 container start 43400f260b927a9f3974561d548c95ae6ff30ee3db69e970a2a31ebddd3b84f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.buildah.version=1.41.4, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=)
Dec 5 02:52:02 localhost podman[31166]: 2025-12-05 07:52:02.580305222 +0000 UTC m=+0.160268258 container attach 43400f260b927a9f3974561d548c95ae6ff30ee3db69e970a2a31ebddd3b84f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7)
Dec 5 02:52:03 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate[31181]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 5 02:52:03 localhost bash[31166]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 5 02:52:03 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate[31181]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 5 02:52:03 localhost bash[31166]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 5 02:52:03 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate[31181]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 5 02:52:03 localhost bash[31166]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 5 02:52:03 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate[31181]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 5 02:52:03 localhost bash[31166]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 5 02:52:03 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate[31181]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 5 02:52:03 localhost bash[31166]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 5 02:52:03 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate[31181]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 5 02:52:03 localhost bash[31166]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 5 02:52:03 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate[31181]: --> ceph-volume raw activate successful for osd ID: 0
Dec 5 02:52:03 localhost bash[31166]: --> ceph-volume raw activate successful for osd ID: 0
Dec 5 02:52:03 localhost systemd[1]: libpod-43400f260b927a9f3974561d548c95ae6ff30ee3db69e970a2a31ebddd3b84f8.scope: Deactivated successfully.
Dec 5 02:52:03 localhost podman[31166]: 2025-12-05 07:52:03.321961516 +0000 UTC m=+0.901924562 container died 43400f260b927a9f3974561d548c95ae6ff30ee3db69e970a2a31ebddd3b84f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218)
Dec 5 02:52:03 localhost systemd[1]: var-lib-containers-storage-overlay-ac9d705cc0e2f8b4e12098e37b3750c3fa3cb7e08f637a92e32f7a9720023455-merged.mount: Deactivated successfully.
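The osd-0-activate container logs the whole activation recipe: chown the OSD directory, prime it from the BlueStore label with ceph-bluestore-tool, fix device-node ownership, and symlink block to the logical volume. For illustration only, the same sequence as a subprocess sketch; in practice `ceph-volume raw activate` performs these steps itself inside the container:

    # Replay of the activation commands logged above (illustrative only;
    # requires root and an existing BlueStore LV).
    import subprocess

    OSD_DIR = "/var/lib/ceph/osd/ceph-0"
    DEV = "/dev/mapper/ceph_vg0-ceph_lv0"

    steps = [
        ["chown", "-R", "ceph:ceph", OSD_DIR],
        # Populate the OSD dir from the BlueStore label on the device.
        ["ceph-bluestore-tool", "prime-osd-dir",
         "--path", OSD_DIR, "--no-mon-config", "--dev", DEV],
        ["chown", "-h", "ceph:ceph", DEV],
        ["chown", "-R", "ceph:ceph", "/dev/dm-0"],  # resolved dm node
        ["ln", "-s", DEV, f"{OSD_DIR}/block"],
        ["chown", "-R", "ceph:ceph", OSD_DIR],
    ]
    for cmd in steps:
        subprocess.run(cmd, check=True)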
Dec 5 02:52:03 localhost podman[31306]: 2025-12-05 07:52:03.405146488 +0000 UTC m=+0.072387213 container remove 43400f260b927a9f3974561d548c95ae6ff30ee3db69e970a2a31ebddd3b84f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0-activate, distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, ceph=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z)
Dec 5 02:52:03 localhost podman[31367]:
Dec 5 02:52:03 localhost podman[31367]: 2025-12-05 07:52:03.713764609 +0000 UTC m=+0.069293577 container create 09ed0c3b16fdca53c9a6a7476962db4616cec3d0ac3766cc4858b95a89b2f9fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=7, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph)
Dec 5 02:52:03 localhost systemd[1]: tmp-crun.nlaPne.mount: Deactivated successfully.
Dec 5 02:52:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f64c4e0a6fea50d0ad8cdfd5a64693f3238b5cc3e6c52c5e15194aaedcd1e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:03 localhost podman[31367]: 2025-12-05 07:52:03.686371013 +0000 UTC m=+0.041900021 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 02:52:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f64c4e0a6fea50d0ad8cdfd5a64693f3238b5cc3e6c52c5e15194aaedcd1e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f64c4e0a6fea50d0ad8cdfd5a64693f3238b5cc3e6c52c5e15194aaedcd1e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f64c4e0a6fea50d0ad8cdfd5a64693f3238b5cc3e6c52c5e15194aaedcd1e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23f64c4e0a6fea50d0ad8cdfd5a64693f3238b5cc3e6c52c5e15194aaedcd1e3/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 5 02:52:03 localhost podman[31367]: 2025-12-05 07:52:03.820086299 +0000 UTC m=+0.175615267 container init 09ed0c3b16fdca53c9a6a7476962db4616cec3d0ac3766cc4858b95a89b2f9fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, release=1763362218)
Dec 5 02:52:03 localhost podman[31367]: 2025-12-05 07:52:03.831114476 +0000 UTC m=+0.186643464 container start 09ed0c3b16fdca53c9a6a7476962db4616cec3d0ac3766cc4858b95a89b2f9fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.component=rhceph-container, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main)
Dec 5 02:52:03 localhost bash[31367]: 09ed0c3b16fdca53c9a6a7476962db4616cec3d0ac3766cc4858b95a89b2f9fe
Dec 5 02:52:03 localhost systemd[1]: Started Ceph osd.0 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
Dec 5 02:52:03 localhost ceph-osd[31386]: set uid:gid to 167:167 (ceph:ceph)
Dec 5 02:52:03 localhost ceph-osd[31386]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Dec 5 02:52:03 localhost ceph-osd[31386]: pidfile_write: ignore empty --pid-file
Dec 5 02:52:03 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 5 02:52:03 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 5 02:52:03 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 5 02:52:03 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 5 02:52:03 localhost ceph-osd[31386]: bdev(0x55692b301180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 5 02:52:03 localhost ceph-osd[31386]: bdev(0x55692b301180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 5 02:52:03 localhost ceph-osd[31386]: bdev(0x55692b301180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 5 02:52:03 localhost ceph-osd[31386]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Dec 5 02:52:03 localhost ceph-osd[31386]: bdev(0x55692b301180 /var/lib/ceph/osd/ceph-0/block) close
Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) close
Dec 5 02:52:04 localhost ceph-osd[31386]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 5 02:52:04 localhost ceph-osd[31386]: load: jerasure load: lrc
Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 5 02:52:04 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) close
Dec 5 02:52:04 localhost podman[31478]:
Dec 5 02:52:04 localhost podman[31478]: 2025-12-05 07:52:04.606721678 +0000 UTC m=+0.060675696 container create 8c0d9b4154e4430ef9bee4af92207e103d3371ba3752ab466d9ec37cc710a559 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_cray, name=rhceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, version=7, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7)
Dec 5 02:52:04 localhost systemd[1]: Started libpod-conmon-8c0d9b4154e4430ef9bee4af92207e103d3371ba3752ab466d9ec37cc710a559.scope.
Dec 5 02:52:04 localhost systemd[1]: Started libcrun container.
Dec 5 02:52:04 localhost podman[31478]: 2025-12-05 07:52:04.578923628 +0000 UTC m=+0.032877646 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 02:52:04 localhost podman[31478]: 2025-12-05 07:52:04.679968536 +0000 UTC m=+0.133922574 container init 8c0d9b4154e4430ef9bee4af92207e103d3371ba3752ab466d9ec37cc710a559 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_cray, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, version=7, name=rhceph, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True)
Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 5 02:52:04 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) close
Dec 5 02:52:04 localhost podman[31478]: 2025-12-05 07:52:04.689166628 +0000 UTC m=+0.143120646 container start 8c0d9b4154e4430ef9bee4af92207e103d3371ba3752ab466d9ec37cc710a559 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_cray, release=1763362218, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , name=rhceph, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True)
Dec 5 02:52:04 localhost podman[31478]: 2025-12-05 07:52:04.689461387 +0000 UTC m=+0.143415375 container attach 8c0d9b4154e4430ef9bee4af92207e103d3371ba3752ab466d9ec37cc710a559 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_cray, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, vcs-type=git, name=rhceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1763362218, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4)
Dec 5 02:52:04 localhost funny_cray[31493]: 167 167
Dec 5 02:52:04 localhost systemd[1]: libpod-8c0d9b4154e4430ef9bee4af92207e103d3371ba3752ab466d9ec37cc710a559.scope: Deactivated successfully.
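Every bdev open above is followed by "ioctl(F_SET_FILE_RW_HINT) ... failed: (22) Invalid argument": BlueStore asks the kernel to attach a write-lifetime hint to the block device, the device-mapper node rejects it, and the OSD simply continues, so the message is noise rather than a fault. A sketch that reproduces the same EINVAL from user space; the constants are taken from linux/fcntl.h (what the log calls an ioctl appears to be this fcntl), and kernel support for per-file write hints varies by version:

    # Try to set a write-lifetime hint on a block device; on devices
    # without hint support this fails with errno 22, as in the log.
    import fcntl
    import struct

    F_SET_FILE_RW_HINT = 1038      # F_LINUX_SPECIFIC_BASE + 14, linux/fcntl.h
    RWH_WRITE_LIFE_MEDIUM = 3

    with open("/dev/mapper/ceph_vg0-ceph_lv0", "rb") as dev:
        hint = struct.pack("Q", RWH_WRITE_LIFE_MEDIUM)
        try:
            fcntl.fcntl(dev.fileno(), F_SET_FILE_RW_HINT, hint)
        except OSError as e:
            print(f"write hint not supported: {e}")  # Invalid argument (22)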
Dec 5 02:52:04 localhost podman[31478]: 2025-12-05 07:52:04.693124548 +0000 UTC m=+0.147078576 container died 8c0d9b4154e4430ef9bee4af92207e103d3371ba3752ab466d9ec37cc710a559 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_cray, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, release=1763362218, name=rhceph, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-type=git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, RELEASE=main, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, version=7)
Dec 5 02:52:04 localhost systemd[1]: var-lib-containers-storage-overlay-eca8b7708a9affb6f74b2154990941d051d55b530edcd37ef242f67258b3ed25-merged.mount: Deactivated successfully.
Dec 5 02:52:04 localhost podman[31502]: 2025-12-05 07:52:04.782911922 +0000 UTC m=+0.080608954 container remove 8c0d9b4154e4430ef9bee4af92207e103d3371ba3752ab466d9ec37cc710a559 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_cray, GIT_BRANCH=main, ceph=True, name=rhceph, version=7, maintainer=Guillaume Abrioux , RELEASE=main, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 5 02:52:04 localhost systemd[1]: libpod-conmon-8c0d9b4154e4430ef9bee4af92207e103d3371ba3752ab466d9ec37cc710a559.scope: Deactivated successfully.
Dec 5 02:52:04 localhost ceph-osd[31386]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Dec 5 02:52:04 localhost ceph-osd[31386]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b300e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 5 02:52:04 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b301180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b301180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Dec 5 02:52:04 localhost ceph-osd[31386]: bdev(0x55692b301180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 5 02:52:04 localhost ceph-osd[31386]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB Dec 5 02:52:04 localhost ceph-osd[31386]: bluefs mount Dec 5 02:52:04 localhost ceph-osd[31386]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Dec 5 02:52:04 localhost ceph-osd[31386]: bluefs mount shared_bdev_used = 0 Dec 5 02:52:04 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: RocksDB version: 7.9.2 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Git sha 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: DB SUMMARY Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: DB Session ID: 1DDGVPBEDB4H23QKN4U7 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: CURRENT file: CURRENT Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: IDENTITY file: IDENTITY Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.error_if_exists: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.create_if_missing: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.paranoid_checks: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.env: 0x55692b594cb0 Dec 5 02:52:04 localhost 
ceph-osd[31386]: rocksdb: Options.fs: LegacyFileSystem Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.info_log: 0x55692c29c740 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_file_opening_threads: 16 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.statistics: (nil) Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.use_fsync: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_log_file_size: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.log_file_time_to_roll: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.keep_log_file_num: 1000 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.recycle_log_file_num: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.allow_fallocate: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.allow_mmap_reads: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.allow_mmap_writes: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.use_direct_reads: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.create_missing_column_families: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.db_log_dir: Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.wal_dir: db.wal Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.table_cache_numshardbits: 6 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.advise_random_on_open: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.db_write_buffer_size: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_manager: 0x55692b2ea140 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.random_access_max_buffer_size: 1048576 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.use_adaptive_mutex: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.rate_limiter: (nil) Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.wal_recovery_mode: 2 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.enable_thread_tracking: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.enable_pipelined_write: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.unordered_write: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.row_cache: None Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.wal_filter: None Dec 5 
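
The Options.* run that starts above is RocksDB echoing its effective DBOptions at open time. For orientation, a few of the logged values expressed against the RocksDB C++ API; this is a reconstruction for illustration, not Ceph's actual setup code (wal_recovery_mode 2 is kPointInTimeRecovery):

#include <rocksdb/options.h>

rocksdb::DBOptions logged_db_options() {
    rocksdb::DBOptions db;                          // every value below is copied from the dump
    db.create_if_missing           = false;
    db.paranoid_checks             = true;
    db.use_fsync                   = false;         // fdatasync is sufficient
    db.keep_log_file_num           = 1000;
    db.table_cache_numshardbits    = 6;             // 2^6 = 64 table-cache shards
    db.wal_dir                     = "db.wal";      // WAL on its own BlueFS path
    db.manifest_preallocation_size = 4194304;       // 4 MiB
    db.wal_recovery_mode           = rocksdb::WALRecoveryMode::kPointInTimeRecovery;
    return db;
}
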
02:52:04 localhost ceph-osd[31386]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.allow_ingest_behind: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.two_write_queues: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.manual_wal_flush: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.wal_compression: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.atomic_flush: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.persist_stats_to_disk: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.log_readahead_size: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.best_efforts_recovery: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.allow_data_in_errors: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.db_host_id: __hostname__ Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.enforce_single_del_contracts: true Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_background_jobs: 4 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_background_compactions: -1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_subcompactions: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.writable_file_max_buffer_size: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.delayed_write_rate : 16777216 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_total_wal_size: 1073741824 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.stats_dump_period_sec: 600 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.stats_persist_period_sec: 600 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_open_files: -1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.bytes_per_sync: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compaction_readahead_size: 2097152 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_background_flushes: -1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Compression algorithms supported: Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: #011kZSTD supported: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: #011kXpressCompression supported: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: #011kBZip2Compression supported: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: #011kLZ4Compression supported: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: #011kZlibCompression supported: 1 Dec 5 02:52:04 
localhost ceph-osd[31386]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: #011kSnappyCompression supported: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: DMutex implementation: pthread_mutex_t Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29c900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: 
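
Two things worth pulling out of the stretch above: this build supports Snappy, Zlib, LZ4 and LZ4HC but not ZSTD, BZip2 or Xpress, which is presumably why Options.compression below is LZ4; and the database is being opened read-only, recovering state from MANIFEST-000032, with one options block printed per column family. Note also that the block_cache capacity of 483183820 bytes matches the kv 0.45 share of the 1 GiB bluestore cache reported at mount time (0.45 x 1073741824). A sketch of the read-only open pattern against the public API, using the family names that appear in this dump (default, m-0, m-1, m-2, p-0, p-1); the path and per-family options are placeholders, since Ceph wires in its own BlueFS-backed environment:

#include <rocksdb/db.h>
#include <vector>

rocksdb::DB* open_read_only_sketch() {
    rocksdb::DBOptions db_opts;               // see the DBOptions dump above
    rocksdb::ColumnFamilyOptions cf_opts;     // per-family options, as dumped below
    std::vector<rocksdb::ColumnFamilyDescriptor> families = {
        {"default", cf_opts}, {"m-0", cf_opts}, {"m-1", cf_opts},
        {"m-2", cf_opts},     {"p-0", cf_opts}, {"p-1", cf_opts},
    };
    std::vector<rocksdb::ColumnFamilyHandle*> handles;
    rocksdb::DB* db = nullptr;
    rocksdb::Status s =
        rocksdb::DB::OpenForReadOnly(db_opts, "db", families, &handles, &db);
    return s.ok() ? db : nullptr;             // caller owns db and the handles
}
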
Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:04 localhost 
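
The memtable and level-sizing numbers in the [default] block are easier to judge once multiplied out: 64 write buffers of 16 MiB bound memtable memory at 1 GiB per column family in the worst case, a flush is triggered once 6 buffers (96 MiB) accumulate, and with max_bytes_for_level_base at 1 GiB and a multiplier of 8 the level targets run 1, 8, 64 GiB for L1 through L3. A quick check:

#include <cstdint>
#include <cstdio>

int main() {
    const uint64_t write_buffer = 16ull << 20;  // write_buffer_size: 16777216
    std::printf("worst-case memtables: %llu MiB\n",
                (unsigned long long)((write_buffer * 64) >> 20)); // max_write_buffer_number: 64 -> 1024 MiB
    std::printf("buffered before flush: %llu MiB\n",
                (unsigned long long)((write_buffer * 6) >> 20));  // min_write_buffer_number_to_merge: 6 -> 96 MiB
    uint64_t target = 1ull << 30;               // max_bytes_for_level_base: 1 GiB
    for (int level = 1; level <= 3; ++level) {
        std::printf("L%d target: %llu GiB\n", level, (unsigned long long)(target >> 30));
        target *= 8;                            // max_bytes_for_level_multiplier: 8
    }
    return 0;
}
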
ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:04 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
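
A few more figures from this stretch in human units: Options.ttl 2592000 s is 30 days, max_compaction_bytes 1677721600 is exactly 25 times target_file_size_base (RocksDB's default ratio), and the soft and hard pending-compaction limits are 64 GiB and 256 GiB. The universal- and FIFO-compaction groups are echoed too, but they are unused defaults here, since compaction_style is kCompactionStyleLevel. Checking the arithmetic:

#include <cassert>
#include <cstdio>

int main() {
    assert(2592000 == 30 * 86400);              // Options.ttl: 30 days
    assert(1677721600LL == 25LL * 67108864LL);  // max_compaction_bytes = 25 x target_file_size_base
    assert(68719476736LL == 64LL << 30);        // soft_pending_compaction_bytes_limit: 64 GiB
    assert(274877906944LL == 256LL << 30);      // hard_pending_compaction_bytes_limit: 256 GiB
    std::puts("all four figures from the dump check out");
    return 0;
}
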
Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29c900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost 
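
From here the dump repeats nearly verbatim for each remaining column family. Against [default], the [m-0] block above (and [m-1], [m-2], [p-0], [p-1] below) differs mainly in merge_operator: None, where [default] advertises the int64-array bitwise_xor operator, and in the CompactOnDeletionCollector that appears further down. For flavor only, a minimal associative merge operator in the RocksDB API; this is a toy byte-wise XOR, not Ceph's actual .T:int64_array.b:bitwise_xor operator:

#include <rocksdb/merge_operator.h>
#include <string>

class ToyXorMerge : public rocksdb::AssociativeMergeOperator {
 public:
  bool Merge(const rocksdb::Slice& /*key*/, const rocksdb::Slice* existing,
             const rocksdb::Slice& value, std::string* new_value,
             rocksdb::Logger* /*logger*/) const override {
    *new_value = value.ToString();
    if (existing) {                    // XOR over the overlapping prefix
      const std::string old = existing->ToString();
      for (size_t i = 0; i < new_value->size() && i < old.size(); ++i)
        (*new_value)[i] ^= old[i];
    }
    return true;                       // the toy merge never fails
  }
  const char* Name() const override { return "ToyXorMerge"; }
};
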
ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 
02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost 
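
Unlike [default], whose table_properties_collectors line is empty, every m-* and p-* family registers a CompactOnDeletionCollector with the parameters printed above (window 32768, trigger 16384, ratio 0), so that tombstone-heavy SSTs get scheduled for compaction early. A sketch of installing one with exactly those values via the public utilities header; illustrative only, Ceph's own plumbing differs:

#include <rocksdb/options.h>
#include <rocksdb/utilities/table_properties_collectors.h>

rocksdb::ColumnFamilyOptions with_deletion_collector() {
    rocksdb::ColumnFamilyOptions cf;
    cf.table_properties_collector_factories.emplace_back(
        rocksdb::NewCompactOnDeletionCollectorFactory(
            /*sliding_window_size=*/32768,   // Sliding window size = 32768
            /*deletion_trigger=*/16384,      // Deletion trigger = 16384
            /*deletion_ratio=*/0.0));        // Deletion ratio = 0
    return cf;
}
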
ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29c900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 
02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 
02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29c900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 
32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: 
rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29c900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 
localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29c900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 
localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 
274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 
localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29c900)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d8850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29cb20)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: 
rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29cb20)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 
02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 
localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, 
name: O-2) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29cb20)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7772c3b2-e175-4a70-a6b4-6fb6cc901de0
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921125006010, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921125006396, "job": 1, "event": "recovery_finished"}
Dec 5 02:52:05 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 5 02:52:05 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 5 02:52:05 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 5 02:52:05 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 5 02:52:05 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 5 02:52:05 localhost ceph-osd[31386]: freelist init
Dec 5 02:52:05 localhost ceph-osd[31386]: freelist _read_cfg
Dec 5 02:52:05 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 5 02:52:05 localhost ceph-osd[31386]: bluefs umount
Dec 5 02:52:05 localhost ceph-osd[31386]: bdev(0x55692b301180 /var/lib/ceph/osd/ceph-0/block) close
Dec 5 02:52:05 localhost podman[31725]:
Dec 5 02:52:05 localhost podman[31725]: 2025-12-05 07:52:05.093736381 +0000 UTC m=+0.071175406 container create 6b95ec4d0d7a0b9383b04922015ad22d91d8258bb0bc660471017e472b6606bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate-test, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., ceph=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, GIT_CLEAN=True, RELEASE=main)
Dec 5 02:52:05 localhost systemd[1]: Started libpod-conmon-6b95ec4d0d7a0b9383b04922015ad22d91d8258bb0bc660471017e472b6606bd.scope.
Dec 5 02:52:05 localhost systemd[1]: Started libcrun container.
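The _open_db record above captures the exact tuning string BlueStore applied to this RocksDB instance: LZ4 compression, 16 MiB write buffers (up to 64 memtables, merged six at a time), level-style compaction triggered at 8 L0 files, a 1 GiB level-1 budget growing 8x per level, 2 MB compaction readahead, and a 1 GiB WAL cap. The same values recur in every per-column-family option dump earlier in the log. As a minimal standalone sketch (not Ceph's actual code path) of how such a key=value string maps onto RocksDB's own option parser; note that rocksdb::GetOptionsFromString expects semicolon separators and plain byte counts, so the "2MB" readahead is written out as 2097152 here:

  // Illustrative sketch only: applies the option string logged by _open_db
  // to a rocksdb::Options struct via RocksDB's string parser.
  #include <cassert>
  #include <string>
  #include <rocksdb/convenience.h>
  #include <rocksdb/options.h>

  int main() {
    rocksdb::Options base;   // stock library defaults
    rocksdb::Options tuned;  // defaults overridden by the string below
    const std::string opts =
        "compression=kLZ4Compression;"
        "max_write_buffer_number=64;"
        "min_write_buffer_number_to_merge=6;"
        "compaction_style=kCompactionStyleLevel;"
        "write_buffer_size=16777216;"
        "max_background_jobs=4;"
        "level0_file_num_compaction_trigger=8;"
        "max_bytes_for_level_base=1073741824;"
        "max_bytes_for_level_multiplier=8;"
        "compaction_readahead_size=2097152;"  // the "2MB" from the log
        "max_total_wal_size=1073741824;"
        "writable_file_max_buffer_size=0";
    rocksdb::Status s = rocksdb::GetOptionsFromString(base, opts, &tuned);
    assert(s.ok());
    assert(tuned.write_buffer_size == 16777216);
    assert(tuned.compression == rocksdb::kLZ4Compression);
    return 0;
  }

Built against librocksdb, this should leave every listed field set on tuned; a misspelled key would surface as a non-OK Status rather than being silently dropped.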
Dec 5 02:52:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535b0086faf255530dc1a7b7c1804d6a38fa36b53c109d96d54fcf53efe90557/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:05 localhost podman[31725]: 2025-12-05 07:52:05.067414567 +0000 UTC m=+0.044853562 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:52:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535b0086faf255530dc1a7b7c1804d6a38fa36b53c109d96d54fcf53efe90557/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535b0086faf255530dc1a7b7c1804d6a38fa36b53c109d96d54fcf53efe90557/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535b0086faf255530dc1a7b7c1804d6a38fa36b53c109d96d54fcf53efe90557/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535b0086faf255530dc1a7b7c1804d6a38fa36b53c109d96d54fcf53efe90557/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:05 localhost podman[31725]: 2025-12-05 07:52:05.216939926 +0000 UTC m=+0.194378921 container init 6b95ec4d0d7a0b9383b04922015ad22d91d8258bb0bc660471017e472b6606bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate-test, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, ceph=True) Dec 5 02:52:05 localhost podman[31725]: 2025-12-05 07:52:05.227498639 +0000 UTC m=+0.204937634 container start 6b95ec4d0d7a0b9383b04922015ad22d91d8258bb0bc660471017e472b6606bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate-test, RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 02:52:05 localhost podman[31725]: 2025-12-05 07:52:05.227801338 +0000 UTC m=+0.205240373 container attach 6b95ec4d0d7a0b9383b04922015ad22d91d8258bb0bc660471017e472b6606bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate-test, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, version=7, RELEASE=main, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_CLEAN=True) Dec 5 02:52:05 localhost ceph-osd[31386]: bdev(0x55692b301180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Dec 5 02:52:05 localhost ceph-osd[31386]: bdev(0x55692b301180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Dec 5 02:52:05 localhost ceph-osd[31386]: bdev(0x55692b301180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 5 02:52:05 localhost ceph-osd[31386]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB Dec 5 02:52:05 localhost ceph-osd[31386]: bluefs mount Dec 5 02:52:05 localhost ceph-osd[31386]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Dec 5 02:52:05 localhost ceph-osd[31386]: bluefs mount shared_bdev_used = 4718592 Dec 5 02:52:05 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: RocksDB version: 7.9.2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Git sha 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: DB SUMMARY Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: DB Session ID: 1DDGVPBEDB4H23QKN4U6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: CURRENT file: CURRENT Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: IDENTITY file: IDENTITY Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: MANIFEST 
file: MANIFEST-000032 size: 1007 Bytes Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.error_if_exists: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.create_if_missing: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.env: 0x55692b33c690 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.fs: LegacyFileSystem Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.info_log: 0x55692c3043a0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_file_opening_threads: 16 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.statistics: (nil) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.use_fsync: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_log_file_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.log_file_time_to_roll: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.keep_log_file_num: 1000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.recycle_log_file_num: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.allow_fallocate: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.allow_mmap_reads: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.allow_mmap_writes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.use_direct_reads: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.create_missing_column_families: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.db_log_dir: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.wal_dir: db.wal Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_cache_numshardbits: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.advise_random_on_open: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.db_write_buffer_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_manager: 0x55692b2eb5e0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.random_access_max_buffer_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.use_adaptive_mutex: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.rate_limiter: (nil) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.wal_recovery_mode: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_thread_tracking: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_pipelined_write: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.unordered_write: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.row_cache: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.wal_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.allow_ingest_behind: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.two_write_queues: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.manual_wal_flush: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.wal_compression: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.atomic_flush: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.persist_stats_to_disk: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.log_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.best_efforts_recovery: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.allow_data_in_errors: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.db_host_id: __hostname__ Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enforce_single_del_contracts: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_background_jobs: 4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_background_compactions: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_subcompactions: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.writable_file_max_buffer_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.delayed_write_rate : 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_total_wal_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.stats_dump_period_sec: 600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.stats_persist_period_sec: 600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_open_files: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: 
rocksdb: Options.bytes_per_sync: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_readahead_size: 2097152 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_background_flushes: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Compression algorithms supported: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: #011kZSTD supported: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: #011kXpressCompression supported: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: #011kBZip2Compression supported: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: #011kLZ4Compression supported: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: #011kZlibCompression supported: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: #011kSnappyCompression supported: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: DMutex implementation: pthread_mutex_t Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c304520)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 
prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c304520)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: 
rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c304520)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c304520)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 
localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c304520)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: 
rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
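[Annotation] The leveled-compaction geometry printed above (max_bytes_for_level_base 1073741824, max_bytes_for_level_multiplier 8, every ..._addtl[i] = 1, num_levels 7, level_compaction_dynamic_level_bytes 0) implies static per-level size targets of target(Ln) = base * multiplier**(n-1). With target_file_size_base = 67108864 (64 MiB) and target_file_size_multiplier = 1, L1 holds roughly 16 files. A short sketch of that arithmetic:

```python
# Static level-size targets implied by the options above (standard
# leveled-compaction sizing; dynamic level bytes is disabled in this log).
base = 1_073_741_824        # max_bytes_for_level_base (1 GiB)
multiplier = 8.0            # max_bytes_for_level_multiplier
num_levels = 7              # Options.num_levels

for level in range(1, num_levels):
    target = base * multiplier ** (level - 1)
    print(f"L{level}: {target / 2**30:,.0f} GiB")
# L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4,096 GiB, L6: 32,768 GiB
```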
Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 
0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c304520)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 
02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost 
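[Annotation] Each column family dump repeats the same write-buffer settings: write_buffer_size 16777216 (16 MiB), max_write_buffer_number 64, min_write_buffer_number_to_merge 6. A rough budget sketch follows; the worst-case figure is simply the product of the two limits, which is an upper bound only — in practice Ceph applies its own global cache/memory limits on top of these per-CF options, which this sketch does not model:

```python
# Memtable budget implied by the per-column-family write-buffer options.
write_buffer_size = 16 * 2**20   # Options.write_buffer_size (16 MiB)
max_write_buffer_number = 64     # Options.max_write_buffer_number
min_merge = 6                    # Options.min_write_buffer_number_to_merge

flush_batch = write_buffer_size * min_merge
worst_case = write_buffer_size * max_write_buffer_number
print(f"flush merges >= {min_merge} memtables: ~{flush_batch / 2**20:.0f} MiB per flush")
print(f"per-CF worst case: {worst_case / 2**30:.0f} GiB of memtables")
# ~96 MiB merged per flush; up to 1 GiB of memtables per column family
```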
ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
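[Annotation] The stall thresholds repeated in these dumps are easier to read in human units: soft_pending_compaction_bytes_limit 68719476736 and hard_pending_compaction_bytes_limit 274877906944, alongside level0_slowdown_writes_trigger 20 and level0_stop_writes_trigger 36. Assuming the documented leveled-compaction behavior (writes are throttled at the soft limit / slowdown trigger and halted at the hard limit / stop trigger):

```python
# Unit conversion of the write-stall thresholds printed in the dump.
soft = 68_719_476_736        # soft_pending_compaction_bytes_limit
hard = 274_877_906_944       # hard_pending_compaction_bytes_limit

print(f"slowdown at {soft / 2**30:.0f} GiB pending compaction or 20 L0 files")
print(f"stop     at {hard / 2**30:.0f} GiB pending compaction or 36 L0 files")
# slowdown at 64 GiB or 20 L0 files; stop at 256 GiB or 36 L0 files
```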
Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c304520)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d82d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 
02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29d640)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d9610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 
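[Annotation] Note that the O-* column families above print a different block_cache pointer and capacity (0x55692b2d9610, 536870912 bytes) than the p-* and earlier families (0x55692b2d82d0, 483183820 bytes) — i.e. separate cache instances per CF group. 483183820 happens to equal int(0.9 * 536870912), which suggests a 90% carve-out of a 512 MiB budget; that is our inference, not something the log states. A quick comparison:

```python
# Compare the two block-cache capacities seen in this dump.
caches = {
    "p-* block_cache (0x55692b2d82d0)": 483_183_820,
    "O-* block_cache (0x55692b2d9610)": 536_870_912,
}
for name, cap in caches.items():
    print(f"{name}: {cap / 2**20:6.1f} MiB")
# 460.8 MiB vs 512.0 MiB
print(int(0.9 * 536_870_912))  # 483183820 -- matches the p-* capacity
```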
localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29d640)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d9610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 
localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.merge_operator: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55692c29d640)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55692b2d9610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression: LZ4 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.num_levels: 7 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:05 localhost 
ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_files: false Dec 5 
02:52:05 localhost ceph-osd[31386]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7772c3b2-e175-4a70-a6b4-6fb6cc901de0 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921125285705, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: EVENT_LOG_v1 
{"time_micros": 1764921125292154, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764921125, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7772c3b2-e175-4a70-a6b4-6fb6cc901de0", "db_session_id": "1DDGVPBEDB4H23QKN4U6", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921125295773, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764921125, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7772c3b2-e175-4a70-a6b4-6fb6cc901de0", "db_session_id": "1DDGVPBEDB4H23QKN4U6", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921125299779, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 
50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764921125, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7772c3b2-e175-4a70-a6b4-6fb6cc901de0", "db_session_id": "1DDGVPBEDB4H23QKN4U6", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921125303699, "job": 1, "event": "recovery_finished"} Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55692b39a700 Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: DB pointer 0x55692c1f1a00 Dec 5 02:52:05 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 5 02:52:05 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4 Dec 5 02:52:05 localhost ceph-osd[31386]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 02:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 
0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55692b2d82d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55692b2d82d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Bloc Dec 5 02:52:05 localhost ceph-osd[31386]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Dec 5 02:52:05 localhost ceph-osd[31386]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Dec 5 02:52:05 localhost ceph-osd[31386]: _get_class not permitted to load lua Dec 5 02:52:05 localhost ceph-osd[31386]: _get_class not permitted to load sdk Dec 5 02:52:05 localhost ceph-osd[31386]: _get_class not permitted to load test_remote_reads Dec 5 02:52:05 localhost ceph-osd[31386]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients Dec 5 02:52:05 localhost ceph-osd[31386]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Dec 5 02:52:05 localhost ceph-osd[31386]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds Dec 5 02:52:05 localhost ceph-osd[31386]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES 
compat feature Dec 5 02:52:05 localhost ceph-osd[31386]: osd.0 0 load_pgs Dec 5 02:52:05 localhost ceph-osd[31386]: osd.0 0 load_pgs opened 0 pgs Dec 5 02:52:05 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0[31382]: 2025-12-05T07:52:05.339+0000 7fe9744f0a80 -1 osd.0 0 log_to_monitors true Dec 5 02:52:05 localhost ceph-osd[31386]: osd.0 0 log_to_monitors true Dec 5 02:52:05 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate-test[31740]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Dec 5 02:52:05 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate-test[31740]: [--no-systemd] [--no-tmpfs] Dec 5 02:52:05 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate-test[31740]: ceph-volume activate: error: unrecognized arguments: --bad-option Dec 5 02:52:05 localhost systemd[1]: libpod-6b95ec4d0d7a0b9383b04922015ad22d91d8258bb0bc660471017e472b6606bd.scope: Deactivated successfully. Dec 5 02:52:05 localhost podman[31725]: 2025-12-05 07:52:05.482490241 +0000 UTC m=+0.459929216 container died 6b95ec4d0d7a0b9383b04922015ad22d91d8258bb0bc660471017e472b6606bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate-test, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, distribution-scope=public, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , architecture=x86_64, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
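The syslog prefixes above ("Dec 5 02:52:05") carry no year or time zone, while the daemon messages embed UTC stamps ("2025-12-05T07:52:05.339+0000"), a constant five-hour gap. A minimal sketch for correlating the two, assuming the host clock sits at UTC-5 and the year is 2025 (both inferred from the embedded stamps; neither is stated anywhere else in this log):

from datetime import datetime, timezone, timedelta

HOST_TZ = timezone(timedelta(hours=-5))  # assumed from the 02:52 vs 07:52Z gap
YEAR = 2025                              # assumed; syslog prefixes omit the year

def syslog_to_utc(stamp: str) -> datetime:
    # Parse a 'Dec 5 02:52:05'-style prefix and normalize it to UTC so it
    # can be matched against the podman/ceph timestamps inside the messages.
    local = datetime.strptime(f"{YEAR} {stamp}", "%Y %b %d %H:%M:%S")
    return local.replace(tzinfo=HOST_TZ).astimezone(timezone.utc)

print(syslog_to_utc("Dec 5 02:52:05"))  # -> 2025-12-05 07:52:05+00:00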
Dec 5 02:52:05 localhost podman[31960]: 2025-12-05 07:52:05.55610199 +0000 UTC m=+0.064931775 container remove 6b95ec4d0d7a0b9383b04922015ad22d91d8258bb0bc660471017e472b6606bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate-test, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True) Dec 5 02:52:05 localhost systemd[1]: libpod-conmon-6b95ec4d0d7a0b9383b04922015ad22d91d8258bb0bc660471017e472b6606bd.scope: Deactivated successfully. Dec 5 02:52:05 localhost systemd[1]: var-lib-containers-storage-overlay-535b0086faf255530dc1a7b7c1804d6a38fa36b53c109d96d54fcf53efe90557-merged.mount: Deactivated successfully. Dec 5 02:52:05 localhost systemd[1]: Reloading. Dec 5 02:52:05 localhost systemd-rc-local-generator[32012]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 02:52:05 localhost systemd-sysv-generator[32016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 02:52:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 02:52:06 localhost systemd[1]: Reloading. Dec 5 02:52:06 localhost systemd-rc-local-generator[32054]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 02:52:06 localhost systemd-sysv-generator[32058]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 02:52:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 02:52:06 localhost systemd[1]: Starting Ceph osd.3 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b... 
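systemd flags the same deprecation at both reloads above: insights-client-boot.service line 24 still uses MemoryLimit=, the cgroup-v1-era name for what is now MemoryMax=. A minimal audit sketch for finding any other units carrying the old directive, assuming the standard unit-file trees (adjust ROOTS if units are shipped elsewhere):

import pathlib

ROOTS = ["/etc/systemd/system", "/usr/lib/systemd/system"]

for root in ROOTS:
    for unit in pathlib.Path(root).rglob("*.service"):
        try:
            lines = unit.read_text(errors="replace").splitlines()
        except OSError:
            continue  # dangling symlink, unreadable unit, etc.
        for no, line in enumerate(lines, 1):
            if line.strip().startswith("MemoryLimit="):
                # Same value, new name: MemoryLimit=512M becomes MemoryMax=512M.
                print(f"{unit}:{no}: {line.strip()} -> use MemoryMax=")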
Dec 5 02:52:06 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Dec 5 02:52:06 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Dec 5 02:52:06 localhost podman[32122]: Dec 5 02:52:06 localhost podman[32122]: 2025-12-05 07:52:06.657512889 +0000 UTC m=+0.061635075 container create 447b0a3f1254c3c88b71eed1880439df69e3c4c727a06a17be7caa7a9853bab2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 5 02:52:06 localhost systemd[1]: Started libcrun container. Dec 5 02:52:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e8cfbceb7e42d09106230acd7d87e1800811bf7a41e07e37da84b2dd18b379/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e8cfbceb7e42d09106230acd7d87e1800811bf7a41e07e37da84b2dd18b379/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e8cfbceb7e42d09106230acd7d87e1800811bf7a41e07e37da84b2dd18b379/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e8cfbceb7e42d09106230acd7d87e1800811bf7a41e07e37da84b2dd18b379/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e8cfbceb7e42d09106230acd7d87e1800811bf7a41e07e37da84b2dd18b379/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:06 localhost podman[32122]: 2025-12-05 07:52:06.730562381 +0000 UTC m=+0.134684567 container init 447b0a3f1254c3c88b71eed1880439df69e3c4c727a06a17be7caa7a9853bab2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate, ceph=True, name=rhceph, vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red 
Hat Ceph Storage 7 on RHEL 9, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.) Dec 5 02:52:06 localhost podman[32122]: 2025-12-05 07:52:06.73610272 +0000 UTC m=+0.140224896 container start 447b0a3f1254c3c88b71eed1880439df69e3c4c727a06a17be7caa7a9853bab2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.expose-services=, GIT_BRANCH=main) Dec 5 02:52:06 localhost podman[32122]: 2025-12-05 07:52:06.636732244 +0000 UTC m=+0.040854430 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:52:06 localhost podman[32122]: 2025-12-05 07:52:06.736279755 +0000 UTC m=+0.140401941 container attach 447b0a3f1254c3c88b71eed1880439df69e3c4c727a06a17be7caa7a9853bab2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate, io.buildah.version=1.41.4, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, distribution-scope=public, 
vcs-type=git, build-date=2025-11-26T19:44:28Z) Dec 5 02:52:07 localhost ceph-osd[31386]: osd.0 0 done with init, starting boot process Dec 5 02:52:07 localhost ceph-osd[31386]: osd.0 0 start_boot Dec 5 02:52:07 localhost ceph-osd[31386]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1 Dec 5 02:52:07 localhost ceph-osd[31386]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Dec 5 02:52:07 localhost ceph-osd[31386]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Dec 5 02:52:07 localhost ceph-osd[31386]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Dec 5 02:52:07 localhost ceph-osd[31386]: osd.0 0 bench count 12288000 bsize 4 KiB Dec 5 02:52:07 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate[32137]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Dec 5 02:52:07 localhost bash[32122]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Dec 5 02:52:07 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate[32137]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-3 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Dec 5 02:52:07 localhost bash[32122]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-3 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Dec 5 02:52:07 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate[32137]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Dec 5 02:52:07 localhost bash[32122]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Dec 5 02:52:07 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate[32137]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Dec 5 02:52:07 localhost bash[32122]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Dec 5 02:52:07 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate[32137]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-3/block Dec 5 02:52:07 localhost bash[32122]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-3/block Dec 5 02:52:07 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate[32137]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Dec 5 02:52:07 localhost bash[32122]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Dec 5 02:52:07 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate[32137]: --> ceph-volume raw activate successful for osd ID: 3 Dec 5 02:52:07 localhost bash[32122]: --> ceph-volume raw activate successful for osd ID: 3 Dec 5 02:52:07 localhost systemd[1]: libpod-447b0a3f1254c3c88b71eed1880439df69e3c4c727a06a17be7caa7a9853bab2.scope: Deactivated successfully. 
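The multi-line rocksdb dumps in this log (the option listings and DUMPING STATS blocks above, and the table_factory options that follow) are single journal entries whose embedded newlines and tabs rsyslog has escaped to #012 and #011, the octal codes for \n and \t. A minimal sketch to recover the original layout, assuming no literal "#nnn" sequences occur in the payload itself:

import re

def unescape_rsyslog(msg: str) -> str:
    # "#012" -> "\n", "#011" -> "\t": rsyslog's control-character escaping
    # replaces each control byte with '#' plus its three-digit octal code.
    return re.sub(r"#([0-3][0-7]{2})", lambda m: chr(int(m.group(1), 8)), msg)

sample = "#012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval"
print(unescape_rsyslog(sample))
# ** DB Stats **
# Uptime(secs): 0.1 total, 0.1 interval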
Dec 5 02:52:07 localhost podman[32259]: 2025-12-05 07:52:07.486920105 +0000 UTC m=+0.060038686 container died 447b0a3f1254c3c88b71eed1880439df69e3c4c727a06a17be7caa7a9853bab2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_BRANCH=main) Dec 5 02:52:07 localhost systemd[1]: tmp-crun.lakqlE.mount: Deactivated successfully. Dec 5 02:52:07 localhost systemd[1]: var-lib-containers-storage-overlay-26e8cfbceb7e42d09106230acd7d87e1800811bf7a41e07e37da84b2dd18b379-merged.mount: Deactivated successfully. Dec 5 02:52:07 localhost podman[32259]: 2025-12-05 07:52:07.563151634 +0000 UTC m=+0.136270185 container remove 447b0a3f1254c3c88b71eed1880439df69e3c4c727a06a17be7caa7a9853bab2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3-activate, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , ceph=True, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, release=1763362218, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 02:52:07 localhost podman[32318]: Dec 5 02:52:07 localhost podman[32318]: 2025-12-05 07:52:07.865160813 +0000 UTC m=+0.066331537 container create 831eca62f8d3db72ff12459d4820b1a2da0a52f081f886c0e06149ddc3936dd1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3, release=1763362218, version=7, ceph=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, vcs-type=git, CEPH_POINT_RELEASE=) Dec 5 02:52:07 localhost systemd[1]: tmp-crun.NsGgJg.mount: Deactivated successfully. Dec 5 02:52:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc63532dee49a0738e3595a714b884ed9d44e8fd1afb762d31769b2d863f41d3/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc63532dee49a0738e3595a714b884ed9d44e8fd1afb762d31769b2d863f41d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:07 localhost podman[32318]: 2025-12-05 07:52:07.825995916 +0000 UTC m=+0.027166620 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:52:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc63532dee49a0738e3595a714b884ed9d44e8fd1afb762d31769b2d863f41d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc63532dee49a0738e3595a714b884ed9d44e8fd1afb762d31769b2d863f41d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc63532dee49a0738e3595a714b884ed9d44e8fd1afb762d31769b2d863f41d3/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:07 localhost podman[32318]: 2025-12-05 07:52:07.974935028 +0000 UTC m=+0.176105712 container init 831eca62f8d3db72ff12459d4820b1a2da0a52f081f886c0e06149ddc3936dd1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_CLEAN=True, io.openshift.expose-services=, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, 
maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True) Dec 5 02:52:07 localhost podman[32318]: 2025-12-05 07:52:07.994890448 +0000 UTC m=+0.196061132 container start 831eca62f8d3db72ff12459d4820b1a2da0a52f081f886c0e06149ddc3936dd1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , name=rhceph, release=1763362218, version=7, io.buildah.version=1.41.4, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main) Dec 5 02:52:07 localhost bash[32318]: 831eca62f8d3db72ff12459d4820b1a2da0a52f081f886c0e06149ddc3936dd1 Dec 5 02:52:08 localhost systemd[1]: Started Ceph osd.3 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b. Dec 5 02:52:08 localhost ceph-osd[32336]: set uid:gid to 167:167 (ceph:ceph) Dec 5 02:52:08 localhost ceph-osd[32336]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2 Dec 5 02:52:08 localhost ceph-osd[32336]: pidfile_write: ignore empty --pid-file Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 5 02:52:08 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763453180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763453180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763453180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 5 02:52:08 localhost ceph-osd[32336]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763453180 /var/lib/ceph/osd/ceph-3/block) close Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) close Dec 5 02:52:08 localhost ceph-osd[32336]: starting osd.3 
osd_data /var/lib/ceph/osd/ceph-3 /var/lib/ceph/osd/ceph-3/journal Dec 5 02:52:08 localhost ceph-osd[32336]: load: jerasure load: lrc Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 5 02:52:08 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) close Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 5 02:52:08 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) close Dec 5 02:52:08 localhost ceph-osd[32336]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Dec 5 02:52:08 localhost ceph-osd[32336]: osd.3:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763452e00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 5 02:52:08 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763453180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763453180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763453180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 5 02:52:08 localhost ceph-osd[32336]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB Dec 5 02:52:08 localhost ceph-osd[32336]: bluefs mount Dec 5 02:52:08 localhost ceph-osd[32336]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Dec 5 02:52:08 localhost ceph-osd[32336]: bluefs mount shared_bdev_used = 0 Dec 5 02:52:08 localhost 
ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: RocksDB version: 7.9.2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Git sha 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: DB SUMMARY Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: DB Session ID: 0I9XHW9FH9W8VRW8PCZB Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: CURRENT file: CURRENT Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: IDENTITY file: IDENTITY Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.error_if_exists: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.create_if_missing: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_checks: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.env: 0x55c7636e6cb0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.fs: LegacyFileSystem Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.info_log: 0x55c7643f0d00 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_file_opening_threads: 16 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.statistics: (nil) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.use_fsync: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_log_file_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.log_file_time_to_roll: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.keep_log_file_num: 1000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.recycle_log_file_num: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.allow_fallocate: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.allow_mmap_reads: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.allow_mmap_writes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.use_direct_reads: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.create_missing_column_families: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.db_log_dir: Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.wal_dir: db.wal Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_cache_numshardbits: 6 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.manifest_preallocation_size: 4194304
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.is_fd_close_on_exec: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.advise_random_on_open: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.db_write_buffer_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_manager: 0x55c76343c140
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.access_hint_on_compaction_start: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.random_access_max_buffer_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.use_adaptive_mutex: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.rate_limiter: (nil)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.wal_recovery_mode: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_thread_tracking: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_pipelined_write: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.unordered_write: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.allow_concurrent_memtable_write: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_thread_max_yield_usec: 100
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_thread_slow_yield_usec: 3
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.row_cache: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.wal_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.avoid_flush_during_recovery: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.allow_ingest_behind: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.two_write_queues: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.manual_wal_flush: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.wal_compression: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.atomic_flush: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.persist_stats_to_disk: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_dbid_to_manifest: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.log_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.file_checksum_gen_factory: Unknown
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.best_efforts_recovery: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.allow_data_in_errors: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.db_host_id: __hostname__
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enforce_single_del_contracts: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_background_jobs: 4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_background_compactions: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_subcompactions: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.avoid_flush_during_shutdown: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.writable_file_max_buffer_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.delayed_write_rate : 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_total_wal_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.stats_dump_period_sec: 600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.stats_persist_period_sec: 600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.stats_history_buffer_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_open_files: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bytes_per_sync: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.wal_bytes_per_sync: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.strict_bytes_per_sync: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_readahead_size: 2097152
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_background_flushes: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Compression algorithms supported:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kZSTD supported: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kXpressCompression supported: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kBZip2Compression supported: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kLZ4Compression supported: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kZlibCompression supported: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kLZ4HCCompression supported: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kSnappyCompression supported: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7643f0ec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7643f0ec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7643f0ec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7643f0ec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7643f0ec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7643f0ec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7643f0ec0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base:
67108864 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 
02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7643f10e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 
metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.target_file_size_multiplier: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.bloom_locality: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7643f10e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 
use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:08 localhost 
ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.max_successive_merges: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7643f10e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 
verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:08 
localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.optimize_filters_for_hits: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 
9), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b3cdf2c8-3494-44d0-846b-3c392dac6fb7 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921128636806, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921128640302, "job": 1, "event": "recovery_finished"} Dec 5 02:52:08 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 5 02:52:08 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta old nid_max 1025 Dec 5 02:52:08 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta old blobid_max 10240 Dec 5 02:52:08 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Dec 5 02:52:08 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta min_alloc_size 0x1000 Dec 5 02:52:08 localhost ceph-osd[32336]: freelist init Dec 5 02:52:08 localhost ceph-osd[32336]: freelist _read_cfg Dec 5 02:52:08 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Dec 5 02:52:08 localhost ceph-osd[32336]: bluefs umount Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763453180 /var/lib/ceph/osd/ceph-3/block) close Dec 5 02:52:08 localhost podman[32618]: Dec 5 02:52:08 localhost podman[32618]: 2025-12-05 07:52:08.800425704 +0000 UTC m=+0.073727043 container create 801335b156657e1c4ea9222010e01d7a2df580175f9be0a30676b5ed459a413c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_payne, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, RELEASE=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, CEPH_POINT_RELEASE=, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red 
Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, name=rhceph) Dec 5 02:52:08 localhost systemd[1]: Started libpod-conmon-801335b156657e1c4ea9222010e01d7a2df580175f9be0a30676b5ed459a413c.scope. Dec 5 02:52:08 localhost systemd[1]: Started libcrun container. Dec 5 02:52:08 localhost podman[32618]: 2025-12-05 07:52:08.861651526 +0000 UTC m=+0.134952895 container init 801335b156657e1c4ea9222010e01d7a2df580175f9be0a30676b5ed459a413c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_payne, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, distribution-scope=public, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , name=rhceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7) Dec 5 02:52:08 localhost podman[32618]: 2025-12-05 07:52:08.762456665 +0000 UTC m=+0.035758034 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:52:08 localhost relaxed_payne[32633]: 167 167 Dec 5 02:52:08 localhost systemd[1]: libpod-801335b156657e1c4ea9222010e01d7a2df580175f9be0a30676b5ed459a413c.scope: Deactivated successfully. 
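The runs above show the OSD's BlueStore backend opening its embedded RocksDB, recovering all twelve column families (default, m-0..m-2, p-0..p-2, O-0..O-2, L, P) from MANIFEST-000032, then unmounting bluefs before the re-open that follows. As a minimal sketch of what such an open looks like through the public RocksDB API — illustrative only, not Ceph's actual code path (Ceph drives this through its RocksDBStore wrapper); the database path is a placeholder, and the real default column family also carries the merge operator shown in the log, omitted here:

```cpp
// Minimal illustration of opening a RocksDB that already contains named
// column families, as in the manifest-recovery messages above. Not Ceph's
// actual code; the path "/tmp/example-db" is a placeholder.
#include <cstdio>
#include <string>
#include <vector>
#include <rocksdb/db.h>
#include <rocksdb/options.h>

int main() {
  rocksdb::DBOptions db_opts;
  db_opts.create_if_missing = false;            // Options.create_if_missing: 0
  db_opts.max_background_jobs = 4;              // Options.max_background_jobs: 4
  db_opts.max_total_wal_size = 1ULL << 30;      // Options.max_total_wal_size: 1073741824
  db_opts.compaction_readahead_size = 2 << 20;  // Options.compaction_readahead_size: 2097152

  // Every column family recorded in the MANIFEST must be named at open time.
  const std::vector<std::string> names = {"default", "m-0", "m-1", "m-2",
                                          "p-0", "p-1", "p-2",
                                          "O-0", "O-1", "O-2", "L", "P"};
  std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
  for (const auto& n : names)
    cfs.emplace_back(n, rocksdb::ColumnFamilyOptions());

  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(db_opts, "/tmp/example-db", cfs, &handles, &db);
  if (!s.ok()) {
    std::fprintf(stderr, "open failed: %s\n", s.ToString().c_str());
    return 1;
  }
  for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
  delete db;
  return 0;
}
```

The second open in the log follows the same recovery path: RocksDB replays WAL 000031 (the EVENT_LOG_v1 "recovery_started"/"recovery_finished" pair) before handing the database back to BlueStore.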
Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763453180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763453180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 5 02:52:08 localhost ceph-osd[32336]: bdev(0x55c763453180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 5 02:52:08 localhost ceph-osd[32336]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB
Dec 5 02:52:08 localhost ceph-osd[32336]: bluefs mount
Dec 5 02:52:08 localhost ceph-osd[32336]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 5 02:52:08 localhost ceph-osd[32336]: bluefs mount shared_bdev_used = 4718592
Dec 5 02:52:08 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: RocksDB version: 7.9.2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Git sha 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: DB SUMMARY
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: DB Session ID: 0I9XHW9FH9W8VRW8PCZA
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: CURRENT file: CURRENT
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: IDENTITY file: IDENTITY
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.error_if_exists: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.create_if_missing: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_checks: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.flush_verify_memtable_count: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.env: 0x55c763496690
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.fs: LegacyFileSystem
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.info_log: 0x55c7634f28a0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_file_opening_threads: 16
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.statistics: (nil)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.use_fsync: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_log_file_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_manifest_file_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.log_file_time_to_roll: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.keep_log_file_num: 1000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.recycle_log_file_num: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.allow_fallocate: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.allow_mmap_reads: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.allow_mmap_writes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.use_direct_reads: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.create_missing_column_families: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.db_log_dir:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.wal_dir: db.wal
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_cache_numshardbits: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.WAL_ttl_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.WAL_size_limit_MB: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.manifest_preallocation_size: 4194304
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.is_fd_close_on_exec: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.advise_random_on_open: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.db_write_buffer_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_manager: 0x55c76343d5e0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.access_hint_on_compaction_start: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.random_access_max_buffer_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.use_adaptive_mutex: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.rate_limiter: (nil)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.wal_recovery_mode: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_thread_tracking: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_pipelined_write: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.unordered_write: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.allow_concurrent_memtable_write: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_thread_max_yield_usec: 100
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_thread_slow_yield_usec: 3
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.row_cache: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.wal_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.avoid_flush_during_recovery: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.allow_ingest_behind: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.two_write_queues: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.manual_wal_flush: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.wal_compression: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.atomic_flush: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.persist_stats_to_disk: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_dbid_to_manifest: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.log_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.file_checksum_gen_factory: Unknown
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.best_efforts_recovery: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.allow_data_in_errors: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.db_host_id: __hostname__
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enforce_single_del_contracts: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_background_jobs: 4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_background_compactions: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_subcompactions: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.avoid_flush_during_shutdown: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.writable_file_max_buffer_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.delayed_write_rate : 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_total_wal_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.stats_dump_period_sec: 600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.stats_persist_period_sec: 600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.stats_history_buffer_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_open_files: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bytes_per_sync: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.wal_bytes_per_sync: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.strict_bytes_per_sync: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_readahead_size: 2097152
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_background_flushes: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Compression algorithms supported:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kZSTD supported: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kXpressCompression supported: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kBZip2Compression supported: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kLZ4Compression supported: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kZlibCompression supported: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kLZ4HCCompression supported: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: #011kSnappyCompression supported: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7634f2940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7634f2940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7634f2940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7634f2940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost podman[32618]: 2025-12-05 07:52:08.890157377 +0000 UTC m=+0.163458746 container start 801335b156657e1c4ea9222010e01d7a2df580175f9be0a30676b5ed459a413c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_payne, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=1763362218, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z)
Dec 5 02:52:08 localhost podman[32618]: 2025-12-05 07:52:08.890544068 +0000 UTC m=+0.163845467 container attach 801335b156657e1c4ea9222010e01d7a2df580175f9be0a30676b5ed459a413c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_payne, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, ceph=True, com.redhat.component=rhceph-container, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph)
Dec 5 02:52:08 localhost podman[32618]: 2025-12-05 07:52:08.893652553 +0000 UTC m=+0.166953902 container died 801335b156657e1c4ea9222010e01d7a2df580175f9be0a30676b5ed459a413c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_payne, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, vcs-type=git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, ceph=True, io.buildah.version=1.41.4, name=rhceph)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7634f2940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7634f2940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb:
Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: 
leveldb.BytewiseComparator Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7634f2940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: 
leveldb.BytewiseComparator Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7634f3920)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: 
leveldb.BytewiseComparator Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7634f3920)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.comparator: 
leveldb.BytewiseComparator Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.merge_operator: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_filter_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.sst_partitioner_factory: None Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c7634f3920)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55c76342b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.write_buffer_size: 16777216 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number: 64 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression: LZ4 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression: Disabled Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.num_levels: 7 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.level: 32767 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.enabled: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.arena_block_size: 1048576 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: 
Options.compaction_options_universal.min_merge_width: 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_support: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.bloom_locality: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.max_successive_merges: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.force_consistency_checks: 1 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.ttl: 2592000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_files: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.min_blob_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_size: 268435456 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:578] Failed to register data paths of 
column family (id: 11, name: P) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b3cdf2c8-3494-44d0-846b-3c392dac6fb7 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921128905524, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921128926237, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764921128, "oldest_key_time": 0, 
"file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b3cdf2c8-3494-44d0-846b-3c392dac6fb7", "db_session_id": "0I9XHW9FH9W8VRW8PCZA", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921128932755, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 466, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764921128, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b3cdf2c8-3494-44d0-846b-3c392dac6fb7", "db_session_id": "0I9XHW9FH9W8VRW8PCZA", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Dec 5 02:52:08 localhost systemd[1]: var-lib-containers-storage-overlay-09331c5a12eedc78aefb93e2cd11eddb4afaeb011818f0941e8543fb003184c0-merged.mount: Deactivated successfully. 
Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921128953013, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764921128, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b3cdf2c8-3494-44d0-846b-3c392dac6fb7", "db_session_id": "0I9XHW9FH9W8VRW8PCZA", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764921128963799, "job": 1, "event": "recovery_finished"} Dec 5 02:52:08 localhost ceph-osd[32336]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Dec 5 02:52:08 localhost podman[32639]: 2025-12-05 07:52:08.986528792 +0000 UTC m=+0.098011786 container remove 801335b156657e1c4ea9222010e01d7a2df580175f9be0a30676b5ed459a413c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_payne, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_CLEAN=True, version=7, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux , RELEASE=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 5 02:52:08 localhost systemd[1]: libpod-conmon-801335b156657e1c4ea9222010e01d7a2df580175f9be0a30676b5ed459a413c.scope: Deactivated successfully. 
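[annotation] The multi-line RocksDB output that follows (the "DUMPING STATS" blocks) arrives as single syslog records because rsyslog escapes control characters as `#ooo` octal sequences: `#011` is a tab and `#012` a newline. A small decoder restores the original layout; this is a sketch over that one assumption about the escaping scheme:

```python
import re

# rsyslog escapes control characters as #ooo (octal), so a multi-line
# RocksDB stats dump becomes one record with #011 (tab) / #012 (newline).
_OCT = re.compile(r"#([0-7]{3})")

def unescape_syslog(text: str) -> str:
    return _OCT.sub(lambda m: chr(int(m.group(1), 8)), text)

if __name__ == "__main__":
    sample = "** DB Stats **#012Uptime(secs): 0.2 total#011interval 0.2"
    print(unescape_syslog(sample))
```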
Dec 5 02:52:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c7634ec700 Dec 5 02:52:09 localhost ceph-osd[32336]: rocksdb: DB pointer 0x55c764345a00 Dec 5 02:52:09 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 5 02:52:09 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _upgrade_super from 4, latest 4 Dec 5 02:52:09 localhost ceph-osd[32336]: bluestore(/var/lib/ceph/osd/ceph-3) _upgrade_super done Dec 5 02:52:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 02:52:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 
stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c76342a2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c76342a2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) 
KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c76342a2d0#2 capacity: 460.80 MB usage: 0 Dec 5 02:52:09 localhost ceph-osd[32336]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Dec 5 02:52:09 localhost ceph-osd[32336]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Dec 5 02:52:09 localhost ceph-osd[32336]: _get_class not permitted to load lua Dec 5 02:52:09 localhost ceph-osd[32336]: _get_class not permitted to load sdk Dec 5 02:52:09 localhost ceph-osd[32336]: _get_class not permitted to load test_remote_reads Dec 5 02:52:09 localhost ceph-osd[32336]: osd.3 0 crush map has features 288232575208783872, adjusting msgr requires for clients Dec 5 02:52:09 localhost ceph-osd[32336]: osd.3 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Dec 5 02:52:09 localhost ceph-osd[32336]: osd.3 0 crush map has features 288232575208783872, adjusting msgr requires for osds Dec 5 02:52:09 localhost ceph-osd[32336]: osd.3 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Dec 5 02:52:09 localhost ceph-osd[32336]: osd.3 0 load_pgs Dec 5 02:52:09 localhost ceph-osd[32336]: osd.3 0 load_pgs opened 0 pgs Dec 5 02:52:09 localhost ceph-osd[32336]: osd.3 0 log_to_monitors true Dec 5 02:52:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3[32332]: 2025-12-05T07:52:09.061+0000 7f24f4b95a80 -1 osd.3 0 log_to_monitors true Dec 5 02:52:09 localhost podman[32839]: Dec 5 02:52:09 localhost podman[32839]: 2025-12-05 07:52:09.1428895 +0000 UTC m=+0.079994175 container create 0399813617b1bb38149320bd1b3ab02b61a055194c5d997c0739822083d07897 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_engelbart, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.buildah.version=1.41.4, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, version=7, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container) Dec 5 02:52:09 localhost systemd[1]: Started libpod-conmon-0399813617b1bb38149320bd1b3ab02b61a055194c5d997c0739822083d07897.scope. Dec 5 02:52:09 localhost systemd[1]: Started libcrun container. Dec 5 02:52:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aecded62f2dfc2fe0de7f1d5524353c78b5f1c113fbbe9158b7602b378e1dd3f/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:09 localhost podman[32839]: 2025-12-05 07:52:09.108956243 +0000 UTC m=+0.046060918 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:52:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aecded62f2dfc2fe0de7f1d5524353c78b5f1c113fbbe9158b7602b378e1dd3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aecded62f2dfc2fe0de7f1d5524353c78b5f1c113fbbe9158b7602b378e1dd3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:09 localhost podman[32839]: 2025-12-05 07:52:09.247382874 +0000 UTC m=+0.184487509 container init 0399813617b1bb38149320bd1b3ab02b61a055194c5d997c0739822083d07897 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_engelbart, io.openshift.expose-services=, io.buildah.version=1.41.4, version=7, CEPH_POINT_RELEASE=, release=1763362218, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux , name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 5 02:52:09 localhost podman[32839]: 2025-12-05 07:52:09.27935186 +0000 UTC m=+0.216456515 container start 0399813617b1bb38149320bd1b3ab02b61a055194c5d997c0739822083d07897 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_engelbart, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base 
image., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, vcs-type=git) Dec 5 02:52:09 localhost podman[32839]: 2025-12-05 07:52:09.27967715 +0000 UTC m=+0.216781875 container attach 0399813617b1bb38149320bd1b3ab02b61a055194c5d997c0739822083d07897 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_engelbart, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 02:52:09 localhost optimistic_engelbart[32887]: { Dec 5 02:52:09 localhost optimistic_engelbart[32887]: "19ea0b05-8172-42a5-84ef-029f92f127e6": { Dec 5 02:52:09 localhost optimistic_engelbart[32887]: "ceph_fsid": "79feddb1-4bfc-557f-83b9-0d57c9f66c1b", Dec 5 02:52:09 localhost optimistic_engelbart[32887]: "device": "/dev/mapper/ceph_vg1-ceph_lv1", Dec 5 02:52:09 localhost optimistic_engelbart[32887]: "osd_id": 3, Dec 5 02:52:09 localhost optimistic_engelbart[32887]: "osd_uuid": "19ea0b05-8172-42a5-84ef-029f92f127e6", Dec 5 02:52:09 localhost optimistic_engelbart[32887]: "type": "bluestore" Dec 5 02:52:09 localhost optimistic_engelbart[32887]: }, Dec 5 02:52:09 localhost optimistic_engelbart[32887]: "5bd712dc-95e5-4c48-be82-844b93762be9": { Dec 5 02:52:09 localhost optimistic_engelbart[32887]: "ceph_fsid": "79feddb1-4bfc-557f-83b9-0d57c9f66c1b", Dec 5 02:52:09 localhost optimistic_engelbart[32887]: "device": "/dev/mapper/ceph_vg0-ceph_lv0", Dec 5 02:52:09 localhost optimistic_engelbart[32887]: "osd_id": 0, Dec 5 02:52:09 localhost optimistic_engelbart[32887]: "osd_uuid": "5bd712dc-95e5-4c48-be82-844b93762be9", Dec 5 02:52:09 localhost optimistic_engelbart[32887]: "type": "bluestore" Dec 5 02:52:09 localhost optimistic_engelbart[32887]: } Dec 5 02:52:09 localhost optimistic_engelbart[32887]: } Dec 5 02:52:09 localhost systemd[1]: libpod-0399813617b1bb38149320bd1b3ab02b61a055194c5d997c0739822083d07897.scope: Deactivated successfully. 
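[annotation] The JSON printed by the optimistic_engelbart container maps each OSD uuid to its cluster fsid, logical-volume device, and OSD id — the shape of `ceph-volume raw list` output. A sketch under that assumption (and assuming ceph-volume is on PATH where it runs, e.g. inside the OSD container); the helper name is illustrative:

```python
import json
import subprocess

def osd_device_map(raw_json: str):
    # raw list JSON: dict keyed by osd_uuid, each entry carrying
    # ceph_fsid / device / osd_id / type, as in the output above
    data = json.loads(raw_json)
    return {entry["osd_id"]: entry["device"] for entry in data.values()}

if __name__ == "__main__":
    out = subprocess.run(["ceph-volume", "raw", "list"],  # assumed JSON output
                         capture_output=True, text=True, check=True).stdout
    for osd_id, dev in sorted(osd_device_map(out).items()):
        print(f"osd.{osd_id} -> {dev}")
```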
Dec 5 02:52:09 localhost podman[32839]: 2025-12-05 07:52:09.80586174 +0000 UTC m=+0.742966405 container died 0399813617b1bb38149320bd1b3ab02b61a055194c5d997c0739822083d07897 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_engelbart, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, release=1763362218, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git) Dec 5 02:52:09 localhost systemd[1]: var-lib-containers-storage-overlay-aecded62f2dfc2fe0de7f1d5524353c78b5f1c113fbbe9158b7602b378e1dd3f-merged.mount: Deactivated successfully. Dec 5 02:52:09 localhost podman[32923]: 2025-12-05 07:52:09.930889261 +0000 UTC m=+0.112717456 container remove 0399813617b1bb38149320bd1b3ab02b61a055194c5d997c0739822083d07897 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_engelbart, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, version=7, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 5 02:52:09 localhost systemd[1]: libpod-conmon-0399813617b1bb38149320bd1b3ab02b61a055194c5d997c0739822083d07897.scope: Deactivated successfully. 
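[annotation] Each short-lived cephadm helper leaves the same trail in the journal: container create, init, start, attach, died, remove, plus conmon scope teardown. The same lifecycle can be streamed live with `podman events`; a sketch, noting that the JSON field casing varies across podman versions, so both spellings are tried:

```python
import json
import subprocess

# Follow lifecycle events for the rhceph image seen above.
proc = subprocess.Popen(
    ["podman", "events", "--format", "json",
     "--filter", "image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest"],
    stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    ev = json.loads(line)
    status = ev.get("Status") or ev.get("status")  # casing varies by version
    name = ev.get("Name") or ev.get("name")
    print(ev.get("Time") or ev.get("time"), status, name)
```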
Dec 5 02:52:10 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Dec 5 02:52:10 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Dec 5 02:52:10 localhost ceph-osd[32336]: osd.3 0 done with init, starting boot process Dec 5 02:52:10 localhost ceph-osd[32336]: osd.3 0 start_boot Dec 5 02:52:10 localhost ceph-osd[32336]: osd.3 0 maybe_override_options_for_qos osd_max_backfills set to 1 Dec 5 02:52:10 localhost ceph-osd[32336]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Dec 5 02:52:10 localhost ceph-osd[32336]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Dec 5 02:52:10 localhost ceph-osd[32336]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Dec 5 02:52:10 localhost ceph-osd[32336]: osd.3 0 bench count 12288000 bsize 4 KiB Dec 5 02:52:11 localhost ceph-osd[31386]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 9.884 iops: 2530.238 elapsed_sec: 1.186 Dec 5 02:52:11 localhost ceph-osd[31386]: log_channel(cluster) log [WRN] : OSD bench result of 2530.238411 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. Dec 5 02:52:11 localhost ceph-osd[31386]: osd.0 0 waiting for initial osdmap Dec 5 02:52:11 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0[31382]: 2025-12-05T07:52:11.425+0000 7fe97046f640 -1 osd.0 0 waiting for initial osdmap Dec 5 02:52:11 localhost ceph-osd[31386]: osd.0 13 crush map has features 288514050185494528, adjusting msgr requires for clients Dec 5 02:52:11 localhost ceph-osd[31386]: osd.0 13 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons Dec 5 02:52:11 localhost ceph-osd[31386]: osd.0 13 crush map has features 3314932999778484224, adjusting msgr requires for osds Dec 5 02:52:11 localhost ceph-osd[31386]: osd.0 13 check_osdmap_features require_osd_release unknown -> reef Dec 5 02:52:11 localhost ceph-osd[31386]: osd.0 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Dec 5 02:52:11 localhost ceph-osd[31386]: osd.0 13 set_numa_affinity not setting numa affinity Dec 5 02:52:11 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-0[31382]: 2025-12-05T07:52:11.492+0000 7fe96ba99640 -1 osd.0 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Dec 5 02:52:11 localhost ceph-osd[31386]: osd.0 13 _collect_metadata loop3: no unique device id for loop3: fallback method has no model nor serial Dec 5 02:52:12 localhost ceph-osd[31386]: osd.0 13 tick checking mon for new map Dec 5 02:52:12 localhost ceph-osd[31386]: osd.0 15 state: booting -> active Dec 5 02:52:13 localhost systemd[1]: tmp-crun.cl9oFy.mount: Deactivated successfully. Dec 5 02:52:13 localhost systemd[26052]: Starting Mark boot as successful... 
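[annotation] The `maybe_override_options_for_qos` lines above show the mClock scheduler pinning backfill/recovery limits at startup. The effective values can be read back from the daemon's admin socket with the standard `ceph daemon <name> config get <option>` query; a sketch, assuming it runs where the osd.3 asok is reachable (e.g. inside the OSD container or via cephadm shell):

```python
import json
import subprocess

OPTS = ["osd_max_backfills", "osd_recovery_max_active",
        "osd_recovery_max_active_hdd", "osd_recovery_max_active_ssd"]

for opt in OPTS:
    # admin-socket query returns a one-key JSON dict, e.g. {"osd_max_backfills": "1"}
    out = subprocess.run(["ceph", "daemon", "osd.3", "config", "get", opt],
                         capture_output=True, text=True, check=True).stdout
    print(opt, "=", json.loads(out)[opt])
```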
Dec 5 02:52:13 localhost podman[33052]: 2025-12-05 07:52:13.522487128 +0000 UTC m=+0.094398266 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , RELEASE=main, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, version=7) Dec 5 02:52:13 localhost systemd[26052]: Finished Mark boot as successful. Dec 5 02:52:13 localhost podman[33052]: 2025-12-05 07:52:13.643056503 +0000 UTC m=+0.214967711 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, name=rhceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, build-date=2025-11-26T19:44:28Z, RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True) Dec 5 02:52:14 localhost ceph-osd[32336]: osd.3 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.249 iops: 5951.837 elapsed_sec: 0.504 Dec 5 02:52:14 localhost ceph-osd[32336]: log_channel(cluster) log [WRN] : OSD bench result of 5951.836867 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
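[annotation] The WRN lines above and below report that the startup bench measured IOPS outside the accepted 50-500 window, so `osd_mclock_max_capacity_iops_*` stayed at the 315 default, and they recommend benchmarking externally (e.g. with fio) and overriding the option. A sketch that extracts the numbers from such a line and prints the override command the message suggests; the measured value is a placeholder until confirmed by a real benchmark:

```python
import re

PAT = re.compile(r"OSD bench result of ([\d.]+) IOPS .* for (osd\.\d+)")

def suggest(line: str):
    m = PAT.search(line)
    if m:
        iops, osd = float(m.group(1)), m.group(2)
        # check the device class first (ceph osd tree): _hdd vs _ssd variant
        print(f"ceph config set {osd} "
              f"osd_mclock_max_capacity_iops_ssd {iops:.0f}")

suggest("OSD bench result of 5951.836867 IOPS is not within the threshold "
        "limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3.")
```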
Dec 5 02:52:14 localhost ceph-osd[32336]: osd.3 0 waiting for initial osdmap Dec 5 02:52:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3[32332]: 2025-12-05T07:52:14.306+0000 7f24f1329640 -1 osd.3 0 waiting for initial osdmap Dec 5 02:52:14 localhost ceph-osd[32336]: osd.3 16 crush map has features 288514050185494528, adjusting msgr requires for clients Dec 5 02:52:14 localhost ceph-osd[32336]: osd.3 16 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons Dec 5 02:52:14 localhost ceph-osd[32336]: osd.3 16 crush map has features 3314932999778484224, adjusting msgr requires for osds Dec 5 02:52:14 localhost ceph-osd[32336]: osd.3 16 check_osdmap_features require_osd_release unknown -> reef Dec 5 02:52:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-osd-3[32332]: 2025-12-05T07:52:14.365+0000 7f24ec13e640 -1 osd.3 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Dec 5 02:52:14 localhost ceph-osd[32336]: osd.3 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Dec 5 02:52:14 localhost ceph-osd[32336]: osd.3 16 set_numa_affinity not setting numa affinity Dec 5 02:52:14 localhost ceph-osd[32336]: osd.3 16 _collect_metadata loop4: no unique device id for loop4: fallback method has no model nor serial Dec 5 02:52:15 localhost ceph-osd[32336]: osd.3 17 state: booting -> active Dec 5 02:52:15 localhost ceph-osd[32336]: osd.3 17 crush map has features 288514051259236352, adjusting msgr requires for clients Dec 5 02:52:15 localhost ceph-osd[32336]: osd.3 17 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons Dec 5 02:52:15 localhost ceph-osd[32336]: osd.3 17 crush map has features 3314933000852226048, adjusting msgr requires for osds Dec 5 02:52:15 localhost ceph-osd[31386]: osd.0 17 crush map has features 288514051259236352, adjusting msgr requires for clients Dec 5 02:52:15 localhost ceph-osd[31386]: osd.0 17 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons Dec 5 02:52:15 localhost ceph-osd[31386]: osd.0 17 crush map has features 3314933000852226048, adjusting msgr requires for osds Dec 5 02:52:15 localhost podman[33250]: Dec 5 02:52:15 localhost podman[33250]: 2025-12-05 07:52:15.497262076 +0000 UTC m=+0.073604871 container create 35367b13a7915fbdb2ec65c2594853123986c74950fb2866e8a5ada550de8fb0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_germain, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, distribution-scope=public, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , release=1763362218, io.openshift.expose-services=, RELEASE=main, version=7, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7) Dec 5 02:52:15 localhost systemd[1]: Started libpod-conmon-35367b13a7915fbdb2ec65c2594853123986c74950fb2866e8a5ada550de8fb0.scope. Dec 5 02:52:15 localhost systemd[1]: Started libcrun container. Dec 5 02:52:15 localhost podman[33250]: 2025-12-05 07:52:15.563683135 +0000 UTC m=+0.140025890 container init 35367b13a7915fbdb2ec65c2594853123986c74950fb2866e8a5ada550de8fb0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_germain, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, release=1763362218, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , version=7) Dec 5 02:52:15 localhost podman[33250]: 2025-12-05 07:52:15.467075553 +0000 UTC m=+0.043418348 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:52:15 localhost podman[33250]: 2025-12-05 07:52:15.575414734 +0000 UTC m=+0.151757529 container start 35367b13a7915fbdb2ec65c2594853123986c74950fb2866e8a5ada550de8fb0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_germain, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 02:52:15 localhost podman[33250]: 2025-12-05 07:52:15.575756825 +0000 UTC m=+0.152099700 container attach 35367b13a7915fbdb2ec65c2594853123986c74950fb2866e8a5ada550de8fb0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_germain, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, name=rhceph, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, version=7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 5 02:52:15 localhost strange_germain[33265]: 167 167 Dec 5 02:52:15 localhost systemd[1]: libpod-35367b13a7915fbdb2ec65c2594853123986c74950fb2866e8a5ada550de8fb0.scope: Deactivated successfully. Dec 5 02:52:15 localhost podman[33250]: 2025-12-05 07:52:15.578911201 +0000 UTC m=+0.155253966 container died 35367b13a7915fbdb2ec65c2594853123986c74950fb2866e8a5ada550de8fb0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_germain, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, version=7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 5 02:52:15 localhost systemd[1]: tmp-crun.YhAHmK.mount: Deactivated successfully. 
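[annotation] The "crush map has features N was M" lines above carry plain uint64 feature bitmasks; XOR-ing the old and new values shows exactly which bits flipped when msgr requirements were adjusted. A sketch using the epoch-17 pair logged above for mons (mapping bit positions back to named Ceph features would need the feature table, so only positions are printed):

```python
old = 288514050185503233   # "was" value logged for mons at epoch 17
new = 288514051259236352   # current value at epoch 17
diff = old ^ new
print(f"old  {old:#018x}")
print(f"new  {new:#018x}")
print(f"diff {diff:#018x}  changed bits:",
      [i for i in range(64) if diff >> i & 1])
```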
Dec 5 02:52:15 localhost podman[33271]: 2025-12-05 07:52:15.672776929 +0000 UTC m=+0.083860814 container remove 35367b13a7915fbdb2ec65c2594853123986c74950fb2866e8a5ada550de8fb0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_germain, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, name=rhceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, release=1763362218, GIT_BRANCH=main) Dec 5 02:52:15 localhost systemd[1]: libpod-conmon-35367b13a7915fbdb2ec65c2594853123986c74950fb2866e8a5ada550de8fb0.scope: Deactivated successfully. Dec 5 02:52:15 localhost podman[33293]: Dec 5 02:52:15 localhost podman[33293]: 2025-12-05 07:52:15.878712893 +0000 UTC m=+0.067966478 container create 5ac5427732d663f50420ab81ee635756b0e6f96b7d316c35c1be9958e7831f1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_pare, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-type=git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, architecture=x86_64, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7) Dec 5 02:52:15 localhost systemd[1]: Started libpod-conmon-5ac5427732d663f50420ab81ee635756b0e6f96b7d316c35c1be9958e7831f1e.scope. Dec 5 02:52:15 localhost systemd[1]: Started libcrun container. 
Dec 5 02:52:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001c6c485ad9136656641cb7ca026d1690ca98832751cb749b981e473f0dc69f/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:15 localhost podman[33293]: 2025-12-05 07:52:15.852685497 +0000 UTC m=+0.041939092 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 02:52:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001c6c485ad9136656641cb7ca026d1690ca98832751cb749b981e473f0dc69f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001c6c485ad9136656641cb7ca026d1690ca98832751cb749b981e473f0dc69f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 02:52:15 localhost podman[33293]: 2025-12-05 07:52:15.979469582 +0000 UTC m=+0.168723167 container init 5ac5427732d663f50420ab81ee635756b0e6f96b7d316c35c1be9958e7831f1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_pare, name=rhceph, vendor=Red Hat, Inc., GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, release=1763362218, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7) Dec 5 02:52:15 localhost podman[33293]: 2025-12-05 07:52:15.989844028 +0000 UTC m=+0.179097603 container start 5ac5427732d663f50420ab81ee635756b0e6f96b7d316c35c1be9958e7831f1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_pare, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux ) Dec 5 02:52:15 localhost podman[33293]: 
2025-12-05 07:52:15.990122597 +0000 UTC m=+0.179376172 container attach 5ac5427732d663f50420ab81ee635756b0e6f96b7d316c35c1be9958e7831f1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_pare, GIT_BRANCH=main, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, CEPH_POINT_RELEASE=, release=1763362218, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, ceph=True, architecture=x86_64, name=rhceph, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public) Dec 5 02:52:16 localhost ceph-osd[32336]: osd.3 pg_epoch: 17 pg[1.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2,3] r=1 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 02:52:16 localhost systemd[1]: var-lib-containers-storage-overlay-9035cd2eb2545c50239995f2caef0fe9c686a235173917088c7864d849cbc49e-merged.mount: Deactivated successfully. Dec 5 02:52:16 localhost dreamy_pare[33308]: [ Dec 5 02:52:16 localhost dreamy_pare[33308]: { Dec 5 02:52:16 localhost dreamy_pare[33308]: "available": false, Dec 5 02:52:16 localhost dreamy_pare[33308]: "ceph_device": false, Dec 5 02:52:16 localhost dreamy_pare[33308]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 5 02:52:16 localhost dreamy_pare[33308]: "lsm_data": {}, Dec 5 02:52:16 localhost dreamy_pare[33308]: "lvs": [], Dec 5 02:52:16 localhost dreamy_pare[33308]: "path": "/dev/sr0", Dec 5 02:52:16 localhost dreamy_pare[33308]: "rejected_reasons": [ Dec 5 02:52:16 localhost dreamy_pare[33308]: "Insufficient space (<5GB)", Dec 5 02:52:16 localhost dreamy_pare[33308]: "Has a FileSystem" Dec 5 02:52:16 localhost dreamy_pare[33308]: ], Dec 5 02:52:16 localhost dreamy_pare[33308]: "sys_api": { Dec 5 02:52:16 localhost dreamy_pare[33308]: "actuators": null, Dec 5 02:52:16 localhost dreamy_pare[33308]: "device_nodes": "sr0", Dec 5 02:52:16 localhost dreamy_pare[33308]: "human_readable_size": "482.00 KB", Dec 5 02:52:16 localhost dreamy_pare[33308]: "id_bus": "ata", Dec 5 02:52:16 localhost dreamy_pare[33308]: "model": "QEMU DVD-ROM", Dec 5 02:52:16 localhost dreamy_pare[33308]: "nr_requests": "2", Dec 5 02:52:16 localhost dreamy_pare[33308]: "partitions": {}, Dec 5 02:52:16 localhost dreamy_pare[33308]: "path": "/dev/sr0", Dec 5 02:52:16 localhost dreamy_pare[33308]: "removable": "1", Dec 5 02:52:16 localhost dreamy_pare[33308]: "rev": "2.5+", Dec 5 02:52:16 localhost dreamy_pare[33308]: "ro": "0", Dec 5 02:52:16 localhost dreamy_pare[33308]: "rotational": "1", Dec 5 02:52:16 localhost dreamy_pare[33308]: "sas_address": "", Dec 5 02:52:16 localhost dreamy_pare[33308]: "sas_device_handle": "", Dec 5 02:52:16 localhost dreamy_pare[33308]: "scheduler_mode": "mq-deadline", Dec 5 02:52:16 localhost dreamy_pare[33308]: "sectors": 0, Dec 5 
02:52:16 localhost dreamy_pare[33308]: "sectorsize": "2048", Dec 5 02:52:16 localhost dreamy_pare[33308]: "size": 493568.0, Dec 5 02:52:16 localhost dreamy_pare[33308]: "support_discard": "0", Dec 5 02:52:16 localhost dreamy_pare[33308]: "type": "disk", Dec 5 02:52:16 localhost dreamy_pare[33308]: "vendor": "QEMU" Dec 5 02:52:16 localhost dreamy_pare[33308]: } Dec 5 02:52:16 localhost dreamy_pare[33308]: } Dec 5 02:52:16 localhost dreamy_pare[33308]: ] Dec 5 02:52:16 localhost systemd[1]: libpod-5ac5427732d663f50420ab81ee635756b0e6f96b7d316c35c1be9958e7831f1e.scope: Deactivated successfully. Dec 5 02:52:16 localhost podman[33293]: 2025-12-05 07:52:16.86853278 +0000 UTC m=+1.057786355 container died 5ac5427732d663f50420ab81ee635756b0e6f96b7d316c35c1be9958e7831f1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_pare, vcs-type=git, name=rhceph, maintainer=Guillaume Abrioux , RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, distribution-scope=public, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main) Dec 5 02:52:16 localhost systemd[1]: tmp-crun.ODwguT.mount: Deactivated successfully. Dec 5 02:52:16 localhost systemd[1]: var-lib-containers-storage-overlay-001c6c485ad9136656641cb7ca026d1690ca98832751cb749b981e473f0dc69f-merged.mount: Deactivated successfully. 
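[annotation] The dreamy_pare output above has the shape of `ceph-volume inventory` JSON: a list of devices, each with an `available` flag and `rejected_reasons` (here /dev/sr0 is rejected for size and an existing filesystem). A sketch summarizing that structure, fed with a trimmed copy of the output above:

```python
import json

def summarize(inventory_json: str):
    for dev in json.loads(inventory_json):
        if dev.get("available"):
            print(dev["path"], "available")
        else:
            print(dev["path"], "rejected:",
                  "; ".join(dev.get("rejected_reasons", [])))

summarize('[{"available": false, "path": "/dev/sr0", '
          '"rejected_reasons": ["Insufficient space (<5GB)", '
          '"Has a FileSystem"]}]')
```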
Dec 5 02:52:16 localhost podman[34781]: 2025-12-05 07:52:16.944752899 +0000 UTC m=+0.069579506 container remove 5ac5427732d663f50420ab81ee635756b0e6f96b7d316c35c1be9958e7831f1e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_pare, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, version=7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218) Dec 5 02:52:16 localhost systemd[1]: libpod-conmon-5ac5427732d663f50420ab81ee635756b0e6f96b7d316c35c1be9958e7831f1e.scope: Deactivated successfully. Dec 5 02:52:18 localhost ceph-osd[32336]: osd.3 pg_epoch: 20 pg[1.0( v 18'76 (0'0,18'76] local-lis/les=17/18 n=2 ec=17/17 lis/c=17/0 les/c/f=18/0/0 sis=20 pruub=13.939910889s) [2,4,3] r=2 lpr=20 pi=[17,20)/1 luod=0'0 lua=0'0 crt=18'76 lcod 18'75 mlcod 0'0 active pruub 23.377237320s@ mbc={}] start_peering_interval up [2,3] -> [2,4,3], acting [2,3] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 02:52:18 localhost ceph-osd[32336]: osd.3 pg_epoch: 20 pg[1.0( v 18'76 (0'0,18'76] local-lis/les=17/18 n=2 ec=17/17 lis/c=17/0 les/c/f=18/0/0 sis=20 pruub=13.939829826s) [2,4,3] r=2 lpr=20 pi=[17,20)/1 crt=18'76 lcod 18'75 mlcod 0'0 unknown NOTIFY pruub 23.377237320s@ mbc={}] state: transitioning to Stray Dec 5 02:52:26 localhost podman[34910]: 2025-12-05 07:52:26.991766078 +0000 UTC m=+0.068567777 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1763362218, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
Dec 5 02:52:26 localhost podman[34910]: 2025-12-05 07:52:26.991766078 +0000 UTC m=+0.068567777 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1763362218, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 5 02:52:27 localhost podman[34910]: 2025-12-05 07:52:27.100581504 +0000 UTC m=+0.177383263 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, name=rhceph, version=7, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, release=1763362218)
Dec 5 02:53:28 localhost podman[35090]: 2025-12-05 07:53:28.92617095 +0000 UTC m=+0.087957344 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, version=7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, architecture=x86_64, vcs-type=git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 02:53:29 localhost podman[35090]: 2025-12-05 07:53:29.049489862 +0000 UTC m=+0.211276246 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main)
Dec 5 02:53:34 localhost systemd[1]: session-13.scope: Deactivated successfully.
Dec 5 02:53:34 localhost systemd[1]: session-13.scope: Consumed 20.972s CPU time.
Dec 5 02:53:34 localhost systemd-logind[760]: Session 13 logged out. Waiting for processes to exit.
Dec 5 02:53:34 localhost systemd-logind[760]: Removed session 13.
Dec 5 02:55:29 localhost systemd[26052]: Created slice User Background Tasks Slice.
Dec 5 02:55:29 localhost systemd[26052]: Starting Cleanup of User's Temporary Files and Directories...
Dec 5 02:55:29 localhost systemd[26052]: Finished Cleanup of User's Temporary Files and Directories.
Dec 5 02:56:58 localhost sshd[35465]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:56:59 localhost systemd-logind[760]: New session 27 of user zuul.
Dec 5 02:56:59 localhost systemd[1]: Started Session 27 of User zuul.
Dec 5 02:56:59 localhost python3[35513]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 5 02:57:00 localhost python3[35558]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 02:57:00 localhost python3[35578]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546419.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 5 02:57:01 localhost python3[35634]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:57:01 localhost python3[35677]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764921421.08622-66668-244311890115529/source _original_basename=tmpusl7b79z follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:57:02 localhost python3[35707]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:57:02 localhost python3[35723]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:57:02 localhost python3[35739]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:57:03 localhost python3[35755]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKhCwauelSZpVrpaTNQxe2R6ec2QHORJghuWfxtQwzHg2x+oKXSIixkFMvmPr/8br5a/wDlb+3cvVElj8NB5xDJa0rLqq8KwgntyGbCnt/R4NPyeOJmzt6OTCBUt1Wc+SrSZqnsocv4LKgzyAEoVtrPa1hLjVRUboY3acFZrtKr5vmJHrvSUWOMgAkNigNgqd86yGCHoF5/bcNFWvgwF2jHOlOQ4TsEg6WtLmyTSDYbAHWK8r5pLuR0/zNZmo5dKCmJMlrc/pM9okyKjxJq/Kxlr5UE94IrAW6XX6NnKjqmSox5EcIEnA+ZRRajO96Q+i0gHHOO1CMJi0hzlLFa4rpqFpOV1YkLneZkwv/pLAvhO6p6DmWmBVdUX5rme2hZJtkiB8MMPNo6zk1TG8CeNZKa/+h/JaaxhN7COwJc0CFMl2Ayd5HvCqrIaa59h5WClxDfFQHok0r9zwEcqsdlrXj1UrVZOYHWZqIjZQMbgpyMiucGln49lg969bseHvcE+U= zuul-build-sshkey#012 regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:57:04 localhost python3[35769]: ansible-ping Invoked with data=pong
Dec 5 02:57:15 localhost sshd[35770]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 02:57:15 localhost systemd-logind[760]: New session 28 of user tripleo-admin.
Dec 5 02:57:15 localhost systemd[1]: Created slice User Slice of UID 1003.
Dec 5 02:57:15 localhost systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 5 02:57:15 localhost systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 5 02:57:15 localhost systemd[1]: Starting User Manager for UID 1003...
Dec 5 02:57:15 localhost systemd[35774]: Queued start job for default target Main User Target.
Dec 5 02:57:15 localhost systemd[35774]: Created slice User Application Slice.
Dec 5 02:57:15 localhost systemd[35774]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 5 02:57:15 localhost systemd[35774]: Started Daily Cleanup of User's Temporary Directories.
Dec 5 02:57:15 localhost systemd[35774]: Reached target Paths.
Dec 5 02:57:15 localhost systemd[35774]: Reached target Timers.
Dec 5 02:57:15 localhost systemd[35774]: Starting D-Bus User Message Bus Socket...
Dec 5 02:57:15 localhost systemd[35774]: Starting Create User's Volatile Files and Directories...
Dec 5 02:57:15 localhost systemd[35774]: Listening on D-Bus User Message Bus Socket.
Dec 5 02:57:15 localhost systemd[35774]: Reached target Sockets.
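
Note: the ansible-user/copy/file/lineinfile sequence above provisions the tripleo-admin account for config-download. The module log shows file modes in decimal: mode=288 is octal 0440, mode=448 is 0700, mode=384 is 0600. A rough shell equivalent of the state those tasks converge on (paths taken from the log; the sudoers content and public key are elided here, not invented):

    useradd --create-home tripleo-admin                                              # ansible-user state=present
    install -m 0440 -o root -g root /dev/null /etc/sudoers.d/tripleo-admin           # mode=288 -> 0440
    install -d -m 0700 -o tripleo-admin -g tripleo-admin /home/tripleo-admin/.ssh    # mode=448 -> 0700
    touch /home/tripleo-admin/.ssh/authorized_keys                                   # state=touch
    chown tripleo-admin:tripleo-admin /home/tripleo-admin/.ssh/authorized_keys
    chmod 0600 /home/tripleo-admin/.ssh/authorized_keys                              # mode=384 -> 0600

ansible-lineinfile then appends the zuul-build-sshkey public key only when no matching line exists, which keeps the whole sequence idempotent.
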
Dec 5 02:57:15 localhost systemd[35774]: Finished Create User's Volatile Files and Directories.
Dec 5 02:57:15 localhost systemd[35774]: Reached target Basic System.
Dec 5 02:57:15 localhost systemd[35774]: Reached target Main User Target.
Dec 5 02:57:15 localhost systemd[35774]: Startup finished in 126ms.
Dec 5 02:57:15 localhost systemd[1]: Started User Manager for UID 1003.
Dec 5 02:57:15 localhost systemd[1]: Started Session 28 of User tripleo-admin.
Dec 5 02:57:16 localhost python3[35834]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 02:57:21 localhost python3[35854]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config
Dec 5 02:57:22 localhost python3[35870]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Dec 5 02:57:22 localhost python3[35918]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.t50k1pk3tmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:57:23 localhost python3[35948]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.t50k1pk3tmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:57:24 localhost python3[35964]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.t50k1pk3tmphosts insertbefore=BOF block=172.17.0.106 np0005546419.localdomain np0005546419#012172.18.0.106 np0005546419.storage.localdomain np0005546419.storage#012172.20.0.106 np0005546419.storagemgmt.localdomain np0005546419.storagemgmt#012172.17.0.106 np0005546419.internalapi.localdomain np0005546419.internalapi#012172.19.0.106 np0005546419.tenant.localdomain np0005546419.tenant#012192.168.122.106 np0005546419.ctlplane.localdomain np0005546419.ctlplane#012172.17.0.107 np0005546420.localdomain np0005546420#012172.18.0.107 np0005546420.storage.localdomain np0005546420.storage#012172.20.0.107 np0005546420.storagemgmt.localdomain np0005546420.storagemgmt#012172.17.0.107 np0005546420.internalapi.localdomain np0005546420.internalapi#012172.19.0.107 np0005546420.tenant.localdomain np0005546420.tenant#012192.168.122.107 np0005546420.ctlplane.localdomain np0005546420.ctlplane#012172.17.0.108 np0005546421.localdomain np0005546421#012172.18.0.108 np0005546421.storage.localdomain np0005546421.storage#012172.20.0.108 np0005546421.storagemgmt.localdomain np0005546421.storagemgmt#012172.17.0.108 np0005546421.internalapi.localdomain np0005546421.internalapi#012172.19.0.108 np0005546421.tenant.localdomain np0005546421.tenant#012192.168.122.108 np0005546421.ctlplane.localdomain np0005546421.ctlplane#012172.17.0.103 np0005546415.localdomain np0005546415#012172.18.0.103 np0005546415.storage.localdomain np0005546415.storage#012172.20.0.103 np0005546415.storagemgmt.localdomain np0005546415.storagemgmt#012172.17.0.103 np0005546415.internalapi.localdomain np0005546415.internalapi#012172.19.0.103 np0005546415.tenant.localdomain np0005546415.tenant#012192.168.122.103 np0005546415.ctlplane.localdomain np0005546415.ctlplane#012172.17.0.104 np0005546416.localdomain np0005546416#012172.18.0.104 np0005546416.storage.localdomain np0005546416.storage#012172.20.0.104 np0005546416.storagemgmt.localdomain np0005546416.storagemgmt#012172.17.0.104 np0005546416.internalapi.localdomain np0005546416.internalapi#012172.19.0.104 np0005546416.tenant.localdomain np0005546416.tenant#012192.168.122.104 np0005546416.ctlplane.localdomain np0005546416.ctlplane#012172.17.0.105 np0005546418.localdomain np0005546418#012172.18.0.105 np0005546418.storage.localdomain np0005546418.storage#012172.20.0.105 np0005546418.storagemgmt.localdomain np0005546418.storagemgmt#012172.17.0.105 np0005546418.internalapi.localdomain np0005546418.internalapi#012172.19.0.105 np0005546418.tenant.localdomain np0005546418.tenant#012192.168.122.105 np0005546418.ctlplane.localdomain np0005546418.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99 overcloud.ctlplane.localdomain#012172.18.0.163 overcloud.storage.localdomain#012172.20.0.242 overcloud.storagemgmt.localdomain#012172.17.0.216 overcloud.internalapi.localdomain#012172.21.0.170 overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:57:24 localhost python3[35980]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.t50k1pk3tmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:57:25 localhost python3[35997]: ansible-file Invoked with path=/tmp/ansible.t50k1pk3tmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:57:26 localhost python3[36013]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:57:26 localhost python3[36030]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 5 02:57:31 localhost python3[36049]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:57:32 localhost python3[36066]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 5 02:58:42 localhost kernel: SELinux: Converting 2700 SID table entries...
Dec 5 02:58:42 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 5 02:58:42 localhost kernel: SELinux: policy capability open_perms=1
Dec 5 02:58:42 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 5 02:58:42 localhost kernel: SELinux: policy capability always_check_network=0
Dec 5 02:58:42 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 5 02:58:42 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 5 02:58:42 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 5 02:58:43 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=6 res=1
Dec 5 02:58:43 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 5 02:58:43 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 5 02:58:43 localhost systemd[1]: Reloading.
Dec 5 02:58:43 localhost systemd-rc-local-generator[36980]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:58:43 localhost systemd-sysv-generator[36985]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:58:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:58:43 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 5 02:58:44 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 5 02:58:44 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 5 02:58:44 localhost systemd[1]: run-r1f538504565c4146bd85697786b397ed.service: Deactivated successfully.
Dec 5 02:58:48 localhost python3[37431]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:58:50 localhost python3[37570]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 02:58:50 localhost systemd[1]: Reloading.
Dec 5 02:58:50 localhost systemd-rc-local-generator[37599]: /etc/rc.d/rc.local is not marked executable, skipping.
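
Note: the package steps here follow one idempotent pattern that repeats throughout this deployment: probe with rpm -q --whatprovides, converge with dnf state=present, then verify with rpm -V (the SELinux policy reload and man-db-cache-update churn above are side effects of the install). A condensed sketch of the same loop, assuming nothing about repository configuration:

    pkgs="driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc"
    rpm -q --whatprovides $pkgs || dnf -y install $pkgs   # install only when something is unresolved
    rpm -V $pkgs                                          # silent output means the installed files verify clean
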
Dec 5 02:58:50 localhost systemd-sysv-generator[37604]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:58:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:58:50 localhost python3[37624]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:58:51 localhost python3[37640]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:58:52 localhost python3[37657]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 5 02:58:52 localhost python3[37675]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:58:53 localhost python3[37693]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:58:53 localhost python3[37711]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 5 02:58:53 localhost systemd[1]: Reloading Network Manager...
Dec 5 02:58:53 localhost NetworkManager[5960]: [1764921533.7928] audit: op="reload" arg="0" pid=37714 uid=0 result="success"
Dec 5 02:58:53 localhost NetworkManager[5960]: [1764921533.7937] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf))
Dec 5 02:58:53 localhost NetworkManager[5960]: [1764921533.7938] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 5 02:58:53 localhost systemd[1]: Reloaded Network Manager.
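
Note: the two ansible-ini_file tasks converge the [main] section of /etc/NetworkManager/NetworkManager.conf on dns=none and rc-manager=unmanaged, so NetworkManager stops writing DNS configuration and leaves /etc/resolv.conf alone; the dns-mgr init line above confirms both values took effect after the reload. The equivalent manual change, sketched:

    # /etc/NetworkManager/NetworkManager.conf after the edits:
    #   [main]
    #   dns=none
    #   rc-manager=unmanaged
    systemctl reload NetworkManager   # same as the ansible systemd state=reloaded task
    nmcli general status              # sanity check: NM should still report its connections
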
Dec 5 02:58:54 localhost python3[37730]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:58:54 localhost python3[37747]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 02:58:55 localhost python3[37765]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 02:58:55 localhost python3[37781]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:58:56 localhost python3[37797]: ansible-tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 5 02:58:56 localhost python3[37813]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 02:58:57 localhost python3[37829]: ansible-blockinfile Invoked with path=/tmp/ansible.zdv3igw4 block=[192.168.122.106]*,[np0005546419.ctlplane.localdomain]*,[172.17.0.106]*,[np0005546419.internalapi.localdomain]*,[172.18.0.106]*,[np0005546419.storage.localdomain]*,[172.20.0.106]*,[np0005546419.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005546419.tenant.localdomain]*,[np0005546419.localdomain]*,[np0005546419]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEuRXji6XnIqsABVq0Qqof5KS4SAvlk4RgdtizNBr7m3ROTYSahE5AJNLCTMugmJtGXewQjNvC8Gcrwjha423XMFi1NpQBCu/U72HR15GJ4x0DRTlvDzeuyqmAuTQEBnQcjNlSIQ4FOJnMjeI6JzpCzCvQ8kOvkGMj6A3Hg/syH7t97g6vL8Cua473lHIav6GTZkm3SmFKQ3Xwj9z3cxUxUnrSgES1zowNRjtoEtPZjSgoF5b8nFIjaQf2ZwMcV0lopTVTvmRVyYDvsR8wFpqMebvWZkW7NQNAaUhRwiYfvQM5/uX1R294FSkW4UiMA5xWT6BMUvtJzexoxZwmrJN3E8I5NLL2KsN33G/6CHA5roanPqECSsRgwyhgQ8bARZgymqoTR9u/p8RRwj7J+x+qJCKMrG+inICVI/o3oOAD2Kdc2rFHXCzdC7sNhjF9/0HPZy8Dt2phAaMcAs4ueBT1Qv/WP22vx3lBguSxEC09rfl+zsp5KAd3jOr9hJBn34E=#012[192.168.122.107]*,[np0005546420.ctlplane.localdomain]*,[172.17.0.107]*,[np0005546420.internalapi.localdomain]*,[172.18.0.107]*,[np0005546420.storage.localdomain]*,[172.20.0.107]*,[np0005546420.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005546420.tenant.localdomain]*,[np0005546420.localdomain]*,[np0005546420]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI9Y4hTQymJJTi7lwfGVKCetJ5Q4auPNryuYcUqXhqNAkgJUht3nxbV0LL2zw4tBsorx+hqOtHy6QfyMWc4r5hOjGRUOhC2uarhQho1134qkdAt7Wd1XMZFeslg1Vk7F8G5TciLUUJBsqvfKAsGc9/SQS5rWRQ90ssw6RtnrhuCDasOzJIdPA2tYjLQ2emSbjgfd1OuXSpKpSkko9b1cwE6trMzU8G7508xssCoDz66P8kF4Kf+OGT8iWzM8xKE0cB8b50ltkwnrxsK5Hwc8zz9LoLSU01AS9CNm299lqjPgZZhTOu6zSXvN6p4+CylbKvJO19AnMSzMEJZEPoHNCQ2SM+/LxQ9rIH8MAVrpw9SUndYbtXTvUkEsZRYAkH64dyfn+9kcYTPaf/oqkrvxuc6Nlk/uZ79dbjW0Vc+/XJXX9F7hLsdu3PK0kt4oBXIG9B9jdKXVobNiH7lsArspEnZ13zzspPyojH0UV6v0AfaZgCMP8b7Erg9y9+HPradoE=#012[192.168.122.108]*,[np0005546421.ctlplane.localdomain]*,[172.17.0.108]*,[np0005546421.internalapi.localdomain]*,[172.18.0.108]*,[np0005546421.storage.localdomain]*,[172.20.0.108]*,[np0005546421.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005546421.tenant.localdomain]*,[np0005546421.localdomain]*,[np0005546421]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCOnO4FOEzvhvnfZvvg9C7oar+ml2He45IxleHN54kwSVAvs2ltf36WvXeS2XAi7WgRxM+SZhG+GxbHWO/u3KqZQXbOWufPkzZF3oMisaK3ZDVZLqKvlrQZf2+29fCEYI9L5zPC/HNP6jqIyDlBSXGYPLQgUjpxxieUICaQ0fIp4WhlqviONuO0ZTwWQdPf5CYPALkVZ74wN1aGPulFSaGYretHzLaUvZvZQVL4q4PRI+7YpxvT1NyDOyTvw5u8TpzZXKp67nFfFtlbX8BvY9f1FVlgzcPwQvxzYWeJy5j9Cv0xoJ56dXmUueau39rhB/CBpKfhymLq91H1nh+F175gPPt5KZA5cfZg7fWlshSRjozK3Z53WpNGrpQtCIjhxblJ5Z3mxAPGcyYYOXoG/iv/IDwMvhkswL2Cqb6/ww6osSP2EJQIjWsS+CoYjynw+g7e++29qN8QiRLOqOuges85TiZ2vxP5lkvs8V3oAF+k4OsPOGPKzibXNDl5PyGwhVU=#012[192.168.122.103]*,[np0005546415.ctlplane.localdomain]*,[172.17.0.103]*,[np0005546415.internalapi.localdomain]*,[172.18.0.103]*,[np0005546415.storage.localdomain]*,[172.20.0.103]*,[np0005546415.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005546415.tenant.localdomain]*,[np0005546415.localdomain]*,[np0005546415]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDkD6dMrlstq/08/i19MSGJhEADExfxigVjJJQ88FcvZHbzGOgQVpolfx1koKTyWN+Arobw6wFmJvZLTo8Bb6WoVTK7S5Ea1OnfJHT61JMRl/WjdLjR5dZtwV62H6dAQuwXtLXjjbx/PIaHGhjGeQ3mAmwEgTU06ey152S+ChTCN3ft7vCFw4DHXAly+guOSgi5JGOb3gMATYrMGVu90ONPr0mfPn6T6oBZQPEWvdKFCulrlj9zVZu7HsSSRQFMxH7KgZJzpkLllA4WVfnGbj38AXD2k/HkyLfYzY27ZsoOL1HyT4ardSL2aUb55JnBNuOxkTcFwxKYlyCL/gWk20rx9nJe7mp5Rl6iK4a8UA5SEKO0sudwL9uZ4JEMNAAViZ+5xpl7M0+YowEMffNSUrVJ8/SSa0beqOu9JTnZ+cEwNCNkJJM/h8ajcjEaAHeRXDkTkujntrvNR6KskZa+g94xtpw1nrG6xl0yzppj6k4nsmcRGGlicsbZEc9SZOW+qaM=#012[192.168.122.104]*,[np0005546416.ctlplane.localdomain]*,[172.17.0.104]*,[np0005546416.internalapi.localdomain]*,[172.18.0.104]*,[np0005546416.storage.localdomain]*,[172.20.0.104]*,[np0005546416.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005546416.tenant.localdomain]*,[np0005546416.localdomain]*,[np0005546416]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBknSonWn2K7oVrigtLeGeXWlaMY1uJqi1743zO2mguB8ceS1WtlyZavpdSnzpqiGiIwguuYuBxNKWaZMI/9XZyZKspYWl5eArdwgxtnKFyHWmHop7/MeX+Y+J7CrfiQ8MajXX1sy1YpxunvdWo7DK3K9DJfTaJ6onr8amsw55w0Pf5HOW0UBGE+AqFmTy/5btxUh4cKFDwRjGeJJps2YFr/p9mdITdZy6sxC+0QCi9XHI7FrpRbYfK0zSSrOBpixOr0sahUWL/3ZUVF5uiJbGTaihxnFrAN3SqoJsWJNJADqmp+E0K3oSw2xsGEvRz02E5n3+GqaYejfpUMdLjvSmTfEKVqlMiL8M0AtBvfeP7KlZCpABiuvopbKIXNsjFfG1HXkFrFHbCgRsfmg7e+8ThU6J66lb2cJhHrtKuP+uePggolCX4bqdv8abdxV9keT+DJCOZ6iMJnDTI8ggTwMTBVwykvMZXIhwiJruh8oACUYaubPkkGSz4VhPIqfSch0=#012[192.168.122.105]*,[np0005546418.ctlplane.localdomain]*,[172.17.0.105]*,[np0005546418.internalapi.localdomain]*,[172.18.0.105]*,[np0005546418.storage.localdomain]*,[172.20.0.105]*,[np0005546418.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005546418.tenant.localdomain]*,[np0005546418.localdomain]*,[np0005546418]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD9S5Z7rzST5j/fEC81CBzjbVnN/b1iPQZ35oKFbDVSZ3xrScwTjVDnymCRMpkG7ZjaGvyyMSy6sRwzcBVzWZGF94EKpFeYMdUdfpsK2dbevK8wHAAm7cfqUZ5sgTKGF4TOZZ08RJZ9Xc1fGGKeE0bg2QCqoKA7YzWR++lzm/LXf8DTXUhBN+xvwQ3rVN4Y8AIlXB2YS/FAkc2s3u95spaTjW0hbNonz/q6QiuuElDTfezQ9IkzHyYOFqIxYRnttkUuXTp5FodFYAlU3VOLHCoI6tZQk2f1Kt1ZZX4Umqd2RA4zu0IBbblyns+2Jy/Jg5MuKEZSC5X2xQ/tUeClu2+ZHxwKRMxnwAgAiYuC5ryGQuyc0vphUN3uE6JIxKd+8YgAscYSYvc7VoWqodvvt8eIxoXCDh1XbIsKKbWqosjwoNWAoNZUh+LcHIDskM+7FNALGudbtKgKRazoMRvGbZPWQr8FB2eTWiqo2TOBwHArzAXZmnKcg+ad9eMQtW6PX0M=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:58:58 localhost python3[37845]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.zdv3igw4' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:58:58 localhost python3[37863]: ansible-file Invoked with path=/tmp/ansible.zdv3igw4 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:58:59 localhost python3[37879]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 5 02:58:59 localhost python3[37895]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:59:00 localhost python3[37913]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:59:00 localhost python3[37932]: ansible-community.general.cloud_init_data_facts Invoked with filter=status
Dec 5 02:59:03 localhost python3[38069]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:59:03 localhost python3[38086]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 5 02:59:06 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 5 02:59:06 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 5 02:59:06 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 5 02:59:06 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 5 02:59:06 localhost systemd[1]: Reloading.
Dec 5 02:59:06 localhost systemd-rc-local-generator[38148]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 02:59:07 localhost systemd-sysv-generator[38152]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 02:59:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 02:59:07 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 5 02:59:07 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 5 02:59:07 localhost systemd[1]: tuned.service: Deactivated successfully.
Dec 5 02:59:07 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 5 02:59:07 localhost systemd[1]: tuned.service: Consumed 1.836s CPU time.
Dec 5 02:59:07 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 5 02:59:07 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 5 02:59:07 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 5 02:59:07 localhost systemd[1]: run-r9690b843d59647fc8f074b28ad6cc395.service: Deactivated successfully.
Dec 5 02:59:08 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Dec 5 02:59:08 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 5 02:59:08 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 5 02:59:09 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 5 02:59:09 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 5 02:59:09 localhost systemd[1]: run-r604f305d07654311b01564b0b8b21f12.service: Deactivated successfully.
Dec 5 02:59:09 localhost python3[38523]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 02:59:10 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 5 02:59:10 localhost systemd[1]: tuned.service: Deactivated successfully.
Dec 5 02:59:10 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 5 02:59:10 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 5 02:59:11 localhost systemd[1]: Started Dynamic System Tuning Daemon.
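
Note: tuned is installed, the daemon restarted, and the entries below select and verify the throughput-performance profile. The manual equivalent of that sequence:

    systemctl enable --now tuned
    tuned-adm profile throughput-performance   # the exact command the ansible task runs below
    tuned-adm active                           # expect: Current active profile: throughput-performance
    cat /etc/tuned/active_profile              # the file ansible-slurp reads back
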
Dec 5 02:59:11 localhost python3[38719]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:59:12 localhost python3[38736]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Dec 5 02:59:13 localhost python3[38752]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 02:59:13 localhost python3[38768]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:59:15 localhost python3[38788]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:59:15 localhost python3[38805]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 02:59:17 localhost python3[38821]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:23 localhost python3[38837]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:24 localhost python3[38885]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:24 localhost python3[38930]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921563.709564-71269-61545835150706/source _original_basename=tmpv9nwsl0c follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:24 localhost python3[38960]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:25 localhost python3[39008]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:26 localhost python3[39051]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921565.3229713-71542-228284337028226/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=ba6566c0e663d54e816d0362a53167cb9d04e50e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:26 localhost python3[39113]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:27 localhost python3[39156]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921566.265616-71619-211866200705117/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=df55714b8da9bfb2aa67dcba305bac259217ffd4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:27 localhost python3[39218]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:27 localhost python3[39261]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921567.143017-71619-57252317216090/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=4939522ff3b438a7e269b4a6e22ebcad88445c95 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:28 localhost python3[39323]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:28 localhost python3[39366]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921568.0499344-71619-156450926104938/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=68b5a56a66cb10764ef3288009ad5e9b7e8faf12 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:29 localhost systemd[35774]: Starting Mark boot as successful...
Dec 5 02:59:29 localhost systemd[35774]: Finished Mark boot as successful.
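
Note: config-download is laying down per-node hiera data as JSON files under /etc/puppet/hieradata and pointing /etc/hiera.yaml at /etc/puppet/hiera.yaml, so puppet and the hiera CLI resolve the same data. A quick sanity check over what these tasks produce (the loop is an illustration, not part of the deployment):

    ls -l /etc/hiera.yaml /etc/puppet/hiera.yaml        # symlink from the ansible-file state=link task
    for f in /etc/puppet/hieradata/*.json; do           # every distributed file must parse as JSON
        python3 -m json.tool "$f" > /dev/null || echo "malformed: $f"
    done
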
Dec 5 02:59:29 localhost python3[39429]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:29 localhost python3[39472]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921569.1094618-71619-258715565400165/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:30 localhost python3[39534]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:30 localhost python3[39577]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921570.0085688-71619-67649811309581/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=db7ac230ac9709085ae03714d1b204678324185f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:31 localhost python3[39639]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:31 localhost python3[39682]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921570.8861177-71619-58774171908721/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:32 localhost python3[39744]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:32 localhost python3[39787]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921571.785247-71619-24534442164490/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=85195c4ca9b4af9c68130dd7e72cbd842702429f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:32 localhost python3[39849]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:33 localhost python3[39892]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921572.6554377-71619-18674111985741/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:34 localhost python3[39954]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:34 localhost python3[39997]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921573.7748969-71619-162075649920426/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:34 localhost python3[40059]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:35 localhost python3[40102]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921574.5639362-71619-174230501002242/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=1cd227d2929c9294a7f9b08e3887c6c83e93dec9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:35 localhost python3[40132]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 02:59:36 localhost python3[40180]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 02:59:36 localhost python3[40223]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921576.2398334-72225-2868970774223/source _original_basename=tmpqmmcs9w6 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 02:59:41 localhost python3[40330]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 5 02:59:41 localhost python3[40391]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:59:46 localhost python3[40408]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:59:51 localhost python3[40425]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:59:51 localhost python3[40448]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:59:52 localhost python3[40471]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:59:53 localhost python3[40494]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 02:59:53 localhost python3[40517]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
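
Note: #012 is the syslog escape for a newline, so each of the five _raw_params invocations above is really this three-line script, run once per overlay network to discover which interface routes to the address and what MTU it carries:

    INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")   # interface used to reach the address
    MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")                 # 0 when the lookup fails
    echo "$INT $MTU"
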
_original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:36 localhost python3[40684]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:36 localhost python3[40702]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:37 localhost python3[40764]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:37 localhost python3[40782]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:37 localhost python3[40844]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:38 localhost python3[40862]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:38 localhost python3[40924]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:39 localhost python3[40942]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:39 localhost python3[41004]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:39 localhost python3[41022]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file 
path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:40 localhost python3[41084]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:40 localhost python3[41115]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:40 localhost python3[41205]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:41 localhost python3[41232]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:41 localhost python3[41306]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:41 localhost python3[41324]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:42 localhost python3[41386]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:42 localhost python3[41404]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:42 localhost python3[41466]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:43 localhost python3[41484]: 
ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:43 localhost python3[41561]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:43 localhost python3[41579]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:44 localhost python3[41609]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:00:45 localhost python3[41657]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:45 localhost python3[41675]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmpqewd2lum recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:48 localhost python3[41705]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 5 03:00:52 localhost python3[41722]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:00:53 localhost python3[41740]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:00:53 localhost python3[41758]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:00:53 localhost systemd[1]: Reloading. 
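[Note] The entries above record TripleO retiring the legacy firewall stack: the firewalld package is removed via dnf, iptables.service and ip6tables.service are stopped and disabled, and nftables is enabled and started (the systemd "Reloading." and the generator warnings that follow are routine side effects of the enable). The entries below then assemble the ruleset from per-purpose files and syntax-check the concatenation before loading anything. A sketch of that validate-then-apply flow, using only the file names and commands that appear in the log (the rule contents themselves are never logged):

    # syntax-check the combined ruleset without applying it
    cat /etc/nftables/tripleo-chains.nft \
        /etc/nftables/tripleo-flushes.nft \
        /etc/nftables/tripleo-rules.nft \
        /etc/nftables/tripleo-update-jumps.nft \
        /etc/nftables/tripleo-jumps.nft | nft -c -f -

    # create the chains first, then flush and reload rules and jumps
    nft -f /etc/nftables/tripleo-chains.nft
    cat /etc/nftables/tripleo-flushes.nft \
        /etc/nftables/tripleo-rules.nft \
        /etc/nftables/tripleo-update-jumps.nft | nft -f -

Splitting chains, flushes, rules, and jumps into separate includes lets the deploy flush and re-apply rules idempotently without ever deleting the chains that other includes depend on; the blockinfile task below wires the same includes into /etc/sysconfig/nftables.conf so they persist across reboots.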
Dec 5 03:00:53 localhost systemd-rc-local-generator[41787]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:00:53 localhost systemd-sysv-generator[41790]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:00:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:00:54 localhost systemd[1]: Starting Netfilter Tables... Dec 5 03:00:54 localhost systemd[1]: Finished Netfilter Tables. Dec 5 03:00:54 localhost python3[41848]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:55 localhost python3[41891]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921654.4412127-74863-102752808094230/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:55 localhost python3[41921]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:00:56 localhost python3[41939]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:00:56 localhost python3[41988]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:56 localhost python3[42031]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921656.1919208-75057-9063562275393/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:57 localhost python3[42093]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:57 localhost python3[42136]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921657.0829492-75131-96360966707022/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Dec 5 03:00:58 localhost python3[42198]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:58 localhost python3[42241]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921657.9641757-75235-229696010697378/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:00:59 localhost python3[42303]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:00:59 localhost python3[42346]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921658.7837937-75445-7356137582382/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:00 localhost python3[42408]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:01:00 localhost python3[42451]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921659.60398-75495-127827851902462/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:01 localhost python3[42481]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:01:01 localhost python3[42559]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:02 localhost python3[42576]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None 
chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:01:02 localhost python3[42593]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:01:02 localhost python3[42612]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:03 localhost python3[42628]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:03 localhost python3[42644]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:04 localhost python3[42660]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Dec 5 03:01:05 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=7 res=1 Dec 5 03:01:05 localhost python3[42680]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Dec 5 03:01:06 localhost kernel: SELinux: Converting 2704 SID table entries... Dec 5 03:01:06 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 03:01:06 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 03:01:06 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 03:01:06 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 03:01:06 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 03:01:06 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 03:01:06 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 03:01:06 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=8 res=1 Dec 5 03:01:06 localhost python3[42701]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Dec 5 03:01:07 localhost kernel: SELinux: Converting 2704 SID table entries... 
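[Note] The sefcontext invocations in this stretch persist a container_file_t file-context mapping for the iSCSI and LIO target paths so containerized services may read and write them; each change rebuilds the loaded policy, which is what the repeated kernel "SELinux: Converting 2704 SID table entries" and "policy capability" lines reflect. Outside Ansible, the equivalent would be roughly the following (a sketch, not taken from the log):

    semanage fcontext -a -t container_file_t '/etc/iscsi(/.*)?'
    semanage fcontext -a -t container_file_t '/etc/target(/.*)?'
    semanage fcontext -a -t container_file_t '/var/lib/iscsi(/.*)?'
    restorecon -Rv /etc/iscsi /etc/target /var/lib/iscsi

The ansible-file tasks that follow create those directories with setype=container_file_t, so the persistent mappings are already in place when the paths are labeled.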
Dec 5 03:01:07 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 03:01:07 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 03:01:07 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 03:01:07 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 03:01:07 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 03:01:07 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 03:01:07 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 03:01:07 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=9 res=1 Dec 5 03:01:07 localhost python3[42722]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Dec 5 03:01:08 localhost kernel: SELinux: Converting 2704 SID table entries... Dec 5 03:01:08 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 03:01:08 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 03:01:08 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 03:01:08 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 03:01:08 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 03:01:08 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 03:01:08 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 03:01:08 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=10 res=1 Dec 5 03:01:09 localhost python3[42744]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:09 localhost python3[42760]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:09 localhost python3[42776]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:09 localhost python3[42792]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:01:10 localhost python3[42808]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:01:11 localhost python3[42825]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent 
allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 5 03:01:14 localhost python3[42842]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:15 localhost python3[42890]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:01:15 localhost python3[42933]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921674.8806038-76302-18863614822318/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:15 localhost python3[42963]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 03:01:15 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 5 03:01:15 localhost systemd[1]: Stopped Load Kernel Modules. Dec 5 03:01:15 localhost systemd[1]: Stopping Load Kernel Modules... Dec 5 03:01:15 localhost systemd[1]: Starting Load Kernel Modules... Dec 5 03:01:16 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 5 03:01:16 localhost kernel: Bridge firewalling registered Dec 5 03:01:16 localhost systemd-modules-load[42966]: Inserted module 'br_netfilter' Dec 5 03:01:16 localhost systemd-modules-load[42966]: Module 'msr' is built in Dec 5 03:01:16 localhost systemd[1]: Finished Load Kernel Modules. 
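[Note] Here a module list is templated into /etc/modules-load.d/99-tripleo.conf and systemd-modules-load.service is restarted so the modules load immediately rather than only at next boot; the kernel confirms br_netfilter was inserted (a prerequisite for the net.bridge.bridge-nf-call-* sysctls set just below) and that msr is built into this kernel. The file body is not logged, but the modules-load.d format is simply one module name per line, so a plausible reconstruction from the modules named in the log is:

    # /etc/modules-load.d/99-tripleo.conf (reconstructed; actual content not logged)
    br_netfilter
    msr

followed by:

    systemctl restart systemd-modules-load.service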
Dec 5 03:01:16 localhost python3[43018]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:01:16 localhost python3[43061]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921676.258419-76370-258415676699959/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:17 localhost python3[43091]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:17 localhost python3[43108]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:18 localhost python3[43126]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:18 localhost python3[43144]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:18 localhost python3[43161]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:20 localhost python3[43178]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:20 localhost python3[43195]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:20 localhost python3[43213]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:22 localhost python3[43231]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:22 localhost python3[43249]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:22 localhost python3[43267]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:23 localhost python3[43285]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:23 localhost python3[43303]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present 
sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:23 localhost python3[43321]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:23 localhost python3[43338]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:24 localhost python3[43355]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:24 localhost python3[43372]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:24 localhost python3[43389]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False Dec 5 03:01:25 localhost python3[43407]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 03:01:25 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 5 03:01:25 localhost systemd[1]: Stopped Apply Kernel Variables. Dec 5 03:01:25 localhost systemd[1]: Stopping Apply Kernel Variables... Dec 5 03:01:25 localhost systemd[1]: Starting Apply Kernel Variables... Dec 5 03:01:25 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 5 03:01:25 localhost systemd[1]: Finished Apply Kernel Variables. 
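[Note] Each ansible-sysctl task above writes a single key into /etc/sysctl.d/99-tripleo.conf with reload=False; a single restart of systemd-sysctl.service ("Apply Kernel Variables") then applies the whole file at once. Assembled from the logged key/value pairs, the resulting file would read approximately:

    # /etc/sysctl.d/99-tripleo.conf (assembled from the logged invocations)
    fs.aio-max-nr = 1048576
    fs.inotify.max_user_instances = 1024
    kernel.pid_max = 1048576
    net.bridge.bridge-nf-call-arptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.conf.all.rp_filter = 1
    net.ipv4.ip_forward = 1
    net.ipv4.ip_local_reserved_ports = 35357,49000-49001
    net.ipv4.ip_nonlocal_bind = 1
    net.ipv4.neigh.default.gc_thresh1 = 1024
    net.ipv4.neigh.default.gc_thresh2 = 2048
    net.ipv4.neigh.default.gc_thresh3 = 4096
    net.ipv6.conf.all.disable_ipv6 = 0
    net.ipv6.conf.all.forwarding = 0
    net.ipv6.conf.default.disable_ipv6 = 0
    net.ipv6.conf.lo.disable_ipv6 = 0
    net.ipv6.ip_nonlocal_bind = 0

Correction to the last line: the logged value for net.ipv6.ip_nonlocal_bind is 1, matching its IPv4 counterpart:

    net.ipv6.ip_nonlocal_bind = 1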
Dec 5 03:01:25 localhost python3[43427]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:25 localhost python3[43443]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:26 localhost python3[43459]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:26 localhost python3[43475]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:01:26 localhost python3[43491]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:27 localhost python3[43507]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:27 localhost python3[43523]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:27 localhost python3[43539]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:28 localhost python3[43555]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:28 localhost python3[43603]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:01:28 localhost python3[43646]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921688.1844397-76798-92708513889524/source _original_basename=tmp0_0p242v follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:29 localhost python3[43676]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:01:30 localhost python3[43693]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:31 localhost python3[43741]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:01:31 localhost python3[43784]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921690.8033965-77143-72447109250530/source _original_basename=tmpgt66ya8z follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:31 localhost python3[43814]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:32 localhost python3[43830]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:32 localhost python3[43846]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:32 localhost python3[43862]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:33 localhost python3[43878]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:33 localhost python3[43894]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:33 localhost python3[43910]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:34 localhost python3[43926]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:34 localhost python3[43942]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:35 localhost python3[43958]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False Dec 5 03:01:35 localhost python3[43980]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546419.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None 
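[Note] The tmpfiles.d drop-in copied earlier (run-nova.conf), and run-libvirt.conf a little further on, exist because /run is a tmpfs: any runtime directory a container expects under /run must be declared so systemd recreates it on every boot, and the logged `systemd-tmpfiles --create` applies the declaration immediately. The file body is not logged; a typical entry of this kind would look like the following, with the path and ownership purely illustrative:

    # /etc/tmpfiles.d/run-nova.conf (hypothetical body; 'd' creates the directory if absent)
    d /run/nova 0755 nova nova - -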
Dec 5 03:01:36 localhost python3[44004]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None Dec 5 03:01:36 localhost python3[44020]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:01:36 localhost python3[44069]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:01:37 localhost python3[44112]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921696.5902987-77411-250718387012738/source _original_basename=tmpxm28rr21 follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:37 localhost python3[44142]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Dec 5 03:01:38 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=11 res=1 Dec 5 03:01:38 localhost python3[44163]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:38 localhost python3[44179]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:39 localhost python3[44195]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False Dec 5 03:01:40 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=12 res=1 Dec 5 03:01:40 localhost python3[44216]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 5 03:01:44 localhost python3[44265]: ansible-setup Invoked with gather_subset=['!all', '!min', 
'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 5 03:01:44 localhost python3[44406]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:45 localhost python3[44422]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:01:45 localhost python3[44498]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:01:46 localhost python3[44541]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921705.2446113-77768-128923807007801/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=547807e6e01362cdfff02b2a143fab535a8023c6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:46 localhost python3[44603]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:01:47 localhost python3[44648]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921706.2148242-77813-239544250964801/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:47 localhost python3[44678]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:47 localhost python3[44694]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:48 localhost python3[44710]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:48 localhost python3[44726]: 
ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:49 localhost python3[44774]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:01:49 localhost python3[44817]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921708.8555021-77938-139796212219402/source _original_basename=tmpj1p3tamh follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:49 localhost python3[44847]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:50 localhost python3[44863]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:01:50 localhost python3[44879]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 5 03:01:54 localhost python3[44928]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:01:54 localhost python3[44973]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921713.8900423-78123-147361948941347/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:01:55 localhost python3[45004]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system 
no_block=False force=None masked=None Dec 5 03:01:55 localhost systemd[1]: Stopping OpenSSH server daemon... Dec 5 03:01:55 localhost systemd[1]: sshd.service: Deactivated successfully. Dec 5 03:01:55 localhost systemd[1]: Stopped OpenSSH server daemon. Dec 5 03:01:55 localhost systemd[1]: sshd.service: Consumed 2.325s CPU time, read 2.1M from disk, written 16.0K to disk. Dec 5 03:01:55 localhost systemd[1]: Stopped target sshd-keygen.target. Dec 5 03:01:55 localhost systemd[1]: Stopping sshd-keygen.target... Dec 5 03:01:55 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 5 03:01:55 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 5 03:01:55 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 5 03:01:55 localhost systemd[1]: Reached target sshd-keygen.target. Dec 5 03:01:55 localhost systemd[1]: Starting OpenSSH server daemon... Dec 5 03:01:55 localhost sshd[45008]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:01:55 localhost systemd[1]: Started OpenSSH server daemon. Dec 5 03:01:55 localhost python3[45024]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:01:56 localhost python3[45042]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:01:56 localhost python3[45060]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 5 03:02:00 localhost python3[45109]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:02:00 localhost python3[45127]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:02:01 localhost python3[45157]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:02:02 
localhost python3[45207]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:02:02 localhost python3[45225]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:02:02 localhost python3[45255]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:02:02 localhost systemd[1]: Reloading. Dec 5 03:02:03 localhost systemd-rc-local-generator[45276]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:02:03 localhost systemd-sysv-generator[45280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:02:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:02:03 localhost systemd[1]: Starting chronyd online sources service... Dec 5 03:02:03 localhost chronyc[45295]: 200 OK Dec 5 03:02:03 localhost systemd[1]: chrony-online.service: Deactivated successfully. Dec 5 03:02:03 localhost systemd[1]: Finished chronyd online sources service. 
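[Note] The entries below are the deploy's clock sanity check: `chronyc makestep` tells chronyd to step the system clock at once instead of slewing it gradually, and `chronyc waitsync 30` blocks until chronyd reports synchronization, giving up after 30 status checks; the sub-millisecond step values logged confirm the clock was already accurate. The chrony-online.service one-shot finished above presumably issues `chronyc online` so that time sources marked offline before networking came up are polled again (the logged "200 OK" is chronyc's success reply).

    chronyc makestep     # step the clock immediately if it is offset
    chronyc waitsync 30  # wait for sync, at most 30 status checks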
Dec 5 03:02:03 localhost python3[45311]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:02:03 localhost chronyd[25851]: System clock was stepped by 0.000036 seconds Dec 5 03:02:04 localhost python3[45328]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:02:04 localhost python3[45345]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:02:04 localhost chronyd[25851]: System clock was stepped by 0.000000 seconds Dec 5 03:02:04 localhost python3[45362]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:02:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 03:02:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3259 writes, 16K keys, 3259 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 3259 writes, 145 syncs, 22.48 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3259 writes, 16K keys, 3259 commit groups, 1.0 writes per commit group, ingest: 14.69 MB, 0.02 MB/s#012Interval WAL: 3259 writes, 145 syncs, 22.48 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative 
compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55692b2d82d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55692b2d82d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Dec 5 03:02:05 localhost python3[45379]: ansible-timezone Invoked with name=UTC hwclock=None Dec 5 03:02:05 localhost systemd[1]: Starting Time & Date Service... Dec 5 03:02:05 localhost systemd[1]: Started Time & Date Service. 
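The chronyc records above show the playbook stepping the system clock and then blocking until chrony reports synchronization. A minimal stand-alone sketch of the same check, assuming chronyd is already running and keeping the try limit of 30 that the playbook passes:

    #!/bin/bash
    # Step the clock immediately instead of slewing (safe here: the logged
    # offsets are in the microsecond range).
    chronyc makestep
    # Block until the first clock update, polling up to 30 times (about 10 s
    # apart by default); a non-zero exit means chrony never synchronized.
    if chronyc waitsync 30; then
        chronyc tracking   # print offset/stratum for the record
    else
        echo "clock not synchronized" >&2
        exit 1
    fi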
Dec 5 03:02:06 localhost python3[45399]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:02:07 localhost python3[45416]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:02:07 localhost python3[45433]: ansible-slurp Invoked with src=/etc/tuned/active_profile Dec 5 03:02:08 localhost python3[45449]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:02:08 localhost python3[45465]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:02:09 localhost python3[45481]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:02:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 03:02:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 3395 writes, 16K keys, 3395 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s#012Cumulative WAL: 3395 writes, 201 syncs, 16.89 writes per sync, written: 0.01 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3395 writes, 16K keys, 3395 commit groups, 1.0 writes per commit group, ingest: 15.31 MB, 0.03 MB/s#012Interval WAL: 3395 writes, 201 syncs, 16.89 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c76342a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c76342a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 
2 last_copies: 8 last_secs: 6.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Dec 5 03:02:09 localhost python3[45529]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:02:09 localhost python3[45572]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921729.148376-79233-261464402185023/source _original_basename=tmp2o9tubjl follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:02:10 localhost python3[45634]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:02:10 localhost python3[45677]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921729.90523-79283-62493134224762/source _original_basename=tmp8cjur2zd follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None 
local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:02:11 localhost python3[45707]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 5 03:02:11 localhost systemd[1]: Reloading.
Dec 5 03:02:11 localhost systemd-sysv-generator[45735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:02:11 localhost systemd-rc-local-generator[45730]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:02:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:02:11 localhost python3[45761]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 5 03:02:12 localhost python3[45777]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 03:02:12 localhost python3[45794]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 03:02:12 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully.
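The paired ip netns add ns_temp / ip netns delete ns_temp commands above are a namespace smoke test run before the neutron-cleanup unit is enabled. A minimal sketch of the same probe, with the name ns_temp kept from the log:

    #!/bin/bash
    # Verify the kernel and iproute2 can create network namespaces
    # (the OVN metadata agent relies on per-network namespaces).
    if ip netns add ns_temp; then
        ip netns delete ns_temp
        echo "netns support OK"
    else
        echo "cannot create network namespaces" >&2
        exit 1
    fi

The run-netns-ns_temp.mount record that follows is systemd noticing the bind mount that ip netns creates under /run/netns being torn down again.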
Dec 5 03:02:12 localhost python3[45811]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 5 03:02:13 localhost python3[45827]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:02:13 localhost python3[45875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:02:13 localhost python3[45918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921733.288213-79535-22638548920452/source _original_basename=tmpiqm4ynn5 follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:02:29 localhost systemd[35774]: Created slice User Background Tasks Slice.
Dec 5 03:02:29 localhost systemd[35774]: Starting Cleanup of User's Temporary Files and Directories...
Dec 5 03:02:29 localhost systemd[35774]: Finished Cleanup of User's Temporary Files and Directories.
Dec 5 03:02:35 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
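Note that the copy task above logs mode=493 for the haproxy-kill script: the module received the mode as a decimal integer, and 493 decimal is 0755 octal (rwxr-xr-x). A quick confirmation of the mapping, assuming the file is already in place:

    # 493 decimal rendered as octal; prints 755
    printf '%o\n' 493
    # permissions actually set on disk
    stat -c '%a %n' /var/lib/neutron/kill_scripts/haproxy-kill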
Dec 5 03:02:40 localhost python3[45951]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 5 03:02:41 localhost python3[45967]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None
Dec 5 03:02:41 localhost python3[45983]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 5 03:02:42 localhost python3[45999]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:02:42 localhost python3[46015]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:02:42 localhost python3[46031]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 5 03:02:43 localhost kernel: SELinux: Converting 2707 SID table entries...
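The sefcontext task above registers a persistent SELinux file-context rule, which is what triggers the policy rebuild recorded in the kernel messages around it. A minimal non-Ansible equivalent, assuming policycoreutils-python-utils is installed:

    # Persistently map the directory tree to container_file_t ...
    semanage fcontext -a -t container_file_t '/var/lib/container-config-scripts(/.*)?'
    # ... then relabel any files that already exist under it.
    restorecon -Rv /var/lib/container-config-scripts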
Dec 5 03:02:43 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 03:02:43 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 03:02:43 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 03:02:43 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 03:02:43 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 03:02:43 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 03:02:43 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 03:02:43 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=13 res=1 Dec 5 03:02:44 localhost python3[46053]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 03:02:45 localhost python3[46220]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [ Dec 5 03:02:46 localhost python3[46252]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 5 03:02:46 localhost rsyslogd[758]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ] Dec 5 03:02:46 localhost python3[46285]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 5 03:02:46 localhost python3[46301]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile 
/var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, 
'/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}}
Dec 5 03:02:51 localhost python3[46364]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:02:52 localhost python3[46407]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921771.427256-81087-232479998373941/source _original_basename=tmpwv_ye2kc follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:02:52 localhost python3[46437]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 03:02:54 localhost python3[46560]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 5 03:02:56 localhost python3[46681]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 5 03:02:58 localhost python3[46697]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 03:02:59 localhost python3[46714]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 5 03:03:03 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 5 03:03:03 localhost dbus-broker-launch[18436]: Noticed file-system modification, trigger reload.
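The tripleo_container_configs record above materializes one JSON file per container under /var/lib/kolla/config_files; each container bind-mounts its file to /var/lib/kolla/config_files/config.json (visible in the volumes lists earlier), and with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS kolla's entrypoint copies the listed config_files sources before running the command. A sketch of writing one such file by hand, reusing the iscsid definition shown in the log:

    # Hand-written equivalent of one entry from the tripleo_container_configs payload.
    cat > /var/lib/kolla/config_files/iscsid.json <<'EOF'
    {
      "command": "/usr/sbin/iscsid -f",
      "config_files": [
        {
          "dest": "/etc/iscsi/",
          "merge": true,
          "preserve_properties": true,
          "source": "/var/lib/kolla/config_files/src-iscsid/"
        }
      ]
    }
    EOF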
Dec 5 03:03:03 localhost dbus-broker-launch[18436]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 5 03:03:03 localhost dbus-broker-launch[18436]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 5 03:03:03 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 5 03:03:03 localhost systemd[1]: Reexecuting.
Dec 5 03:03:03 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 5 03:03:03 localhost systemd[1]: Detected virtualization kvm.
Dec 5 03:03:03 localhost systemd[1]: Detected architecture x86-64.
Dec 5 03:03:03 localhost systemd-rc-local-generator[46770]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:03:03 localhost systemd-sysv-generator[46773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:03:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:03:12 localhost kernel: SELinux: Converting 2707 SID table entries...
Dec 5 03:03:12 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 5 03:03:12 localhost kernel: SELinux: policy capability open_perms=1
Dec 5 03:03:12 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 5 03:03:12 localhost kernel: SELinux: policy capability always_check_network=0
Dec 5 03:03:12 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 5 03:03:12 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 5 03:03:12 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 5 03:03:12 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 5 03:03:12 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=14 res=1
Dec 5 03:03:12 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Dec 5 03:03:13 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 5 03:03:13 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 5 03:03:13 localhost systemd[1]: Reloading.
Dec 5 03:03:13 localhost systemd-sysv-generator[46896]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:03:13 localhost systemd-rc-local-generator[46890]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:03:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:03:13 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
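Earlier in this section rsyslogd reported "message too long (31243) with configured size 8096", which is why the longest ansible-container_startup_config record arrives truncated, and why the rocksdb stats dumps are cut off mid-stanza ("0 memt"). If the full records are wanted in /var/log/messages, the per-message cap can be raised; a sketch, assuming rsyslog 8.x where the setting must appear near the top of /etc/rsyslog.conf, before any input modules are loaded:

    # /etc/rsyslog.conf (excerpt) -- raise the per-message cap from the 8k default.
    global(maxMessageSize="64k")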
Dec 5 03:03:13 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 5 03:03:13 localhost systemd-journald[618]: Journal stopped
Dec 5 03:03:13 localhost systemd-journald[618]: Received SIGTERM from PID 1 (systemd).
Dec 5 03:03:13 localhost systemd[1]: Stopping Journal Service...
Dec 5 03:03:13 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 5 03:03:13 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 5 03:03:13 localhost systemd[1]: Stopped Journal Service.
Dec 5 03:03:13 localhost systemd[1]: systemd-journald.service: Consumed 1.770s CPU time.
Dec 5 03:03:13 localhost systemd[1]: Starting Journal Service...
Dec 5 03:03:13 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 5 03:03:13 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 5 03:03:13 localhost systemd[1]: systemd-udevd.service: Consumed 2.965s CPU time.
Dec 5 03:03:13 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 5 03:03:13 localhost systemd-journald[47252]: Journal started
Dec 5 03:03:13 localhost systemd-journald[47252]: Runtime Journal (/run/log/journal/d70e7573f9252a22999953aab4dc4dc5) is 12.1M, max 314.7M, 302.6M free.
Dec 5 03:03:13 localhost systemd[1]: Started Journal Service.
Dec 5 03:03:13 localhost systemd-journald[47252]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Dec 5 03:03:13 localhost systemd-journald[47252]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 5 03:03:13 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 5 03:03:13 localhost systemd-udevd[47257]: Using default interface naming scheme 'rhel-9.0'.
Dec 5 03:03:13 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 5 03:03:13 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 5 03:03:14 localhost systemd[1]: Reloading.
Dec 5 03:03:14 localhost systemd-rc-local-generator[47780]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:03:14 localhost systemd-sysv-generator[47783]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:03:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:03:14 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 5 03:03:14 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 5 03:03:14 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 5 03:03:14 localhost systemd[1]: man-db-cache-update.service: Consumed 1.151s CPU time.
Dec 5 03:03:14 localhost systemd[1]: run-rd7f303de61eb4f1fa9a205176051914b.service: Deactivated successfully.
Dec 5 03:03:14 localhost systemd[1]: run-rd39cc4344ec342c79d4499cebaa062b5.service: Deactivated successfully.
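The insights-client-boot.service warning that repeats through these reloads (MemoryLimit= is deprecated) is best fixed by overriding the unit rather than editing the packaged file. A minimal sketch using a systemd drop-in; the 512M value is only a placeholder, not taken from the log:

    # Create a drop-in that supersedes the deprecated directive.
    mkdir -p /etc/systemd/system/insights-client-boot.service.d
    cat > /etc/systemd/system/insights-client-boot.service.d/memorymax.conf <<'EOF'
    [Service]
    # Clear the old setting, then set its cgroup-v2 replacement.
    MemoryLimit=
    MemoryMax=512M
    EOF
    systemctl daemon-reload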
Dec 5 03:03:16 localhost python3[48207]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False
Dec 5 03:03:16 localhost python3[48226]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 03:03:17 localhost python3[48244]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 5 03:03:17 localhost python3[48244]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json
Dec 5 03:03:17 localhost python3[48244]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false
Dec 5 03:03:24 localhost podman[48256]: 2025-12-05 08:03:17.698549302 +0000 UTC m=+0.040489416 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 5 03:03:24 localhost python3[48244]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect bac901955dcf7a32a493c6ef724c092009bbc18467858aa8c55e916b8c2b2b8f --format json
Dec 5 03:03:24 localhost python3[48358]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 5 03:03:24 localhost python3[48358]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json
Dec 5 03:03:24 localhost python3[48358]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false
Dec 5 03:03:31 localhost podman[48372]: 2025-12-05 08:03:24.986148421 +0000 UTC m=+0.031275627 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 5 03:03:31 localhost python3[48358]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 44feaf8d87c1d40487578230316b622680576d805efdb45dfeea6aad464b41f1 --format json
Dec 5 03:03:32 localhost python3[48474]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci',
'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Dec 5 03:03:32 localhost python3[48474]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json Dec 5 03:03:32 localhost python3[48474]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false Dec 5 03:03:48 localhost podman[48487]: 2025-12-05 08:03:32.098630957 +0000 UTC m=+0.022134625 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:03:48 localhost python3[48474]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3a088c12511c977065fdc5f1594cba7b1a79f163578a6ffd0ac4a475b8e67938 --format json Dec 5 03:03:48 localhost systemd[1]: tmp-crun.UPrabJ.mount: Deactivated successfully. Dec 5 03:03:48 localhost podman[49218]: 2025-12-05 08:03:48.891586394 +0000 UTC m=+0.093117250 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=1763362218, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4) Dec 5 03:03:48 localhost python3[49215]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Dec 5 03:03:48 localhost python3[49215]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json Dec 5 03:03:48 localhost python3[49215]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false Dec 5 03:03:49 localhost podman[49218]: 2025-12-05 
08:03:48.999141006 +0000 UTC m=+0.200671862 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, name=rhceph, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7)
Dec 5 03:03:57 localhost sshd[49418]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 03:03:59 localhost podman[49246]: 2025-12-05 08:03:49.017318185 +0000 UTC m=+0.047975615 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 5 03:03:59 localhost python3[49215]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 514d439186251360cf734cbc6d4a44c834664891872edf3798a653dfaacf10c0 --format json
Dec 5 03:04:00 localhost python3[49453]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 5 03:04:00 localhost python3[49453]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json
Dec 5 03:04:00 localhost python3[49453]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false
Dec 5 03:04:03 localhost podman[49466]: 2025-12-05 08:04:00.255546766 +0000 UTC m=+0.032028566 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 5 03:04:03 localhost python3[49453]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a9dd7a2ac6f35cb086249f87f74e2f8e74e7e2ad5141ce2228263be6faedce26 --format json
Dec 5 03:04:04 localhost python3[49555]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 5 03:04:04 localhost python3[49555]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json
Dec 5 03:04:04 localhost python3[49555]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false
Dec 5 03:04:13 localhost podman[49567]: 2025-12-05 08:04:04.128384662 +0000 UTC m=+0.040979215 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 5 03:04:13 localhost python3[49555]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 24976907b2c2553304119aba5731a800204d664feed24ca9eb7f2b4c7d81016b --format json
Dec 5 03:04:13 localhost python3[49774]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 5 03:04:13 localhost python3[49774]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Dec 5 03:04:14 localhost python3[49774]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Dec 5 03:04:16 localhost podman[49786]: 2025-12-05 08:04:14.087516598 +0000 UTC m=+0.033519942 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 5 03:04:16 localhost python3[49774]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 57163a7b21fdbb804a27897cb6e6052a5e5c7a339c45d663e80b52375a760dcf --format json
Dec 5 03:04:16 localhost python3[49863]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 5 03:04:16 localhost python3[49863]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json
Dec 5 03:04:16 localhost python3[49863]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false
Dec 5 03:04:19 localhost podman[49876]: 2025-12-05 08:04:16.821485106 +0000 UTC m=+0.043955649 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 5 03:04:19 localhost python3[49863]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 076d82a27d63c8328729ed27ceb4291585ae18d017befe6fe353df7aa11715ae --format json
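The PODMAN-IMAGE-DEBUG lines above expose the three-step pattern every one of these podman_image tasks follows: list the tag locally, force-pull it quietly, then inspect the returned ID. A minimal Python sketch of that flow, reusing the exact CLI invocations from the log (the image name is one of those pulled above; this is an illustration, not the module's actual source):

    #!/usr/bin/env python3
    # Sketch of the list -> pull -> inspect sequence logged by podman_image.
    # Assumes podman is on PATH and the registry is reachable.
    import json
    import subprocess

    IMAGE = "registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1"  # from the log

    def run(*argv: str) -> str:
        """Run a command and return stdout, raising on a non-zero exit."""
        return subprocess.run(argv, check=True, capture_output=True, text=True).stdout

    # 1. What is present locally (the "/bin/podman image ls ... --format json" line).
    local = json.loads(run("podman", "image", "ls", IMAGE, "--format", "json") or "[]")

    # 2. Force-pull quietly; -q prints only the image ID, which is what the
    #    module feeds to the subsequent inspect call.
    image_id = run("podman", "pull", IMAGE, "-q", "--tls-verify=false").strip()

    # 3. Inspect the pulled ID (the "/bin/podman inspect <id> --format json" line).
    meta = json.loads(run("podman", "inspect", image_id, "--format", "json"))
    print(image_id, "already cached" if local else "newly pulled", meta[0]["Id"])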
Dec 5 03:04:19 localhost python3[49954]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 5 03:04:19 localhost python3[49954]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json
Dec 5 03:04:19 localhost python3[49954]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false
Dec 5 03:04:22 localhost podman[49968]: 2025-12-05 08:04:19.974994146 +0000 UTC m=+0.041783401 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 5 03:04:22 localhost python3[49954]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d0dbcb95546840a8d088df044347a7877ad5ea45a2ddba0578e9bb5de4ab0da5 --format json
Dec 5 03:04:23 localhost python3[50046]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 5 03:04:23 localhost python3[50046]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Dec 5 03:04:23 localhost python3[50046]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Dec 5 03:04:27 localhost podman[50059]: 2025-12-05 08:04:23.403951909 +0000 UTC m=+0.042491093 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 5 03:04:27 localhost python3[50046]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect e6e981540e553415b2d6eda490d7683db07164af2e7a0af8245623900338a4d6 --format json
Dec 5 03:04:27 localhost python3[50149]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 5 03:04:27 localhost python3[50149]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Dec 5 03:04:27 localhost python3[50149]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Dec 5 03:04:29 localhost podman[50162]: 2025-12-05 08:04:27.545657143 +0000 UTC m=+0.042094671 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 5 03:04:29 localhost python3[50149]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 87ee88cbf01fb42e0b22747072843bcca6130a90eda4de6e74b3ccd847bb4040 --format json
Dec 5 03:04:30 localhost python3[50237]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 03:04:32 localhost ansible-async_wrapper.py[50409]: Invoked with 18531749463 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921871.3946342-83658-169690142992460/AnsiballZ_command.py _
Dec 5 03:04:32 localhost ansible-async_wrapper.py[50412]: Starting module and watcher
Dec 5 03:04:32 localhost ansible-async_wrapper.py[50412]: Start watching 50413 (3600)
Dec 5 03:04:32 localhost ansible-async_wrapper.py[50413]: Start module (50413)
Dec 5 03:04:32 localhost ansible-async_wrapper.py[50409]: Return async_wrapper task started.
Dec 5 03:04:32 localhost python3[50433]: ansible-ansible.legacy.async_status Invoked with jid=18531749463.50409 mode=status _async_dir=/tmp/.ansible_async
Dec 5 03:04:35 localhost puppet-user[50417]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 5 03:04:35 localhost puppet-user[50417]: (file: /etc/puppet/hiera.yaml)
Dec 5 03:04:35 localhost puppet-user[50417]: Warning: Undefined variable '::deploy_config_name';
Dec 5 03:04:35 localhost puppet-user[50417]: (file & line not available)
Dec 5 03:04:35 localhost puppet-user[50417]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 5 03:04:35 localhost puppet-user[50417]: (file & line not available)
Dec 5 03:04:35 localhost puppet-user[50417]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 5 03:04:35 localhost puppet-user[50417]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 5 03:04:35 localhost puppet-user[50417]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.16 seconds
Dec 5 03:04:36 localhost puppet-user[50417]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Dec 5 03:04:36 localhost puppet-user[50417]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Dec 5 03:04:36 localhost puppet-user[50417]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Dec 5 03:04:36 localhost puppet-user[50417]: Notice: Applied catalog in 0.82 seconds
Dec 5 03:04:36 localhost puppet-user[50417]: Application:
Dec 5 03:04:36 localhost puppet-user[50417]: Initial environment: production
Dec 5 03:04:36 localhost puppet-user[50417]: Converged environment: production
Dec 5 03:04:36 localhost puppet-user[50417]: Run mode: user
Dec 5 03:04:36 localhost puppet-user[50417]: Changes:
Dec 5 03:04:36 localhost puppet-user[50417]: Total: 3
Dec 5 03:04:36 localhost puppet-user[50417]: Events:
Dec 5 03:04:36 localhost puppet-user[50417]: Success: 3
Dec 5 03:04:36 localhost puppet-user[50417]: Total: 3
Dec 5 03:04:36 localhost puppet-user[50417]: Resources:
Dec 5 03:04:36 localhost puppet-user[50417]: Changed: 3
Dec 5 03:04:36 localhost puppet-user[50417]: Out of sync: 3
Dec 5 03:04:36 localhost puppet-user[50417]: Total: 10
Dec 5 03:04:36 localhost puppet-user[50417]: Time:
Dec 5 03:04:36 localhost puppet-user[50417]: Schedule: 0.00
Dec 5 03:04:36 localhost puppet-user[50417]: File: 0.00
Dec 5 03:04:36 localhost puppet-user[50417]: Exec: 0.01
Dec 5 03:04:36 localhost puppet-user[50417]: Config retrieval: 0.19
Dec 5 03:04:36 localhost puppet-user[50417]: Augeas: 0.78
Dec 5 03:04:36 localhost puppet-user[50417]: Transaction evaluation: 0.81
Dec 5 03:04:36 localhost puppet-user[50417]: Catalog application: 0.82
Dec 5 03:04:36 localhost puppet-user[50417]: Last run: 1764921876
Dec 5 03:04:36 localhost puppet-user[50417]: Filebucket: 0.00
Dec 5 03:04:36 localhost puppet-user[50417]: Total: 0.82
Dec 5 03:04:36 localhost puppet-user[50417]: Version:
Dec 5 03:04:36 localhost puppet-user[50417]: Config: 1764921875
Dec 5 03:04:36 localhost puppet-user[50417]: Puppet: 7.10.0
Dec 5 03:04:36 localhost ansible-async_wrapper.py[50413]: Module complete (50413)
Dec 5 03:04:37 localhost ansible-async_wrapper.py[50412]: Done in kid B.
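The report's "Last run: 1764921876" is a Unix epoch in seconds. Decoding it (standard library only) gives 08:04:36 UTC, which matches the podman timestamps and confirms the "Dec 5 03:04:36" syslog stamps are local time at UTC-5:

    from datetime import datetime, timezone

    # "Last run" from the puppet summary above, decoded to UTC.
    print(datetime.fromtimestamp(1764921876, tz=timezone.utc))
    # -> 2025-12-05 08:04:36+00:00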
Dec 5 03:04:42 localhost python3[50560]: ansible-ansible.legacy.async_status Invoked with jid=18531749463.50409 mode=status _async_dir=/tmp/.ansible_async
Dec 5 03:04:43 localhost python3[50576]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 5 03:04:43 localhost python3[50592]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 03:04:44 localhost python3[50640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:04:44 localhost python3[50683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921883.9247892-84113-230808453048732/source _original_basename=tmp0xgzwp19 follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 5 03:04:44 localhost python3[50713]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:04:46 localhost python3[50816]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 5 03:04:46 localhost python3[50835]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 5 03:04:47 localhost python3[50851]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005546419 step=1 update_config_hash_only=False
Dec 5 03:04:48 localhost python3[50867]: ansible-file Invoked with
path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:04:48 localhost python3[50883]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 5 03:04:49 localhost python3[50899]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 5 03:04:50 localhost python3[50939]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False Dec 5 03:04:50 localhost podman[51115]: 2025-12-05 08:04:50.45379101 +0000 UTC m=+0.069967268 container create 484a5683cd2ad9a0085017db5dab50c547949453626e5a0e2ce53a94b123e904 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, build-date=2025-11-18T22:49:46Z, container_name=container-puppet-metrics_qdr, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.component=openstack-qdrouterd-container, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1) Dec 5 03:04:50 localhost systemd[1]: Started libpod-conmon-484a5683cd2ad9a0085017db5dab50c547949453626e5a0e2ce53a94b123e904.scope. Dec 5 03:04:50 localhost podman[51142]: 2025-12-05 08:04:50.490507727 +0000 UTC m=+0.069311489 container create 17da04a9d84a513503d4ac2ac6d30fe00c987b55233e8745e8522f539b5367a0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com) Dec 5 03:04:50 localhost systemd[1]: Started libcrun container. 
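Each container-create event above embeds, as a config_data label, the dict tripleo_ansible used to build the container. A rough sketch of how such a blob maps onto podman run flags; the dict literal below is an abbreviated stand-in for the crond blob logged above, and the mapping is illustrative rather than tripleo_ansible's actual code:

    # Abbreviated config_data, using the exact keys shown in the events above.
    config_data = {
        "security_opt": ["label=disable"],
        "user": 0,
        "net": ["host"],
        "entrypoint": "/var/lib/container-puppet/container-puppet.sh",
        "environment": {"STEP": 6, "NAME": "crond"},
        "image": "registry.redhat.io/rhosp-rhel9/openstack-cron:17.1",
        "volumes": ["/dev/log:/dev/log:rw", "/etc/puppet:/tmp/puppet-etc:ro"],
    }

    # Translate each field into the corresponding podman run flag.
    cmd = ["podman", "run", "--name", "container-puppet-crond",
           "--user", str(config_data["user"]),
           "--entrypoint", config_data["entrypoint"]]
    for opt in config_data["security_opt"]:
        cmd += ["--security-opt", opt]
    for net in config_data["net"]:
        cmd += ["--net", net]
    for key, value in config_data["environment"].items():
        cmd += ["--env", f"{key}={value}"]
    for volume in config_data["volumes"]:
        cmd += ["--volume", volume]
    cmd.append(config_data["image"])
    print(" ".join(cmd))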
Dec 5 03:04:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a5aafadc3f6652404db23d10ded19788ad6c73ce23f65e7215c7f5ecc619d5e/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 5 03:04:50 localhost podman[51115]: 2025-12-05 08:04:50.412512143 +0000 UTC m=+0.028688441 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 5 03:04:50 localhost podman[51115]: 2025-12-05 08:04:50.515923017 +0000 UTC m=+0.132099285 container init 484a5683cd2ad9a0085017db5dab50c547949453626e5a0e2ce53a94b123e904 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-18T22:49:46Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true) Dec 5 03:04:50 localhost podman[51115]: 2025-12-05 08:04:50.52581589 +0000 UTC m=+0.141992188 container start 484a5683cd2ad9a0085017db5dab50c547949453626e5a0e2ce53a94b123e904 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, config_id=tripleo_puppet_step1, container_name=container-puppet-metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 5 03:04:50 localhost podman[51115]: 2025-12-05 08:04:50.526082918 +0000 UTC m=+0.142259206 container attach 484a5683cd2ad9a0085017db5dab50c547949453626e5a0e2ce53a94b123e904 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, container_name=container-puppet-metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Dec 5 03:04:50 localhost systemd[1]: Started libpod-conmon-17da04a9d84a513503d4ac2ac6d30fe00c987b55233e8745e8522f539b5367a0.scope. 
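The create -> init -> start -> attach sequence journald records for each container-puppet-* container can also be followed live from podman's own event stream. A small sketch; the Status/Name/Image JSON keys are the usual podman event fields, but treat them as an assumption across podman versions:

    import json
    import subprocess

    # Stream podman events as one JSON object per line and echo the same
    # lifecycle states that appear in the journal above.
    proc = subprocess.Popen(["podman", "events", "--format", "{{json .}}"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        event = json.loads(line)
        if event.get("Status") in {"create", "init", "start", "attach", "exec_died"}:
            print(event["Status"], event.get("Name"), event.get("Image"))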
Dec 5 03:04:50 localhost podman[51164]: 2025-12-05 08:04:50.540147041 +0000 UTC m=+0.102292141 container create 087c70dc1c65d98bcc92318106106f6f72101e85ca4ebccfee69c2c4802c3f78 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=container-puppet-collectd, config_id=tripleo_puppet_step1) Dec 5 03:04:50 localhost systemd[1]: Started libcrun container. 
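The recurring kernel notice that an xfs filesystem "supports timestamps until 2038 (0x7fffffff)" refers to the largest 32-bit signed Unix time; decoding the constant shows the exact cutoff the kernel is warning about:

    from datetime import datetime, timezone

    # 0x7fffffff is the maximum 32-bit signed Unix timestamp.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00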
Dec 5 03:04:50 localhost podman[51159]: 2025-12-05 08:04:50.544693749 +0000 UTC m=+0.111894645 container create 596f51604a711d7849bd2da48c558a141b30a912a8d8b5f495e3e5b0ff3d1a5b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:04:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6e2221a6fb4573209f1900bac1c26a012fe88f558913e381fcbd7377aeb8f8/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 5 03:04:50 localhost podman[51145]: 2025-12-05 08:04:50.449645022 +0000 UTC m=+0.024904145 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:04:50 localhost podman[51142]: 2025-12-05 08:04:50.44664247 +0000 UTC m=+0.025446252 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 5 03:04:50 localhost podman[51159]: 2025-12-05 08:04:50.482605934 +0000 UTC m=+0.049806840 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 5 03:04:50 
localhost podman[51164]: 2025-12-05 08:04:50.49255357 +0000 UTC m=+0.054698720 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 5 03:04:51 localhost podman[51142]: 2025-12-05 08:04:51.661681421 +0000 UTC m=+1.240485213 container init 17da04a9d84a513503d4ac2ac6d30fe00c987b55233e8745e8522f539b5367a0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=container-puppet-crond, distribution-scope=public, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com) Dec 5 03:04:51 localhost podman[51145]: 2025-12-05 08:04:51.664519768 +0000 UTC m=+1.239778871 container create 6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 
nova-libvirt, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, container_name=container-puppet-nova_libvirt, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Dec 5 03:04:51 localhost podman[51142]: 2025-12-05 08:04:51.676715052 +0000 UTC m=+1.255518854 container start 17da04a9d84a513503d4ac2ac6d30fe00c987b55233e8745e8522f539b5367a0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 
17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, io.buildah.version=1.41.4, vcs-type=git) Dec 5 03:04:51 localhost podman[51142]: 2025-12-05 08:04:51.677012351 +0000 UTC m=+1.255816143 container attach 17da04a9d84a513503d4ac2ac6d30fe00c987b55233e8745e8522f539b5367a0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=container-puppet-crond, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 5 03:04:51 localhost systemd[1]: Started libpod-conmon-596f51604a711d7849bd2da48c558a141b30a912a8d8b5f495e3e5b0ff3d1a5b.scope. Dec 5 03:04:51 localhost systemd[1]: Started libcrun container. Dec 5 03:04:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51da69f3c77ca864fe537c107c307e006f1c860b5abf612a97e250c0a083ad91/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff) Dec 5 03:04:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51da69f3c77ca864fe537c107c307e006f1c860b5abf612a97e250c0a083ad91/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 5 03:04:51 localhost podman[51159]: 2025-12-05 08:04:51.711319504 +0000 UTC m=+1.278520410 container init 596f51604a711d7849bd2da48c558a141b30a912a8d8b5f495e3e5b0ff3d1a5b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_puppet_step1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=container-puppet-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:04:51 localhost podman[51159]: 2025-12-05 08:04:51.723806117 +0000 UTC m=+1.291007013 container start 596f51604a711d7849bd2da48c558a141b30a912a8d8b5f495e3e5b0ff3d1a5b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=container-puppet-iscsid, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=) Dec 5 03:04:51 localhost podman[51159]: 2025-12-05 08:04:51.724119437 +0000 UTC m=+1.291320383 container attach 596f51604a711d7849bd2da48c558a141b30a912a8d8b5f495e3e5b0ff3d1a5b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, container_name=container-puppet-iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, architecture=x86_64) Dec 5 03:04:51 localhost systemd[1]: Started libpod-conmon-087c70dc1c65d98bcc92318106106f6f72101e85ca4ebccfee69c2c4802c3f78.scope. 
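The volumes lists inside these config_data blobs use podman's src:dst[:options] bind-mount syntax. A naive helper for auditing which host paths are mounted writable; it assumes no colons inside the paths themselves, which holds for every spec in this log:

    # Split "src:dst[:options]" volume specs like the ones in config_data.
    def parse_volume(spec: str) -> tuple[str, str, str]:
        src, dst, *rest = spec.split(":")
        return src, dst, rest[0] if rest else "rw"  # podman bind mounts default to rw

    for spec in ("/etc/iscsi:/tmp/iscsi.host:z",
                 "/var/lib/config-data:/var/lib/config-data:rw",
                 "/etc/puppet:/tmp/puppet-etc:ro"):
        src, dst, opts = parse_volume(spec)
        print(f"{src} -> {dst} [{opts}]")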
Dec 5 03:04:51 localhost systemd[1]: Started libcrun container. Dec 5 03:04:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db559d2ef90ada0da6d1db35eb32978dea07983229020726ad711f251fdf2a1/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 5 03:04:51 localhost podman[51164]: 2025-12-05 08:04:51.762097053 +0000 UTC m=+1.324242183 container init 087c70dc1c65d98bcc92318106106f6f72101e85ca4ebccfee69c2c4802c3f78 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, com.redhat.component=openstack-collectd-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Dec 5 03:04:51 localhost podman[51164]: 2025-12-05 08:04:51.768371355 +0000 UTC m=+1.330516455 container start 087c70dc1c65d98bcc92318106106f6f72101e85ca4ebccfee69c2c4802c3f78 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 
'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-collectd, container_name=container-puppet-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container) Dec 5 03:04:51 localhost podman[51164]: 2025-12-05 08:04:51.768564821 +0000 UTC m=+1.330709991 container attach 087c70dc1c65d98bcc92318106106f6f72101e85ca4ebccfee69c2c4802c3f78 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, name=rhosp17/openstack-collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com) Dec 5 03:04:51 localhost systemd[1]: Started libpod-conmon-6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0.scope. Dec 5 03:04:51 localhost systemd[1]: Started libcrun container. 
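Every container in this step emits the same podman lifecycle, create, init, start, attach, and then died plus cleanup once its puppet run exits (visible further down), and each event reprints the full image-label set together with config_data, which is why the same blobs recur almost verbatim. A rough parser for these event lines, assuming nothing beyond the format visible here; field names are illustrative:

    import re

    EVENT_RE = re.compile(
        r"podman\[(?P<pid>\d+)\]: "
        r"(?P<ts>\S+ \S+ \S+ UTC) m=\+\S+ "
        r"(?:container|image) (?P<event>\w+)"
        r"(?: (?P<id>[0-9a-f]{64}))?")

    def parse_event(line):
        m = EVENT_RE.search(line)
        if m is None:
            return None
        cn = re.search(r"container_name=([^,)]+)", line)
        return {"pid": m.group("pid"), "ts": m.group("ts"),
                "event": m.group("event"), "id": m.group("id"),
                "name": cn.group(1) if cn else None}

On the init event above this yields event='init' and name='container-puppet-collectd'; image pull events carry no container id, so id comes back None.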
Dec 5 03:04:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83f227a3727657fa28ad038d3faa4acf2257033c4c4d3daa819bb5632ffbdd2/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 5 03:04:51 localhost podman[51145]: 2025-12-05 08:04:51.820556947 +0000 UTC m=+1.395816050 container init 6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_puppet_step1, container_name=container-puppet-nova_libvirt, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Dec 5 03:04:51 localhost podman[51145]: 2025-12-05 08:04:51.826841749 +0000 UTC m=+1.402100872 container start 6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, container_name=container-puppet-nova_libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com) Dec 5 03:04:51 localhost podman[51145]: 2025-12-05 08:04:51.827232051 +0000 UTC m=+1.402491154 container attach 6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, version=17.1.12, config_id=tripleo_puppet_step1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt) Dec 5 03:04:52 localhost podman[51014]: 2025-12-05 08:04:50.336368826 +0000 UTC m=+0.050403828 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Dec 5 03:04:52 localhost podman[51342]: 2025-12-05 08:04:52.891734262 +0000 UTC m=+0.068013518 container create d4ce7191bf89afa6f91d15d75fa81321cadff02d542c31096763dda3994cf7d3 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-central, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-central-container, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-19T00:11:59Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-ceilometer) Dec 5 03:04:52 localhost systemd[1]: Started libpod-conmon-d4ce7191bf89afa6f91d15d75fa81321cadff02d542c31096763dda3994cf7d3.scope. Dec 5 03:04:52 localhost systemd[1]: Started libcrun container. 
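The kernel line just below (like the identical ones earlier in this step) is informational rather than an error: the XFS filesystem backing that overlay mount apparently lacks the bigtime feature, so its inode timestamps saturate at 0x7fffffff, and the kernel notes the cap once per (re)mount. Decoding the printed value confirms it is the classic 32-bit time_t limit:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch, the cap named in the message.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00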
Dec 5 03:04:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa8c64aed3dd43632e0f6a8595077465fd2285f2415c20f2fb7f9b03f641646/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 5 03:04:52 localhost podman[51342]: 2025-12-05 08:04:52.949619989 +0000 UTC m=+0.125899275 container init d4ce7191bf89afa6f91d15d75fa81321cadff02d542c31096763dda3994cf7d3 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-ceilometer-central, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, build-date=2025-11-19T00:11:59Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-ceilometer, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ceilometer-central-container, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, vcs-type=git) Dec 5 03:04:52 localhost podman[51342]: 2025-12-05 08:04:52.8610639 +0000 UTC m=+0.037343176 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Dec 5 03:04:52 localhost podman[51342]: 2025-12-05 08:04:52.993093252 +0000 UTC m=+0.169372518 container start 
d4ce7191bf89afa6f91d15d75fa81321cadff02d542c31096763dda3994cf7d3 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, build-date=2025-11-19T00:11:59Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=container-puppet-ceilometer, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20251118.1, url=https://www.redhat.com) Dec 5 03:04:52 localhost podman[51342]: 2025-12-05 08:04:52.993307339 +0000 UTC m=+0.169586605 container attach d4ce7191bf89afa6f91d15d75fa81321cadff02d542c31096763dda3994cf7d3 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-central, container_name=container-puppet-ceilometer, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': 
True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-central, vcs-type=git, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, build-date=2025-11-19T00:11:59Z, com.redhat.component=openstack-ceilometer-central-container, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public) Dec 5 03:04:53 localhost puppet-user[51229]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 5 03:04:53 localhost puppet-user[51229]: (file: /etc/puppet/hiera.yaml) Dec 5 03:04:53 localhost puppet-user[51229]: Warning: Undefined variable '::deploy_config_name'; Dec 5 03:04:53 localhost puppet-user[51229]: (file & line not available) Dec 5 03:04:53 localhost puppet-user[51293]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 5 03:04:53 localhost puppet-user[51293]: (file: /etc/puppet/hiera.yaml) Dec 5 03:04:53 localhost puppet-user[51293]: Warning: Undefined variable '::deploy_config_name'; Dec 5 03:04:53 localhost puppet-user[51293]: (file & line not available) Dec 5 03:04:53 localhost puppet-user[51229]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 5 03:04:53 localhost puppet-user[51229]: (file & line not available) Dec 5 03:04:53 localhost puppet-user[51229]: Notice: Accepting previously invalid value for target type 'Integer' Dec 5 03:04:53 localhost puppet-user[51293]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 5 03:04:53 localhost puppet-user[51293]: (file & line not available) Dec 5 03:04:53 localhost ovs-vsctl[51625]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Dec 5 03:04:53 localhost puppet-user[51258]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 5 03:04:53 localhost puppet-user[51258]: (file: /etc/puppet/hiera.yaml) Dec 5 03:04:53 localhost puppet-user[51258]: Warning: Undefined variable '::deploy_config_name'; Dec 5 03:04:53 localhost puppet-user[51258]: (file & line not available) Dec 5 03:04:53 localhost puppet-user[51229]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.12 seconds Dec 5 03:04:53 localhost puppet-user[51276]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 5 03:04:53 localhost puppet-user[51276]: (file: /etc/puppet/hiera.yaml) Dec 5 03:04:53 localhost puppet-user[51258]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 5 03:04:53 localhost puppet-user[51258]: (file & line not available) Dec 5 03:04:53 localhost puppet-user[51276]: Warning: Undefined variable '::deploy_config_name'; Dec 5 03:04:53 localhost puppet-user[51276]: (file & line not available) Dec 5 03:04:53 localhost puppet-user[51229]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root' Dec 5 03:04:53 localhost puppet-user[51229]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root' Dec 5 03:04:53 localhost puppet-user[51229]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755' Dec 5 03:04:53 localhost puppet-user[51229]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created Dec 5 03:04:53 localhost puppet-user[51229]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}fb0cb1700aee047453c23ecdfa0763ed38a8623a9d6eb61901ca6f9b891de5d9' Dec 5 03:04:53 localhost puppet-user[51229]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created Dec 5 03:04:53 localhost puppet-user[51229]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created Dec 5 03:04:53 localhost puppet-user[51258]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.10 seconds Dec 5 03:04:53 localhost puppet-user[51229]: Notice: Applied catalog in 0.03 seconds Dec 5 03:04:53 localhost puppet-user[51229]: Application: Dec 5 03:04:53 localhost puppet-user[51229]: Initial environment: production Dec 5 03:04:53 localhost puppet-user[51229]: Converged environment: production Dec 5 03:04:53 localhost puppet-user[51229]: Run mode: user Dec 5 03:04:53 localhost puppet-user[51229]: Changes: Dec 5 03:04:53 localhost 
puppet-user[51229]: Total: 7 Dec 5 03:04:53 localhost puppet-user[51229]: Events: Dec 5 03:04:53 localhost puppet-user[51229]: Success: 7 Dec 5 03:04:53 localhost puppet-user[51229]: Total: 7 Dec 5 03:04:53 localhost puppet-user[51229]: Resources: Dec 5 03:04:53 localhost puppet-user[51229]: Skipped: 13 Dec 5 03:04:53 localhost puppet-user[51229]: Changed: 5 Dec 5 03:04:53 localhost puppet-user[51229]: Out of sync: 5 Dec 5 03:04:53 localhost puppet-user[51229]: Total: 20 Dec 5 03:04:53 localhost puppet-user[51229]: Time: Dec 5 03:04:53 localhost puppet-user[51229]: File: 0.01 Dec 5 03:04:53 localhost puppet-user[51229]: Transaction evaluation: 0.03 Dec 5 03:04:53 localhost puppet-user[51229]: Catalog application: 0.03 Dec 5 03:04:53 localhost puppet-user[51229]: Config retrieval: 0.16 Dec 5 03:04:53 localhost puppet-user[51229]: Last run: 1764921893 Dec 5 03:04:53 localhost puppet-user[51229]: Total: 0.03 Dec 5 03:04:53 localhost puppet-user[51229]: Version: Dec 5 03:04:53 localhost puppet-user[51229]: Config: 1764921893 Dec 5 03:04:53 localhost puppet-user[51229]: Puppet: 7.10.0 Dec 5 03:04:53 localhost puppet-user[51276]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 5 03:04:53 localhost puppet-user[51276]: (file & line not available) Dec 5 03:04:53 localhost puppet-user[51258]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0' Dec 5 03:04:53 localhost puppet-user[51258]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created Dec 5 03:04:53 localhost puppet-user[51309]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Dec 5 03:04:53 localhost puppet-user[51309]: (file: /etc/puppet/hiera.yaml) Dec 5 03:04:53 localhost puppet-user[51309]: Warning: Undefined variable '::deploy_config_name'; Dec 5 03:04:53 localhost puppet-user[51309]: (file & line not available) Dec 5 03:04:53 localhost puppet-user[51258]: Notice: Applied catalog in 0.04 seconds Dec 5 03:04:53 localhost puppet-user[51258]: Application: Dec 5 03:04:53 localhost puppet-user[51258]: Initial environment: production Dec 5 03:04:53 localhost puppet-user[51258]: Converged environment: production Dec 5 03:04:53 localhost puppet-user[51258]: Run mode: user Dec 5 03:04:53 localhost puppet-user[51258]: Changes: Dec 5 03:04:53 localhost puppet-user[51258]: Total: 2 Dec 5 03:04:53 localhost puppet-user[51258]: Events: Dec 5 03:04:53 localhost puppet-user[51258]: Success: 2 Dec 5 03:04:53 localhost puppet-user[51258]: Total: 2 Dec 5 03:04:53 localhost puppet-user[51258]: Resources: Dec 5 03:04:53 localhost puppet-user[51258]: Changed: 2 Dec 5 03:04:53 localhost puppet-user[51258]: Out of sync: 2 Dec 5 03:04:53 localhost puppet-user[51258]: Skipped: 7 Dec 5 03:04:53 localhost puppet-user[51258]: Total: 9 Dec 5 03:04:53 localhost puppet-user[51258]: Time: Dec 5 03:04:53 localhost puppet-user[51258]: File: 0.00 Dec 5 03:04:53 localhost puppet-user[51258]: Cron: 0.01 Dec 5 03:04:53 localhost puppet-user[51258]: Transaction evaluation: 0.04 Dec 5 03:04:53 localhost puppet-user[51258]: Catalog application: 0.04 Dec 5 03:04:53 localhost puppet-user[51258]: Config retrieval: 0.13 Dec 5 03:04:53 localhost puppet-user[51258]: Last run: 1764921893 Dec 5 03:04:53 localhost puppet-user[51258]: Total: 0.04 Dec 5 03:04:53 localhost puppet-user[51258]: Version: Dec 5 03:04:53 localhost puppet-user[51258]: Config: 1764921893 Dec 5 03:04:53 localhost puppet-user[51258]: Puppet: 7.10.0 Dec 5 03:04:53 localhost puppet-user[51276]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.14 seconds Dec 5 03:04:53 localhost puppet-user[51309]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 5 03:04:53 localhost puppet-user[51309]: (file & line not available) Dec 5 03:04:53 localhost puppet-user[51276]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully Dec 5 03:04:53 localhost puppet-user[51276]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created Dec 5 03:04:53 localhost puppet-user[51276]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully Dec 5 03:04:54 localhost puppet-user[51293]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.38 seconds Dec 5 03:04:54 localhost puppet-user[51309]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \ Dec 5 03:04:54 localhost puppet-user[51309]: in a future release. Use nova::cinder::os_region_name instead Dec 5 03:04:54 localhost puppet-user[51309]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \ Dec 5 03:04:54 localhost puppet-user[51309]: in a future release. Use nova::cinder::catalog_info instead Dec 5 03:04:54 localhost systemd[1]: libpod-484a5683cd2ad9a0085017db5dab50c547949453626e5a0e2ce53a94b123e904.scope: Deactivated successfully. 
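The puppet-user warnings through this stretch are deprecations, not failures: the version 3 hiera.yaml format, the hiera() function superseded by lookup(), and the unset ::deploy_config_name variable. Each run still compiles its catalog, applies it, and prints the Changes/Events/Resources/Time/Version report seen above. Because this capture flattens several syslog records onto each physical line, pulling those reports back out takes a re-split first; a sketch under that assumption, with illustrative names:

    import re

    # Re-split flattened lines on the syslog prefix, then collect the
    # numeric report entries (Total, Success, Config retrieval, ...) per PID.
    SPLIT_RE = re.compile(r"(?=Dec\s+5 \d\d:\d\d:\d\d localhost )")
    COUNTER_RE = re.compile(
        r"puppet-user\[(?P<pid>\d+)\]:\s+"
        r"(?P<key>[A-Z][A-Za-z ]*):\s*(?P<val>[\d.]+)\s*$")

    def fold_counters(raw):
        counters = {}
        for rec in SPLIT_RE.split(raw):
            m = COUNTER_RE.search(rec)
            if m:
                counters.setdefault(m.group("pid"), []).append(
                    (m.group("key"), m.group("val")))
        return counters

For PID 51229 this recovers ('Total', '7'), ('Success', '7'), ('Config retrieval', '0.16') and so on, matching the report printed above.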
Dec 5 03:04:54 localhost systemd[1]: libpod-484a5683cd2ad9a0085017db5dab50c547949453626e5a0e2ce53a94b123e904.scope: Consumed 2.296s CPU time. Dec 5 03:04:54 localhost podman[51755]: 2025-12-05 08:04:54.112350254 +0000 UTC m=+0.046429517 container died 484a5683cd2ad9a0085017db5dab50c547949453626e5a0e2ce53a94b123e904 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root' Dec 5 03:04:54 
localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640' Dec 5 03:04:54 localhost systemd[1]: libpod-17da04a9d84a513503d4ac2ac6d30fe00c987b55233e8745e8522f539b5367a0.scope: Deactivated successfully. Dec 5 03:04:54 localhost systemd[1]: libpod-17da04a9d84a513503d4ac2ac6d30fe00c987b55233e8745e8522f539b5367a0.scope: Consumed 2.245s CPU time. Dec 5 03:04:54 localhost podman[51142]: 2025-12-05 08:04:54.129432068 +0000 UTC m=+3.708235850 container died 17da04a9d84a513503d4ac2ac6d30fe00c987b55233e8745e8522f539b5367a0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, io.buildah.version=1.41.4) Dec 5 03:04:54 localhost systemd[1]: tmp-crun.qUiYtC.mount: Deactivated successfully. Dec 5 03:04:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-484a5683cd2ad9a0085017db5dab50c547949453626e5a0e2ce53a94b123e904-userdata-shm.mount: Deactivated successfully. 
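In the PODMAN-CONTAINER-DEBUG replay of the crond command just below, the newlines inside STEP_CONFIG appear as #012: rsyslog by default rewrites control characters as '#' followed by three octal digits. A small decoder, assuming that default escaping:

    import re

    def unescape_control(text):
        # '#' + three octal digits -> the control byte, e.g. '#012' -> '\n'.
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), text)

    print(unescape_control("include ::tripleo::packages#012"
                           "include tripleo::profile::base::logging::logrotate"))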
Dec 5 03:04:54 localhost puppet-user[51309]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41) Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root' Dec 5 03:04:54 localhost podman[51777]: 2025-12-05 08:04:54.223276188 +0000 UTC m=+0.082159143 container cleanup 17da04a9d84a513503d4ac2ac6d30fe00c987b55233e8745e8522f539b5367a0 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed Dec 5 03:04:54 localhost 
puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed Dec 5 03:04:54 localhost systemd[1]: libpod-conmon-17da04a9d84a513503d4ac2ac6d30fe00c987b55233e8745e8522f539b5367a0.scope: Deactivated successfully. Dec 5 03:04:54 localhost python3[50939]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546419 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro 
registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb' Dec 5 03:04:54 localhost podman[51755]: 2025-12-05 08:04:54.254023271 +0000 UTC m=+0.188102504 container cleanup 484a5683cd2ad9a0085017db5dab50c547949453626e5a0e2ce53a94b123e904 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, container_name=container-puppet-metrics_qdr, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 5 03:04:54 localhost puppet-user[51309]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5) Dec 5 03:04:54 localhost puppet-user[51309]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5) Dec 5 03:04:54 localhost puppet-user[51309]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5) Dec 5 03:04:54 localhost systemd[1]: libpod-conmon-484a5683cd2ad9a0085017db5dab50c547949453626e5a0e2ce53a94b123e904.scope: Deactivated successfully. 
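
The PODMAN-CONTAINER-DEBUG entries in this section (container-puppet-crond above, container-puppet-metrics_qdr below) show tripleo_container_manage expanding each container's config_data dict into a podman run command line: 'environment' becomes --env pairs, 'volumes' becomes --volume flags, 'net' becomes --network, and config_id/container_name/managed_by are attached as labels. A minimal sketch of that expansion, as a hypothetical helper rather than the module's actual code:

    # Hypothetical sketch: expand a TripleO-style config_data dict into
    # "podman run" arguments, mirroring the PODMAN-CONTAINER-DEBUG lines.
    # Not the real tripleo_container_manage implementation.
    def podman_run_args(name, cfg):
        args = ["podman", "run", "--name", name,
                "--conmon-pidfile", "/run/%s.pid" % name,
                "--detach=%s" % cfg.get("detach", False),
                "--entrypoint", cfg["entrypoint"]]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", "%s=%s" % (key, val)]
        for net in cfg.get("net", []):
            args += ["--network", net]
        for opt in cfg.get("security_opt", []):
            args += ["--security-opt", opt]
        args += ["--user", str(cfg.get("user", 0))]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        return args + [cfg["image"]]

Applied to the crond config_data above, this reproduces the logged flags in the same order; multi-line values such as STEP_CONFIG come out with real newlines, which syslog then renders as #012.
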
Dec 5 03:04:54 localhost python3[50939]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546419 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}dee3f10cb1ff461ac3f1e743a5ef3f06993398c6c829895de1dae7f242a64b39' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: 
/Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34' Dec 5 03:04:54 localhost puppet-user[51309]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51309]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set. Dec 5 03:04:54 localhost puppet-user[51309]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. Use the same parameter in nova::glance Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62' Dec 5 03:04:54 localhost puppet-user[51293]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed Dec 5 03:04:54 localhost puppet-user[51293]: Notice: Applied catalog in 0.27 seconds Dec 5 03:04:54 localhost puppet-user[51293]: Application: Dec 5 03:04:54 localhost puppet-user[51293]: Initial environment: production Dec 5 03:04:54 
localhost puppet-user[51293]: Converged environment: production Dec 5 03:04:54 localhost puppet-user[51293]: Run mode: user Dec 5 03:04:54 localhost puppet-user[51293]: Changes: Dec 5 03:04:54 localhost puppet-user[51293]: Total: 43 Dec 5 03:04:54 localhost puppet-user[51293]: Events: Dec 5 03:04:54 localhost puppet-user[51293]: Success: 43 Dec 5 03:04:54 localhost puppet-user[51293]: Total: 43 Dec 5 03:04:54 localhost puppet-user[51293]: Resources: Dec 5 03:04:54 localhost puppet-user[51293]: Skipped: 14 Dec 5 03:04:54 localhost puppet-user[51293]: Changed: 38 Dec 5 03:04:54 localhost puppet-user[51293]: Out of sync: 38 Dec 5 03:04:54 localhost puppet-user[51293]: Total: 82 Dec 5 03:04:54 localhost puppet-user[51293]: Time: Dec 5 03:04:54 localhost puppet-user[51293]: Concat fragment: 0.00 Dec 5 03:04:54 localhost puppet-user[51293]: Concat file: 0.00 Dec 5 03:04:54 localhost puppet-user[51293]: File: 0.10 Dec 5 03:04:54 localhost puppet-user[51293]: Transaction evaluation: 0.27 Dec 5 03:04:54 localhost puppet-user[51293]: Catalog application: 0.27 Dec 5 03:04:54 localhost puppet-user[51293]: Config retrieval: 0.46 Dec 5 03:04:54 localhost puppet-user[51293]: Last run: 1764921894 Dec 5 03:04:54 localhost puppet-user[51293]: Total: 0.27 Dec 5 03:04:54 localhost puppet-user[51293]: Version: Dec 5 03:04:54 localhost puppet-user[51293]: Config: 1764921893 Dec 5 03:04:54 localhost puppet-user[51293]: Puppet: 7.10.0 Dec 5 03:04:54 localhost puppet-user[51276]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully Dec 5 03:04:54 localhost puppet-user[51276]: Notice: Applied catalog in 0.49 seconds Dec 5 03:04:54 localhost puppet-user[51276]: Application: Dec 5 03:04:54 localhost puppet-user[51276]: Initial environment: production Dec 5 03:04:54 localhost puppet-user[51276]: Converged environment: production Dec 5 03:04:54 localhost puppet-user[51276]: Run mode: user Dec 5 03:04:54 localhost puppet-user[51276]: Changes: Dec 5 03:04:54 localhost puppet-user[51276]: Total: 4 Dec 5 03:04:54 localhost puppet-user[51276]: Events: Dec 5 03:04:54 localhost puppet-user[51276]: Success: 4 Dec 5 03:04:54 localhost puppet-user[51276]: Total: 4 Dec 5 03:04:54 localhost puppet-user[51276]: Resources: Dec 5 03:04:54 localhost puppet-user[51276]: Changed: 4 Dec 5 03:04:54 localhost puppet-user[51276]: Out of sync: 4 Dec 5 03:04:54 localhost puppet-user[51276]: Skipped: 8 Dec 5 03:04:54 localhost puppet-user[51276]: Total: 13 Dec 5 03:04:54 localhost puppet-user[51276]: Time: Dec 5 03:04:54 localhost puppet-user[51276]: File: 0.00 Dec 5 03:04:54 localhost puppet-user[51276]: Exec: 0.05 Dec 5 03:04:54 localhost puppet-user[51276]: Config retrieval: 0.18 Dec 5 03:04:54 localhost puppet-user[51276]: Augeas: 0.42 Dec 5 03:04:54 localhost puppet-user[51276]: Transaction evaluation: 0.48 Dec 5 03:04:54 localhost puppet-user[51276]: Catalog application: 0.49 Dec 5 03:04:54 localhost puppet-user[51276]: Last run: 1764921894 Dec 5 03:04:54 localhost puppet-user[51276]: Total: 0.49 Dec 5 03:04:54 localhost puppet-user[51276]: Version: Dec 5 03:04:54 localhost puppet-user[51276]: Config: 1764921893 Dec 5 03:04:54 localhost puppet-user[51276]: Puppet: 7.10.0 Dec 5 03:04:54 localhost systemd[1]: var-lib-containers-storage-overlay-ba6e2221a6fb4573209f1900bac1c26a012fe88f558913e381fcbd7377aeb8f8-merged.mount: Deactivated successfully. 
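
The two run reports above (PIDs 51293 and 51276) arrive as one "Key: value" syslog entry per report line. A rough parser for pulling them back into per-PID dicts, assuming journald-style input with one entry per line (this dump concatenates several entries per physical line, so it would need to be split first); section headers like "Changes:" carry no value and are dropped:

    import re

    # Keys are taken verbatim from the log ("Total", "Config retrieval",
    # ...); values stay strings because some ("Puppet: 7.10.0") are not
    # numbers.
    REPORT = re.compile(r"puppet-user\[(\d+)\]: ([A-Z][\w ]+): ([\d][\d.]*)\s*$")

    def run_reports(lines):
        reports = {}
        for line in lines:
            m = REPORT.search(line)
            if m:
                pid, key, val = m.groups()
                reports.setdefault(pid, {})[key] = val
        return reports

For PID 51293 this yields Changed: 38 of 82 resources with 14 skipped, matching the totals logged above.
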
Dec 5 03:04:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17da04a9d84a513503d4ac2ac6d30fe00c987b55233e8745e8522f539b5367a0-userdata-shm.mount: Deactivated successfully. Dec 5 03:04:54 localhost systemd[1]: var-lib-containers-storage-overlay-1a5aafadc3f6652404db23d10ded19788ad6c73ce23f65e7215c7f5ecc619d5e-merged.mount: Deactivated successfully. Dec 5 03:04:54 localhost puppet-user[51309]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used. Dec 5 03:04:54 localhost podman[51916]: 2025-12-05 08:04:54.600142134 +0000 UTC m=+0.063913493 container create 2d89ecb70a79e1e3b4d9ca90e11527dfbf963e513cc03c0a7989a01790c6f047 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, build-date=2025-11-18T22:49:49Z, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, version=17.1.12) Dec 5 03:04:54 localhost systemd[1]: Started libpod-conmon-2d89ecb70a79e1e3b4d9ca90e11527dfbf963e513cc03c0a7989a01790c6f047.scope. 
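
The mount unit names above encode "/" as "-" and escape a literal "-" inside a path component as \x2d, which is why the container-ID directories appear as "overlay\x2dcontainers-...". A minimal decoder for the \xNN escapes seen here (the canonical tool is `systemd-escape --unescape --path`, which also handles cases this sketch ignores):

    # Minimal sketch: turn a systemd mount unit name back into a path.
    def unit_to_path(unit):
        name = unit.removesuffix(".mount")
        decode = lambda part: bytes(part, "ascii").decode("unicode_escape")
        return "/" + "/".join(decode(p) for p in name.split("-"))

    # e.g. the shm unit above decodes to
    # /var/lib/containers/storage/overlay-containers/<container id>/userdata/shm
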
Dec 5 03:04:54 localhost podman[51922]: 2025-12-05 08:04:54.628299538 +0000 UTC m=+0.081662987 container create 63802f2e87f88408829eb9d55693f3f24ef3d23f7d793bdd07085f9603be4b38 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, container_name=container-puppet-ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, config_id=tripleo_puppet_step1) Dec 5 03:04:54 localhost systemd[1]: Started libcrun container. Dec 5 03:04:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01dc81d9dd56e2b2370efd266bd186bb72b8d34f6e6726003daf4c1cc49fcca9/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 5 03:04:54 localhost systemd[1]: libpod-596f51604a711d7849bd2da48c558a141b30a912a8d8b5f495e3e5b0ff3d1a5b.scope: Deactivated successfully. 
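
The kernel's "supports timestamps until 2038 (0x7fffffff)" notice on these remounts refers to the classic signed 32-bit time_t limit: without the XFS bigtime feature, inode timestamps stop being representable at epoch second 0x7fffffff. Converting that value:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit epoch second, the limit
    # the kernel reports for non-bigtime XFS inodes above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
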
Dec 5 03:04:54 localhost systemd[1]: libpod-596f51604a711d7849bd2da48c558a141b30a912a8d8b5f495e3e5b0ff3d1a5b.scope: Consumed 2.700s CPU time. Dec 5 03:04:54 localhost podman[51916]: 2025-12-05 08:04:54.6433444 +0000 UTC m=+0.107115769 container init 2d89ecb70a79e1e3b4d9ca90e11527dfbf963e513cc03c0a7989a01790c6f047 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, version=17.1.12, distribution-scope=public, container_name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:49Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:04:54 localhost podman[51159]: 2025-12-05 08:04:54.644695802 +0000 UTC m=+4.211896718 container died 596f51604a711d7849bd2da48c558a141b30a912a8d8b5f495e3e5b0ff3d1a5b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack 
TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, container_name=container-puppet-iscsid, io.buildah.version=1.41.4) Dec 5 03:04:54 localhost systemd[1]: Started libpod-conmon-63802f2e87f88408829eb9d55693f3f24ef3d23f7d793bdd07085f9603be4b38.scope. Dec 5 03:04:54 localhost systemd[1]: Started libcrun container. Dec 5 03:04:54 localhost podman[51916]: 2025-12-05 08:04:54.568621227 +0000 UTC m=+0.032392616 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 5 03:04:54 localhost systemd[1]: libpod-087c70dc1c65d98bcc92318106106f6f72101e85ca4ebccfee69c2c4802c3f78.scope: Deactivated successfully. Dec 5 03:04:54 localhost systemd[1]: libpod-087c70dc1c65d98bcc92318106106f6f72101e85ca4ebccfee69c2c4802c3f78.scope: Consumed 2.704s CPU time. 
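
Each podman entry above pairs a wall-clock timestamp with a monotonic offset since that podman process started (m=+4.357... for the long-running attach that observed "container died", m=+0.03... for the short-lived cleanup processes), followed by the event name (create, init, start, attach, died, cleanup) and the full container ID. A rough extractor for that lifecycle trail; the layout is inferred from this log, not from a documented podman format, and "image pull" events are deliberately skipped:

    import re

    EVENT = re.compile(
        r"podman\[(\d+)\]: .*? m=\+([\d.]+) container (\w+) ([0-9a-f]{64})")

    def lifecycle(lines):
        # finditer copes with several events concatenated on one line,
        # as in this dump; IDs are shortened to the usual 12 characters.
        for line in lines:
            for m in EVENT.finditer(line):
                pid, offset, event, cid = m.groups()
                yield pid, float(offset), event, cid[:12]
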
Dec 5 03:04:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d026a35085ad52897bc3374aed756aa80ec1ba6263c403478a100f8c0e75ebbb/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff) Dec 5 03:04:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d026a35085ad52897bc3374aed756aa80ec1ba6263c403478a100f8c0e75ebbb/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 5 03:04:54 localhost podman[51922]: 2025-12-05 08:04:54.57980566 +0000 UTC m=+0.033169109 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 5 03:04:54 localhost podman[51922]: 2025-12-05 08:04:54.695349176 +0000 UTC m=+0.148712635 container init 63802f2e87f88408829eb9d55693f3f24ef3d23f7d793bdd07085f9603be4b38 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., container_name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-18T23:34:05Z) Dec 5 03:04:54 localhost podman[51916]: 2025-12-05 08:04:54.702158365 +0000 UTC m=+0.165929714 container start 2d89ecb70a79e1e3b4d9ca90e11527dfbf963e513cc03c0a7989a01790c6f047 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, batch=17.1_20251118.1, name=rhosp17/openstack-rsyslog, tcib_managed=true, build-date=2025-11-18T22:49:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, container_name=container-puppet-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044) Dec 5 03:04:54 localhost podman[51916]: 2025-12-05 08:04:54.702599728 +0000 UTC m=+0.166371127 container attach 2d89ecb70a79e1e3b4d9ca90e11527dfbf963e513cc03c0a7989a01790c6f047 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:49Z, batch=17.1_20251118.1, tcib_managed=true, container_name=container-puppet-rsyslog, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rsyslog) Dec 5 03:04:54 localhost podman[51922]: 2025-12-05 08:04:54.704632841 +0000 UTC m=+0.157996290 container start 63802f2e87f88408829eb9d55693f3f24ef3d23f7d793bdd07085f9603be4b38 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 
'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller) Dec 5 03:04:54 localhost podman[51922]: 2025-12-05 08:04:54.70494832 +0000 UTC m=+0.158311809 container attach 63802f2e87f88408829eb9d55693f3f24ef3d23f7d793bdd07085f9603be4b38 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=container-puppet-ovn_controller, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:04:54 localhost podman[51994]: 2025-12-05 08:04:54.765183349 +0000 UTC m=+0.109616025 container cleanup 596f51604a711d7849bd2da48c558a141b30a912a8d8b5f495e3e5b0ff3d1a5b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, distribution-scope=public, build-date=2025-11-18T23:44:13Z, container_name=container-puppet-iscsid, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Dec 5 03:04:54 localhost systemd[1]: libpod-conmon-596f51604a711d7849bd2da48c558a141b30a912a8d8b5f495e3e5b0ff3d1a5b.scope: Deactivated successfully. Dec 5 03:04:54 localhost podman[51164]: 2025-12-05 08:04:54.795676385 +0000 UTC m=+4.357821485 container died 087c70dc1c65d98bcc92318106106f6f72101e85ca4ebccfee69c2c4802c3f78 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=container-puppet-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, managed_by=tripleo_ansible) Dec 5 
03:04:54 localhost python3[50939]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546419 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 5 03:04:54 localhost podman[52027]: 2025-12-05 08:04:54.832730092 +0000 UTC m=+0.124222904 container cleanup 087c70dc1c65d98bcc92318106106f6f72101e85ca4ebccfee69c2c4802c3f78 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': 
'/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, build-date=2025-11-18T22:51:28Z, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, com.redhat.component=openstack-collectd-container, distribution-scope=public, vcs-type=git, container_name=container-puppet-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:04:54 localhost systemd[1]: libpod-conmon-087c70dc1c65d98bcc92318106106f6f72101e85ca4ebccfee69c2c4802c3f78.scope: Deactivated successfully. 
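
In the ansible PODMAN-CONTAINER-DEBUG entries (such as the collectd invocation below), STEP_CONFIG is logged as "include ::tripleo::packages#012include ..." because rsyslog escapes control characters as #NNN octal sequences; #012 is octal 012, i.e. "\n". Decoding it back, assuming #012 is the only escape present (as in this log):

    # Undo rsyslog's octal escaping of newlines in the logged value.
    raw = ("include ::tripleo::packages"
           "#012include tripleo::profile::base::metrics::collectd")
    step_config = raw.replace("#012", "\n")
    print(step_config)
    # include ::tripleo::packages
    # include tripleo::profile::base::metrics::collectd
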
Dec 5 03:04:54 localhost python3[50939]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546419 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 5 03:04:55 localhost puppet-user[51390]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Dec 5 03:04:55 localhost puppet-user[51390]: (file: /etc/puppet/hiera.yaml) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Undefined variable '::deploy_config_name'; Dec 5 03:04:55 localhost puppet-user[51390]: (file & line not available) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 5 03:04:55 localhost puppet-user[51390]: (file & line not available) Dec 5 03:04:55 localhost puppet-user[51309]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 1.29 seconds Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36) Dec 5 03:04:55 localhost puppet-user[51390]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26) Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}0afade6653aba11fc80cb8b8315af6d8dc0b0370c920f3d319c7bc1ad3fe1536' Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.41 seconds Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe' Dec 5 03:04:55 localhost puppet-user[51309]: Warning: Empty environment setting 'TLS_PASSWORD' Dec 5 03:04:55 localhost puppet-user[51309]: (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182) Dec 5 03:04:55 localhost systemd[1]: var-lib-containers-storage-overlay-51da69f3c77ca864fe537c107c307e006f1c860b5abf612a97e250c0a083ad91-merged.mount: Deactivated successfully. Dec 5 03:04:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-596f51604a711d7849bd2da48c558a141b30a912a8d8b5f495e3e5b0ff3d1a5b-userdata-shm.mount: Deactivated successfully. Dec 5 03:04:55 localhost systemd[1]: var-lib-containers-storage-overlay-2db559d2ef90ada0da6d1db35eb32978dea07983229020726ad711f251fdf2a1-merged.mount: Deactivated successfully. Dec 5 03:04:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-087c70dc1c65d98bcc92318106106f6f72101e85ca4ebccfee69c2c4802c3f78-userdata-shm.mount: Deactivated successfully. 
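The long run of "Unknown variable" warnings above comes from ceilometer::cache and ceilometer::agent::service_credentials reading class parameters that were never assigned. Puppet 7 defaults to strict=warning, so an unknown variable is logged but the catalog still compiles and applies, which is why the "Compiled catalog" notices follow. A two-line sketch of that behavior in isolation (not TripleO code, just the setting at work):

    # Default (strict=warning): logs "Warning: Unknown variable" and continues.
    puppet apply -e 'notice($::not_set)'
    # The same expression under --strict_variables fails the compile instead:
    puppet apply --strict_variables -e 'notice($::not_set)'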
Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}9d3f91d01b5791cf3407841cca0c92f3da22221b7d210843d9ee124ba6e4fb6f' Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: 
/Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: 
created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created Dec 5 03:04:55 localhost puppet-user[51390]: Notice: Applied catalog in 0.41 seconds Dec 5 03:04:55 localhost puppet-user[51390]: Application: Dec 5 03:04:55 localhost puppet-user[51390]: Initial environment: production Dec 5 03:04:55 localhost puppet-user[51390]: Converged environment: production Dec 5 03:04:55 localhost puppet-user[51390]: Run mode: user Dec 5 03:04:55 localhost puppet-user[51390]: Changes: Dec 5 03:04:55 localhost puppet-user[51390]: Total: 31 Dec 5 03:04:55 localhost puppet-user[51390]: Events: Dec 5 03:04:55 localhost puppet-user[51390]: Success: 31 Dec 5 03:04:55 localhost puppet-user[51390]: Total: 31 Dec 5 03:04:55 localhost puppet-user[51390]: Resources: Dec 5 03:04:55 localhost puppet-user[51390]: Skipped: 22 Dec 5 03:04:55 localhost puppet-user[51390]: Changed: 31 Dec 5 03:04:55 localhost puppet-user[51390]: Out of sync: 31 Dec 5 03:04:55 localhost puppet-user[51390]: Total: 151 Dec 5 03:04:55 localhost puppet-user[51390]: Time: Dec 5 03:04:55 localhost puppet-user[51390]: Package: 0.02 Dec 5 03:04:55 localhost puppet-user[51390]: Ceilometer config: 0.33 Dec 5 03:04:55 localhost puppet-user[51390]: Transaction evaluation: 0.41 Dec 5 03:04:55 localhost puppet-user[51390]: Catalog application: 0.41 Dec 5 03:04:55 localhost puppet-user[51390]: Config retrieval: 0.48 Dec 5 03:04:55 localhost puppet-user[51390]: Last run: 1764921895 Dec 5 03:04:55 localhost puppet-user[51390]: Resources: 0.00 Dec 5 03:04:55 localhost puppet-user[51390]: Total: 0.42 Dec 5 03:04:55 localhost puppet-user[51390]: Version: Dec 5 03:04:55 localhost puppet-user[51390]: Config: 1764921895 Dec 5 03:04:55 localhost puppet-user[51390]: Puppet: 7.10.0 Dec 5 03:04:55 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: 
/Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created Dec 5 03:04:56 localhost systemd[1]: libpod-d4ce7191bf89afa6f91d15d75fa81321cadff02d542c31096763dda3994cf7d3.scope: Deactivated successfully. Dec 5 03:04:56 localhost systemd[1]: libpod-d4ce7191bf89afa6f91d15d75fa81321cadff02d542c31096763dda3994cf7d3.scope: Consumed 3.035s CPU time. 
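Each Nova_config notice above is one INI key written into nova.conf; the placement and cinder resources become the [placement] and [cinder] sections of the copy kept under the /var/lib/config-data mount. A read-back sketch, assuming the usual TripleO puppet-generated layout (only /var/lib/config-data itself appears in the log) and that crudini is available on the host:

    crudini --get \
        /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf \
        cinder catalog_info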
Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created Dec 5 03:04:56 localhost podman[52296]: 2025-12-05 08:04:56.406080949 +0000 UTC m=+0.045576140 container died d4ce7191bf89afa6f91d15d75fa81321cadff02d542c31096763dda3994cf7d3 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64, com.redhat.component=openstack-ceilometer-central-container, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., container_name=container-puppet-ceilometer, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:59Z, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-central, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central) Dec 5 03:04:56 localhost systemd[1]: tmp-crun.VgPi1N.mount: Deactivated successfully. 
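Note in the config_data above that PUPPET_TAGS lists ceilometer_config twice and STEP_CONFIG includes tripleo::profile::base::ceilometer::agent::polling twice; both duplications are harmless, since Puppet tags form a set and include is idempotent. Each container-puppet run also keeps its stdout on disk via the k8s-file log driver, at the path given by --log-opt in the replayed podman command below:

    # Path taken verbatim from the podman run line that follows:
    tail -n 50 /var/log/containers/stdouts/container-puppet-ceilometer.log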
Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created Dec 5 03:04:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4ce7191bf89afa6f91d15d75fa81321cadff02d542c31096763dda3994cf7d3-userdata-shm.mount: Deactivated successfully. Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created Dec 5 03:04:56 localhost systemd[1]: var-lib-containers-storage-overlay-bfa8c64aed3dd43632e0f6a8595077465fd2285f2415c20f2fb7f9b03f641646-merged.mount: Deactivated successfully. Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created Dec 5 03:04:56 localhost podman[52296]: 2025-12-05 08:04:56.475447518 +0000 UTC m=+0.114942669 container cleanup d4ce7191bf89afa6f91d15d75fa81321cadff02d542c31096763dda3994cf7d3 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, version=17.1.12, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, container_name=container-puppet-ceilometer, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-central-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:59Z, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, config_id=tripleo_puppet_step1, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., release=1761123044) Dec 5 03:04:56 localhost systemd[1]: libpod-conmon-d4ce7191bf89afa6f91d15d75fa81321cadff02d542c31096763dda3994cf7d3.scope: Deactivated successfully. Dec 5 03:04:56 localhost python3[50939]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546419 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume 
/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Dec 5 03:04:56 localhost puppet-user[52026]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 5 03:04:56 localhost puppet-user[52026]: (file: /etc/puppet/hiera.yaml) Dec 5 03:04:56 localhost puppet-user[52026]: Warning: Undefined variable '::deploy_config_name'; Dec 5 03:04:56 localhost puppet-user[52026]: (file & line not available) Dec 5 03:04:56 localhost puppet-user[52026]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 5 03:04:56 localhost puppet-user[52026]: (file & line not available) Dec 5 03:04:56 localhost puppet-user[52046]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 5 03:04:56 localhost puppet-user[52046]: (file: /etc/puppet/hiera.yaml) Dec 5 03:04:56 localhost puppet-user[52046]: Warning: Undefined variable '::deploy_config_name'; Dec 5 03:04:56 localhost puppet-user[52046]: (file & line not available) Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created Dec 5 03:04:56 localhost puppet-user[52046]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 5 03:04:56 localhost puppet-user[52046]: (file & line not available) Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created Dec 5 03:04:56 localhost puppet-user[52026]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.26 seconds Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created Dec 5 03:04:56 localhost puppet-user[52046]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.24 seconds Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created Dec 5 03:04:56 localhost puppet-user[52026]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2' Dec 5 03:04:56 localhost ovs-vsctl[52473]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642 Dec 5 03:04:56 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created Dec 5 03:04:56 localhost puppet-user[52026]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b' Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created Dec 5 03:04:56 localhost puppet-user[52026]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}8aca20c0ac3ecce2dbaf265930451280239ae466f316b7cea58b9161b2784cf8' Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created Dec 5 03:04:56 localhost ovs-vsctl[52475]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-encap-type=geneve Dec 5 03:04:56 localhost puppet-user[52026]: Notice: Applied catalog in 0.12 seconds Dec 5 03:04:56 localhost puppet-user[52026]: Application: Dec 5 03:04:56 localhost puppet-user[52026]: Initial environment: production Dec 5 03:04:56 localhost puppet-user[52026]: Converged environment: production Dec 5 03:04:56 localhost puppet-user[52026]: Run mode: user Dec 5 03:04:56 localhost puppet-user[52026]: Changes: Dec 5 03:04:56 localhost puppet-user[52026]: Total: 3 Dec 5 03:04:56 localhost puppet-user[52026]: Events: Dec 5 03:04:56 localhost puppet-user[52026]: Success: 3 Dec 5 03:04:56 localhost puppet-user[52026]: Total: 3 Dec 5 03:04:56 localhost puppet-user[52026]: Resources: Dec 5 03:04:56 localhost puppet-user[52026]: Skipped: 11 Dec 5 03:04:56 localhost puppet-user[52026]: Changed: 3 Dec 5 03:04:56 localhost puppet-user[52026]: Out of sync: 3 Dec 5 03:04:56 localhost puppet-user[52026]: Total: 25 Dec 5 03:04:56 localhost puppet-user[52026]: Time: Dec 5 03:04:56 localhost puppet-user[52026]: Concat file: 0.00 Dec 5 03:04:56 localhost puppet-user[52026]: Concat fragment: 0.00 Dec 5 03:04:56 localhost puppet-user[52026]: File: 0.02 Dec 5 03:04:56 localhost puppet-user[52026]: Transaction evaluation: 0.11 Dec 5 03:04:56 localhost puppet-user[52026]: Catalog application: 0.12 Dec 5 03:04:56 localhost puppet-user[52026]: Config retrieval: 0.31 Dec 5 03:04:56 localhost puppet-user[52026]: Last run: 1764921896 Dec 5 03:04:56 localhost puppet-user[52026]: Total: 0.12 Dec 5 03:04:56 localhost puppet-user[52026]: Version: Dec 5 03:04:56 localhost puppet-user[52026]: Config: 1764921896 Dec 5 03:04:56 localhost puppet-user[52026]: Puppet: 7.10.0 Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created Dec 5 03:04:56 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created Dec 5 03:04:56 localhost ovs-vsctl[52477]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.106 Dec 5 03:04:56 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created Dec 5 03:04:56 localhost ovs-vsctl[52480]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005546419.localdomain Dec 5 03:04:56 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005546419.novalocal' to 'np0005546419.localdomain' Dec 5 03:04:56 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created Dec 5 03:04:57 localhost ovs-vsctl[52482]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-bridge=br-int Dec 5 03:04:57 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created Dec 5 03:04:57 localhost ovs-vsctl[52490]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000 Dec 5 03:04:57 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created Dec 5 03:04:57 localhost ovs-vsctl[52492]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60 Dec 5 03:04:57 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created Dec 5 03:04:57 localhost ovs-vsctl[52497]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true Dec 5 03:04:57 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created Dec 5 03:04:57 localhost ovs-vsctl[52499]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000 Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created Dec 5 03:04:57 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created Dec 5 03:04:57 localhost ovs-vsctl[52501]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0 Dec 5 03:04:57 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created Dec 5 03:04:57 localhost ovs-vsctl[52507]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:15:ea:5f Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created Dec 5 03:04:57 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created Dec 5 03:04:57 localhost ovs-vsctl[52516]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex Dec 5 03:04:57 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created Dec 5 03:04:57 localhost ovs-vsctl[52521]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false Dec 5 03:04:57 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created Dec 5 03:04:57 localhost ovs-vsctl[52523]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0 Dec 5 03:04:57 localhost puppet-user[52046]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created Dec 5 03:04:57 localhost systemd[1]: libpod-2d89ecb70a79e1e3b4d9ca90e11527dfbf963e513cc03c0a7989a01790c6f047.scope: Deactivated successfully. Dec 5 03:04:57 localhost systemd[1]: libpod-2d89ecb70a79e1e3b4d9ca90e11527dfbf963e513cc03c0a7989a01790c6f047.scope: Consumed 2.418s CPU time. 
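The interleaved ovs-vsctl calls above are the Vs_config resources from the ovn_controller puppet run; each one sets a key in the external_ids map of the Open_vSwitch table, which is where ovn-controller finds its southbound endpoints (ovn-remote), tunnel settings (geneve over 172.19.0.106), and the datacentre:br-ex bridge mapping. The resulting state can be read back with the same tool:

    # Dump the whole map, or one key at a time:
    ovs-vsctl get Open_vSwitch . external_ids
    ovs-vsctl get Open_vSwitch . external_ids:ovn-bridge-mappings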
Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created Dec 5 03:04:57 localhost puppet-user[52046]: Notice: Applied catalog in 0.40 seconds Dec 5 03:04:57 localhost puppet-user[52046]: Application: Dec 5 03:04:57 localhost puppet-user[52046]: Initial environment: production Dec 5 03:04:57 localhost puppet-user[52046]: Converged environment: production Dec 5 03:04:57 localhost puppet-user[52046]: Run mode: user Dec 5 03:04:57 localhost puppet-user[52046]: Changes: Dec 5 03:04:57 localhost puppet-user[52046]: Total: 14 Dec 5 03:04:57 localhost puppet-user[52046]: Events: Dec 5 03:04:57 localhost puppet-user[52046]: Success: 14 Dec 5 03:04:57 localhost puppet-user[52046]: Total: 14 Dec 5 03:04:57 localhost puppet-user[52046]: Resources: Dec 5 03:04:57 localhost puppet-user[52046]: Skipped: 12 Dec 5 03:04:57 localhost podman[51916]: 2025-12-05 08:04:57.277421081 +0000 UTC m=+2.741192450 container died 2d89ecb70a79e1e3b4d9ca90e11527dfbf963e513cc03c0a7989a01790c6f047 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, distribution-scope=public, name=rhosp17/openstack-rsyslog, container_name=container-puppet-rsyslog, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, 
architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog) Dec 5 03:04:57 localhost puppet-user[52046]: Changed: 14 Dec 5 03:04:57 localhost puppet-user[52046]: Out of sync: 14 Dec 5 03:04:57 localhost puppet-user[52046]: Total: 29 Dec 5 03:04:57 localhost puppet-user[52046]: Time: Dec 5 03:04:57 localhost puppet-user[52046]: Exec: 0.02 Dec 5 03:04:57 localhost puppet-user[52046]: Config retrieval: 0.27 Dec 5 03:04:57 localhost puppet-user[52046]: Vs config: 0.33 Dec 5 03:04:57 localhost puppet-user[52046]: Transaction evaluation: 0.39 Dec 5 03:04:57 localhost puppet-user[52046]: Catalog application: 0.40 Dec 5 03:04:57 localhost puppet-user[52046]: Last run: 1764921897 Dec 5 03:04:57 localhost puppet-user[52046]: Total: 0.40 Dec 5 03:04:57 localhost puppet-user[52046]: Version: Dec 5 03:04:57 localhost puppet-user[52046]: Config: 1764921896 Dec 5 03:04:57 localhost puppet-user[52046]: Puppet: 7.10.0 Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}c3826591c6773695ff03c6589172449facdf16883c7a86fceee478fd480e36dd' Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created Dec 5 03:04:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d89ecb70a79e1e3b4d9ca90e11527dfbf963e513cc03c0a7989a01790c6f047-userdata-shm.mount: Deactivated successfully. Dec 5 03:04:57 localhost systemd[1]: var-lib-containers-storage-overlay-01dc81d9dd56e2b2370efd266bd186bb72b8d34f6e6726003daf4c1cc49fcca9-merged.mount: Deactivated successfully. 
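The /etc/nova/secret.xml defined above is the libvirt secret definition matching libvirt/rbd_secret_uuid, which lets QEMU attach RBD-backed disks without embedding the Ceph key in nova.conf. Registering such a file with libvirt generally takes two virsh calls; a sketch with placeholder values (TripleO drives this from inside the nova_libvirt container rather than literally as shown):

    virsh secret-define --file /etc/nova/secret.xml
    # <uuid> and <base64-key> are placeholders, not values from this log:
    virsh secret-set-value --secret <uuid> --base64 <base64-key>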
Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created Dec 5 03:04:57 localhost podman[52539]: 2025-12-05 08:04:57.505404618 +0000 UTC m=+0.217835806 container cleanup 2d89ecb70a79e1e3b4d9ca90e11527dfbf963e513cc03c0a7989a01790c6f047 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=container-puppet-rsyslog, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, 
maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog) Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created Dec 5 03:04:57 localhost systemd[1]: libpod-63802f2e87f88408829eb9d55693f3f24ef3d23f7d793bdd07085f9603be4b38.scope: Deactivated successfully. Dec 5 03:04:57 localhost systemd[1]: libpod-63802f2e87f88408829eb9d55693f3f24ef3d23f7d793bdd07085f9603be4b38.scope: Consumed 2.796s CPU time. 
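The Nova::Compute::Libvirt notices map one-to-one onto keys in nova.conf's [libvirt] section (virt_type, cpu_mode, rx_queue_size, swtpm_enabled, and so on). To eyeball the section this run produced, a small awk sketch against the generated tree (same assumed path as the crudini example earlier):

    awk '/^\[libvirt\]/{f=1; next} /^\[/{f=0} f' \
        /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf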
Dec 5 03:04:57 localhost podman[51922]: 2025-12-05 08:04:57.678098689 +0000 UTC m=+3.131462148 container died 63802f2e87f88408829eb9d55693f3f24ef3d23f7d793bdd07085f9603be4b38 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, io.buildah.version=1.41.4) Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: 
/Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created Dec 5 03:04:57 localhost 
puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created Dec 5 03:04:57 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created Dec 5 03:04:58 localhost systemd[1]: libpod-conmon-2d89ecb70a79e1e3b4d9ca90e11527dfbf963e513cc03c0a7989a01790c6f047.scope: Deactivated successfully. 
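
The Virtnodedevd_config/Virtproxyd_config/Virtqemud_config/Virtsecretd_config/Virtstoraged_config notices above set socket ownership and auth options for libvirt's modular daemons. Those files use libvirt's flat `key = "value"` syntax rather than INI sections, so configparser does not apply; below is a small parser sketch for that syntax. The sample values are typical TripleO defaults shown for illustration only, since the log records the resource names but not the values written.

```python
import re

# libvirt daemon config (virtqemud.conf, virtproxyd.conf, ...) is a flat list
# of 'key = "value"' assignments with optional '#' comments -- not INI.
ASSIGN = re.compile(r'^\s*(\w+)\s*=\s*"?([^"#\n]*?)"?\s*(?:#.*)?$', re.M)

sample = '''
unix_sock_group = "libvirt"   # matches Virtqemud_config[unix_sock_group]
auth_unix_ro = "none"
auth_unix_rw = "none"
unix_sock_rw_perms = "0770"
'''
print(dict(ASSIGN.findall(sample)))
# {'unix_sock_group': 'libvirt', 'auth_unix_ro': 'none', ...}
```
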
Dec 5 03:04:58 localhost python3[50939]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546419 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 5 03:04:58 localhost podman[52142]: 2025-12-05 08:04:55.006525106 +0000 UTC m=+0.029335522 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Dec 5 03:04:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63802f2e87f88408829eb9d55693f3f24ef3d23f7d793bdd07085f9603be4b38-userdata-shm.mount: Deactivated successfully. 
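
In the podman run command above, the multi-line STEP_CONFIG Puppet snippet appears as `include ::tripleo::packages#012include ...` because syslog escapes embedded control characters as three-digit octal sequences; `#012` (octal 12 = LF) is the newline. A decoder:

```python
import re

# syslog renders control characters as '#NNN' three-digit octal escapes;
# '#012' (octal 12 = 10 = "\n") restores the line breaks in STEP_CONFIG.
def unescape_syslog(text: str) -> str:
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), text)

step_config = ("include ::tripleo::packages#012"
               "include tripleo::profile::base::logging::rsyslog")
print(unescape_syslog(step_config))
# include ::tripleo::packages
# include tripleo::profile::base::logging::rsyslog
```
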
Dec 5 03:04:58 localhost systemd[1]: var-lib-containers-storage-overlay-d026a35085ad52897bc3374aed756aa80ec1ba6263c403478a100f8c0e75ebbb-merged.mount: Deactivated successfully. Dec 5 03:04:58 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully Dec 5 03:04:58 localhost podman[52599]: 2025-12-05 08:04:58.767637517 +0000 UTC m=+1.080097850 container cleanup 63802f2e87f88408829eb9d55693f3f24ef3d23f7d793bdd07085f9603be4b38 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, container_name=container-puppet-ovn_controller, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, vcs-type=git, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 03:04:58 localhost systemd[1]: libpod-conmon-63802f2e87f88408829eb9d55693f3f24ef3d23f7d793bdd07085f9603be4b38.scope: Deactivated successfully. 
Dec 5 03:04:58 localhost python3[50939]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546419 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 5 03:04:58 localhost podman[52682]: 2025-12-05 08:04:58.933823328 +0000 UTC m=+0.086214058 container create 56ae3f9539021e306e64806c4488264116ec628f828dd0de87905d64af699e63 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-server-container, build-date=2025-11-19T00:23:27Z, summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_puppet_step1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-server, architecture=x86_64, distribution-scope=public, container_name=container-puppet-neutron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team) Dec 5 03:04:58 localhost systemd[1]: Started libpod-conmon-56ae3f9539021e306e64806c4488264116ec628f828dd0de87905d64af699e63.scope. Dec 5 03:04:58 localhost systemd[1]: Started libcrun container. 
Dec 5 03:04:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5183feb33da2523290cc6db0e737d1f895bfa542b761e9867256814f433512d/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 5 03:04:58 localhost podman[52682]: 2025-12-05 08:04:58.886364761 +0000 UTC m=+0.038755581 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Dec 5 03:04:58 localhost podman[52682]: 2025-12-05 08:04:58.996397738 +0000 UTC m=+0.148788528 container init 56ae3f9539021e306e64806c4488264116ec628f828dd0de87905d64af699e63 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_puppet_step1, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-server, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-19T00:23:27Z, summary=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-server-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Dec 5 03:04:59 localhost podman[52682]: 2025-12-05 08:04:59.008646973 +0000 UTC m=+0.161037713 container start 
56ae3f9539021e306e64806c4488264116ec628f828dd0de87905d64af699e63 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1761123044, config_id=tripleo_puppet_step1, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:23:27Z, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-server-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=container-puppet-neutron, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public) Dec 5 03:04:59 localhost podman[52682]: 2025-12-05 08:04:59.008887442 +0000 UTC m=+0.161278182 container attach 56ae3f9539021e306e64806c4488264116ec628f828dd0de87905d64af699e63 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, build-date=2025-11-19T00:23:27Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-neutron-server, container_name=container-puppet-neutron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, 
architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-server-container, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Dec 5 03:04:59 localhost 
puppet-user[51309]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created Dec 5 03:04:59 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created Dec 5 03:05:00 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created Dec 5 03:05:00 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created Dec 5 03:05:00 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created Dec 5 03:05:00 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created Dec 5 03:05:00 localhost puppet-user[51309]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created Dec 5 03:05:00 localhost puppet-user[51309]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}3ccd56cc76ec60fa08fd698d282c9c89b1e8c485a00f47d57569ed8f6f8a16e4' Dec 5 03:05:00 localhost puppet-user[51309]: Notice: Applied catalog in 4.76 seconds Dec 5 03:05:00 localhost puppet-user[51309]: Application: Dec 5 03:05:00 localhost puppet-user[51309]: Initial environment: production Dec 5 03:05:00 localhost puppet-user[51309]: Converged environment: production Dec 5 03:05:00 localhost puppet-user[51309]: Run mode: user Dec 5 03:05:00 localhost puppet-user[51309]: Changes: Dec 5 03:05:00 localhost puppet-user[51309]: Total: 183 Dec 5 03:05:00 localhost puppet-user[51309]: Events: Dec 5 
03:05:00 localhost puppet-user[51309]: Success: 183 Dec 5 03:05:00 localhost puppet-user[51309]: Total: 183 Dec 5 03:05:00 localhost puppet-user[51309]: Resources: Dec 5 03:05:00 localhost puppet-user[51309]: Changed: 183 Dec 5 03:05:00 localhost puppet-user[51309]: Out of sync: 183 Dec 5 03:05:00 localhost puppet-user[51309]: Skipped: 57 Dec 5 03:05:00 localhost puppet-user[51309]: Total: 487 Dec 5 03:05:00 localhost puppet-user[51309]: Time: Dec 5 03:05:00 localhost puppet-user[51309]: Concat file: 0.00 Dec 5 03:05:00 localhost puppet-user[51309]: Concat fragment: 0.00 Dec 5 03:05:00 localhost puppet-user[51309]: Anchor: 0.00 Dec 5 03:05:00 localhost puppet-user[51309]: File line: 0.00 Dec 5 03:05:00 localhost puppet-user[51309]: Virtlogd config: 0.00 Dec 5 03:05:00 localhost puppet-user[51309]: Virtstoraged config: 0.01 Dec 5 03:05:00 localhost puppet-user[51309]: Virtqemud config: 0.01 Dec 5 03:05:00 localhost puppet-user[51309]: Virtsecretd config: 0.02 Dec 5 03:05:00 localhost puppet-user[51309]: Package: 0.02 Dec 5 03:05:00 localhost puppet-user[51309]: Exec: 0.02 Dec 5 03:05:00 localhost puppet-user[51309]: Virtproxyd config: 0.03 Dec 5 03:05:00 localhost puppet-user[51309]: Virtnodedevd config: 0.03 Dec 5 03:05:00 localhost puppet-user[51309]: File: 0.03 Dec 5 03:05:00 localhost puppet-user[51309]: Augeas: 1.20 Dec 5 03:05:00 localhost puppet-user[51309]: Config retrieval: 1.56 Dec 5 03:05:00 localhost puppet-user[51309]: Last run: 1764921900 Dec 5 03:05:00 localhost puppet-user[51309]: Nova config: 3.17 Dec 5 03:05:00 localhost puppet-user[51309]: Transaction evaluation: 4.74 Dec 5 03:05:00 localhost puppet-user[51309]: Catalog application: 4.76 Dec 5 03:05:00 localhost puppet-user[51309]: Resources: 0.00 Dec 5 03:05:00 localhost puppet-user[51309]: Total: 4.76 Dec 5 03:05:00 localhost puppet-user[51309]: Version: Dec 5 03:05:00 localhost puppet-user[51309]: Config: 1764921893 Dec 5 03:05:00 localhost puppet-user[51309]: Puppet: 7.10.0 Dec 5 03:05:00 localhost puppet-user[52729]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass Dec 5 03:05:01 localhost systemd[1]: libpod-6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0.scope: Deactivated successfully. Dec 5 03:05:01 localhost systemd[1]: libpod-6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0.scope: Consumed 8.777s CPU time. 
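
The run that just finished (puppet-user[51309], the nova_libvirt container) closes with Puppet's standard transaction report: 183 resources changed out of 487, 57 skipped, 4.76 s catalog application, of which 3.17 s went to Nova config resources. A sketch that pulls the headline `label: number` pairs back out of such journal records; the field names are exactly those in the report, the parsing itself is illustrative.

```python
import re

# Extract "<label>: <number>" pairs from puppet-user summary records.
SUMMARY = re.compile(r"puppet-user\[\d+\]: ([A-Z][\w ]*): ([\d.]+)\s*$")

lines = [
    "Dec  5 03:05:00 localhost puppet-user[51309]: Changed: 183",
    "Dec  5 03:05:00 localhost puppet-user[51309]: Skipped: 57",
    "Dec  5 03:05:00 localhost puppet-user[51309]: Catalog application: 4.76",
]
report = {m.group(1): float(m.group(2)) for l in lines if (m := SUMMARY.search(l))}
print(report)  # {'Changed': 183.0, 'Skipped': 57.0, 'Catalog application': 4.76}
```
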
Dec 5 03:05:01 localhost podman[51145]: 2025-12-05 08:05:01.043848485 +0000 UTC m=+10.619107608 container died 6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, container_name=container-puppet-nova_libvirt) Dec 5 03:05:01 localhost puppet-user[52729]: Warning: 
/etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 5 03:05:01 localhost puppet-user[52729]: (file: /etc/puppet/hiera.yaml) Dec 5 03:05:01 localhost puppet-user[52729]: Warning: Undefined variable '::deploy_config_name'; Dec 5 03:05:01 localhost puppet-user[52729]: (file & line not available) Dec 5 03:05:01 localhost systemd[1]: tmp-crun.EUf1va.mount: Deactivated successfully. Dec 5 03:05:01 localhost puppet-user[52729]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 5 03:05:01 localhost puppet-user[52729]: (file & line not available) Dec 5 03:05:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0-userdata-shm.mount: Deactivated successfully. Dec 5 03:05:01 localhost systemd[1]: var-lib-containers-storage-overlay-e83f227a3727657fa28ad038d3faa4acf2257033c4c4d3daa819bb5632ffbdd2-merged.mount: Deactivated successfully. Dec 5 03:05:01 localhost puppet-user[52729]: Warning: Unknown variable: 'dhcp_agents_per_net'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37) Dec 5 03:05:01 localhost podman[52841]: 2025-12-05 08:05:01.207161547 +0000 UTC m=+0.157100082 container cleanup 6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt) Dec 5 03:05:01 localhost systemd[1]: libpod-conmon-6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0.scope: Deactivated successfully. Dec 5 03:05:01 localhost python3[50939]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546419 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude 
tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:05:01 localhost puppet-user[52729]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.66 seconds Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: 
/Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Dec 5 03:05:01 localhost puppet-user[52729]: Notice: 
/Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Dec 5 03:05:02 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created Dec 5 03:05:02 localhost puppet-user[52729]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created Dec 5 03:05:02 localhost puppet-user[52729]: Notice: Applied catalog in 0.47 seconds Dec 5 03:05:02 localhost puppet-user[52729]: Application: Dec 5 03:05:02 localhost puppet-user[52729]: Initial environment: production Dec 5 03:05:02 localhost puppet-user[52729]: Converged environment: production Dec 5 03:05:02 localhost puppet-user[52729]: Run mode: user Dec 5 03:05:02 localhost puppet-user[52729]: Changes: Dec 5 03:05:02 localhost puppet-user[52729]: Total: 33 Dec 5 03:05:02 localhost puppet-user[52729]: Events: Dec 5 03:05:02 localhost puppet-user[52729]: Success: 33 Dec 5 03:05:02 localhost puppet-user[52729]: Total: 33 Dec 5 03:05:02 localhost puppet-user[52729]: Resources: Dec 5 03:05:02 localhost puppet-user[52729]: Skipped: 21 Dec 5 03:05:02 localhost puppet-user[52729]: Changed: 33 Dec 5 03:05:02 localhost puppet-user[52729]: Out of sync: 33 Dec 5 03:05:02 localhost puppet-user[52729]: Total: 155 Dec 5 03:05:02 localhost puppet-user[52729]: Time: Dec 5 03:05:02 localhost puppet-user[52729]: Resources: 0.00 Dec 5 03:05:02 localhost puppet-user[52729]: Ovn metadata agent config: 0.02 Dec 5 03:05:02 localhost puppet-user[52729]: Neutron config: 0.39 Dec 5 03:05:02 localhost puppet-user[52729]: Transaction evaluation: 0.46 Dec 5 03:05:02 localhost puppet-user[52729]: Catalog application: 0.47 Dec 5 03:05:02 localhost puppet-user[52729]: Config retrieval: 0.73 Dec 5 03:05:02 localhost puppet-user[52729]: Last run: 1764921902 Dec 5 03:05:02 localhost puppet-user[52729]: Total: 0.47 Dec 5 03:05:02 localhost puppet-user[52729]: Version: Dec 5 03:05:02 localhost puppet-user[52729]: Config: 1764921901 Dec 5 03:05:02 localhost puppet-user[52729]: Puppet: 7.10.0 Dec 5 03:05:02 localhost systemd[1]: libpod-56ae3f9539021e306e64806c4488264116ec628f828dd0de87905d64af699e63.scope: Deactivated successfully. Dec 5 03:05:02 localhost systemd[1]: libpod-56ae3f9539021e306e64806c4488264116ec628f828dd0de87905d64af699e63.scope: Consumed 3.626s CPU time. 
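
systemd logs per-scope CPU accounting as each container-puppet scope is torn down; in this section the ovn_controller run consumed 2.796 s, nova_libvirt 8.777 s, and neutron 3.626 s. A sketch attributing CPU seconds to container IDs from those records, using the three figures present above:

```python
import re

# Attribute CPU time to containers from systemd scope-teardown records.
CPU = re.compile(r"libpod-([0-9a-f]{64})\.scope: Consumed ([\d.]+)s CPU time")

records = [
    "libpod-63802f2e87f88408829eb9d55693f3f24ef3d23f7d793bdd07085f9603be4b38.scope: Consumed 2.796s CPU time.",
    "libpod-6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0.scope: Consumed 8.777s CPU time.",
    "libpod-56ae3f9539021e306e64806c4488264116ec628f828dd0de87905d64af699e63.scope: Consumed 3.626s CPU time.",
]
for rec in records:
    cid, secs = CPU.search(rec).groups()
    print(f"{cid[:12]}  {float(secs):6.3f}s")
```
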
Dec 5 03:05:02 localhost podman[52682]: 2025-12-05 08:05:02.771845509 +0000 UTC m=+3.924236329 container died 56ae3f9539021e306e64806c4488264116ec628f828dd0de87905d64af699e63 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, description=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:23:27Z, summary=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=container-puppet-neutron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, name=rhosp17/openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-server-container, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:05:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56ae3f9539021e306e64806c4488264116ec628f828dd0de87905d64af699e63-userdata-shm.mount: Deactivated successfully. Dec 5 03:05:02 localhost systemd[1]: var-lib-containers-storage-overlay-e5183feb33da2523290cc6db0e737d1f895bfa542b761e9867256814f433512d-merged.mount: Deactivated successfully. 
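
podman's lifecycle events bracket the neutron container's run: create at 08:04:58.933823328 (m=+0.086214058, earlier in this step) and died at 08:05:02.771845509 (m=+3.924236329) above, i.e. about 3.84 s of wall time, which the monotonic m=+ offsets confirm (3.924236 - 0.086214 = 3.838). A sketch of computing that from the event timestamps (nanosecond fractions truncated to microseconds for datetime):

```python
import re
from datetime import datetime

# Parse podman event timestamps like
# "2025-12-05 08:04:58.933823328 +0000 UTC m=+0.086214058":
# drop the Go monotonic "m=+..." tail and "UTC", truncate ns -> us.
def parse_podman_ts(raw: str) -> datetime:
    raw = raw.split(" m=")[0].replace(" UTC", "")
    raw = re.sub(r"(\.\d{6})\d+", r"\1", raw)
    return datetime.strptime(raw, "%Y-%m-%d %H:%M:%S.%f %z")

created = parse_podman_ts("2025-12-05 08:04:58.933823328 +0000 UTC m=+0.086214058")
died    = parse_podman_ts("2025-12-05 08:05:02.771845509 +0000 UTC m=+3.924236329")
print((died - created).total_seconds())  # ~3.838 s for container-puppet-neutron
```
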
Dec 5 03:05:02 localhost podman[52917]: 2025-12-05 08:05:02.880961328 +0000 UTC m=+0.102381234 container cleanup 56ae3f9539021e306e64806c4488264116ec628f828dd0de87905d64af699e63 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-neutron-server, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, com.redhat.component=openstack-neutron-server-container, vcs-type=git, container_name=container-puppet-neutron, managed_by=tripleo_ansible, build-date=2025-11-19T00:23:27Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:05:02 localhost systemd[1]: libpod-conmon-56ae3f9539021e306e64806c4488264116ec628f828dd0de87905d64af699e63.scope: Deactivated successfully. 
Dec 5 03:05:02 localhost python3[50939]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005546419 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005546419', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Dec 5 03:05:03 localhost python3[52970]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:05:04 localhost python3[53002]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 03:05:05 localhost python3[53052]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:05:05 localhost python3[53095]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921904.782558-84565-250121466676240/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:05:05 localhost python3[53157]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:05:06 localhost python3[53200]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921905.5611885-84565-166358358921793/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:05:06 localhost python3[53262]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:05:07 localhost python3[53305]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921906.4070196-84800-219549141656372/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:05:07 localhost python3[53367]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:05:07 localhost python3[53410]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921907.2142885-84820-162795727786049/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:05:08 localhost python3[53440]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 03:05:08 localhost systemd[1]: Reloading.
Dec 5 03:05:08 localhost systemd-rc-local-generator[53465]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:05:08 localhost systemd-sysv-generator[53468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:05:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:05:08 localhost systemd[1]: Reloading.
Dec 5 03:05:08 localhost systemd-rc-local-generator[53500]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:05:08 localhost systemd-sysv-generator[53504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:05:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:05:08 localhost systemd[1]: Starting TripleO Container Shutdown...
Dec 5 03:05:08 localhost systemd[1]: Finished TripleO Container Shutdown.
Dec 5 03:05:09 localhost python3[53563]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:05:09 localhost python3[53606]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921909.0836911-84857-100220658130045/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:05:10 localhost python3[53668]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:05:10 localhost python3[53711]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921909.964697-84932-127468377186739/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:05:11 localhost python3[53741]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 03:05:11 localhost systemd[1]: Reloading.
Dec 5 03:05:11 localhost systemd-rc-local-generator[53770]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:05:11 localhost systemd-sysv-generator[53774]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:05:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:05:11 localhost systemd[1]: Reloading.
Dec 5 03:05:11 localhost systemd-rc-local-generator[53806]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:05:11 localhost systemd-sysv-generator[53810]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:05:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:05:11 localhost systemd[1]: Starting Create netns directory...
Dec 5 03:05:11 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 5 03:05:11 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 5 03:05:11 localhost systemd[1]: Finished Create netns directory.
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: ec38952662567c94fcd33f4598790a0c
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: da9a0dc7b40588672419e3ce10063e21
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: c48ee961a201e2ecc5561337e7450232
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: f3fe7c52055154c7f97b988e301af0d7
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: f3fe7c52055154c7f97b988e301af0d7
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: f3fe7c52055154c7f97b988e301af0d7
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: f3fe7c52055154c7f97b988e301af0d7
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: f3fe7c52055154c7f97b988e301af0d7
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: f3fe7c52055154c7f97b988e301af0d7
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: 7903f5b91b4ea11c31e5f2ba22a27dd3
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 36d3201998d10321ffa6261c2854a42f
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 36d3201998d10321ffa6261c2854a42f
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: f3fe7c52055154c7f97b988e301af0d7
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: f3fe7c52055154c7f97b988e301af0d7
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: a79d47e79d9c2e42edb251b1a5fb6c64
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7
Dec 5 03:05:12 localhost python3[53834]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: f3fe7c52055154c7f97b988e301af0d7
Dec 5 03:05:13 localhost python3[53892]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1
config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 5 03:05:13 localhost podman[53930]: 2025-12-05 08:05:13.822849211 +0000 UTC m=+0.054792663 container create 767af5e361297d381608e32b51cbefa0c44548ee5a1e7d8a3b12b26cdbd724af (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, container_name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Dec 5 03:05:13 localhost systemd[1]: Started libpod-conmon-767af5e361297d381608e32b51cbefa0c44548ee5a1e7d8a3b12b26cdbd724af.scope. Dec 5 03:05:13 localhost systemd[1]: Started libcrun container. 
Dec 5 03:05:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49c0bfdfd68cdef0107452428014cc1318b2cf8507d683119189823e098b7e5/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Dec 5 03:05:13 localhost podman[53930]: 2025-12-05 08:05:13.890854258 +0000 UTC m=+0.122797720 container init 767af5e361297d381608e32b51cbefa0c44548ee5a1e7d8a3b12b26cdbd724af (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, vendor=Red Hat, Inc., container_name=metrics_qdr_init_logs, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, architecture=x86_64) Dec 5 03:05:13 localhost podman[53930]: 2025-12-05 08:05:13.792413267 +0000 UTC m=+0.024356739 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 5 03:05:13 localhost podman[53930]: 2025-12-05 08:05:13.899940286 +0000 UTC m=+0.131883748 container start 767af5e361297d381608e32b51cbefa0c44548ee5a1e7d8a3b12b26cdbd724af (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, container_name=metrics_qdr_init_logs, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd) Dec 5 03:05:13 localhost podman[53930]: 2025-12-05 08:05:13.900452562 +0000 UTC m=+0.132396014 container attach 767af5e361297d381608e32b51cbefa0c44548ee5a1e7d8a3b12b26cdbd724af (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, container_name=metrics_qdr_init_logs, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1761123044, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:05:13 localhost podman[53930]: 2025-12-05 08:05:13.902892397 +0000 UTC m=+0.134835859 container died 767af5e361297d381608e32b51cbefa0c44548ee5a1e7d8a3b12b26cdbd724af (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, config_id=tripleo_step1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com) Dec 5 03:05:13 localhost systemd[1]: libpod-767af5e361297d381608e32b51cbefa0c44548ee5a1e7d8a3b12b26cdbd724af.scope: Deactivated successfully. Dec 5 03:05:13 localhost podman[53949]: 2025-12-05 08:05:13.99194856 +0000 UTC m=+0.074830228 container cleanup 767af5e361297d381608e32b51cbefa0c44548ee5a1e7d8a3b12b26cdbd724af (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=metrics_qdr_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:46Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:05:13 localhost systemd[1]: libpod-conmon-767af5e361297d381608e32b51cbefa0c44548ee5a1e7d8a3b12b26cdbd724af.scope: Deactivated successfully. 
Dec 5 03:05:14 localhost python3[53892]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd Dec 5 03:05:14 localhost podman[54021]: 2025-12-05 08:05:14.329777489 +0000 UTC m=+0.063093628 container create a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:05:14 
localhost systemd[1]: Started libpod-conmon-a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.scope. Dec 5 03:05:14 localhost systemd[1]: Started libcrun container. Dec 5 03:05:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6ced32e5cb1e91cad73934204d8f5cbbf79aafab7ac712dfe07034e18c0d6e/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff) Dec 5 03:05:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6ced32e5cb1e91cad73934204d8f5cbbf79aafab7ac712dfe07034e18c0d6e/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Dec 5 03:05:14 localhost podman[54021]: 2025-12-05 08:05:14.296640472 +0000 UTC m=+0.029956631 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 5 03:05:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:05:14 localhost podman[54021]: 2025-12-05 08:05:14.41259001 +0000 UTC m=+0.145906169 container init a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1) Dec 5 03:05:14 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:05:14 localhost podman[54021]: 2025-12-05 08:05:14.446717177 +0000 UTC m=+0.180033366 container start a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, version=17.1.12, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, distribution-scope=public, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:05:14 localhost python3[53892]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ec38952662567c94fcd33f4598790a0c --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 5 03:05:14 localhost podman[54044]: 2025-12-05 08:05:14.546055026 +0000 UTC m=+0.094712777 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., container_name=metrics_qdr, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:05:14 localhost podman[54044]: 2025-12-05 08:05:14.728664471 +0000 UTC m=+0.277322222 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., container_name=metrics_qdr, config_id=tripleo_step1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:05:14 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
Dec 5 03:05:14 localhost systemd[1]: var-lib-containers-storage-overlay-b49c0bfdfd68cdef0107452428014cc1318b2cf8507d683119189823e098b7e5-merged.mount: Deactivated successfully.
Dec 5 03:05:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-767af5e361297d381608e32b51cbefa0c44548ee5a1e7d8a3b12b26cdbd724af-userdata-shm.mount: Deactivated successfully.
Dec 5 03:05:14 localhost python3[54117]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:05:15 localhost python3[54133]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 03:05:15 localhost python3[54194]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764921915.2420318-85061-77613752128391/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:05:16 localhost python3[54210]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 5 03:05:16 localhost systemd[1]: Reloading.
Dec 5 03:05:16 localhost systemd-rc-local-generator[54234]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:05:16 localhost systemd-sysv-generator[54238]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:05:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:05:16 localhost python3[54262]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 03:05:16 localhost systemd[1]: Reloading.
Dec 5 03:05:17 localhost systemd-sysv-generator[54291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:05:17 localhost systemd-rc-local-generator[54287]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:05:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:05:17 localhost systemd[1]: Starting metrics_qdr container...
Dec 5 03:05:17 localhost systemd[1]: Started metrics_qdr container.
Dec 5 03:05:17 localhost python3[54343]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:05:19 localhost python3[54464]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005546419 step=1 update_config_hash_only=False Dec 5 03:05:19 localhost python3[54480]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:05:19 localhost python3[54496]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 5 03:05:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:05:45 localhost podman[54497]: 2025-12-05 08:05:45.184838361 +0000 UTC m=+0.070520026 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:05:45 localhost podman[54497]: 2025-12-05 08:05:45.339765468 +0000 UTC m=+0.225447203 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:05:45 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:06:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:06:16 localhost podman[54603]: 2025-12-05 08:06:16.18240118 +0000 UTC m=+0.071466076 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Dec 5 03:06:16 localhost podman[54603]: 2025-12-05 08:06:16.355584028 +0000 UTC m=+0.244648964 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:06:16 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:06:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:06:47 localhost podman[54631]: 2025-12-05 08:06:47.186922474 +0000 UTC m=+0.079116400 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team) Dec 5 03:06:47 localhost podman[54631]: 2025-12-05 08:06:47.382305973 +0000 UTC m=+0.274499909 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vcs-type=git, version=17.1.12, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 5 03:06:47 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:07:17 localhost sshd[54738]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:07:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:07:18 localhost systemd[1]: tmp-crun.yH5VjD.mount: Deactivated successfully. 
Dec 5 03:07:18 localhost podman[54740]: 2025-12-05 08:07:18.229841505 +0000 UTC m=+0.119365347 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, config_id=tripleo_step1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Dec 5 03:07:18 localhost podman[54740]: 2025-12-05 08:07:18.439691378 +0000 UTC m=+0.329215220 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, config_id=tripleo_step1, container_name=metrics_qdr, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 5 03:07:18 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:07:19 localhost sshd[54769]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:07:22 localhost sshd[54771]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:07:24 localhost sshd[54773]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:07:25 localhost sshd[54775]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:07:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:07:49 localhost systemd[1]: tmp-crun.EIzpiZ.mount: Deactivated successfully. 
Dec 5 03:07:49 localhost podman[54777]: 2025-12-05 08:07:49.183433645 +0000 UTC m=+0.068710414 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 5 03:07:49 localhost podman[54777]: 2025-12-05 08:07:49.378114409 +0000 UTC m=+0.263391178 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1) Dec 5 03:07:49 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:08:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:08:20 localhost podman[54885]: 2025-12-05 08:08:20.185637474 +0000 UTC m=+0.070874951 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1761123044) Dec 5 03:08:20 localhost podman[54885]: 2025-12-05 08:08:20.388685426 +0000 UTC m=+0.273922963 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container) Dec 5 03:08:20 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:08:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:08:51 localhost podman[54915]: 2025-12-05 08:08:51.173698655 +0000 UTC m=+0.067937969 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com) Dec 5 03:08:51 localhost podman[54915]: 2025-12-05 08:08:51.377487431 +0000 UTC m=+0.271726715 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Dec 5 03:08:51 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:09:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:09:22 localhost systemd[1]: tmp-crun.xNTcCr.mount: Deactivated successfully. 
Dec 5 03:09:22 localhost podman[55021]: 2025-12-05 08:09:22.183436495 +0000 UTC m=+0.075805212 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, release=1761123044, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, vendor=Red Hat, Inc.) 
Dec 5 03:09:22 localhost podman[55021]: 2025-12-05 08:09:22.401993707 +0000 UTC m=+0.294362414 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:09:22 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:09:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:09:53 localhost podman[55049]: 2025-12-05 08:09:53.157772689 +0000 UTC m=+0.051920024 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, release=1761123044, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:09:53 localhost podman[55049]: 2025-12-05 08:09:53.329729982 +0000 UTC m=+0.223877337 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, distribution-scope=public, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:09:53 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:10:07 localhost ceph-osd[32336]: osd.3 pg_epoch: 22 pg[2.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2,1,3] r=2 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:09 localhost ceph-osd[31386]: osd.0 pg_epoch: 24 pg[3.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1,2,0] r=2 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:10 localhost ceph-osd[32336]: osd.3 pg_epoch: 26 pg[4.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [3,5,1] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:12 localhost ceph-osd[32336]: osd.3 pg_epoch: 27 pg[4.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [3,5,1] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:14 localhost ceph-osd[32336]: osd.3 pg_epoch: 28 pg[5.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [4,3,2] r=1 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:10:24 localhost systemd[1]: tmp-crun.W2SEHz.mount: Deactivated successfully. 
Dec 5 03:10:24 localhost podman[55157]: 2025-12-05 08:10:24.189932491 +0000 UTC m=+0.077487119 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:10:24 localhost podman[55157]: 2025-12-05 08:10:24.377665352 +0000 UTC m=+0.265219970 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, version=17.1.12, container_name=metrics_qdr, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 5 03:10:24 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
Dec 5 03:10:26 localhost ceph-osd[31386]: osd.0 pg_epoch: 34 pg[6.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0,5,1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:27 localhost ceph-osd[31386]: osd.0 pg_epoch: 35 pg[6.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0,5,1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:30 localhost ceph-osd[32336]: osd.3 pg_epoch: 36 pg[7.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [5,1,3] r=2 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:37 localhost ceph-osd[31386]: osd.0 pg_epoch: 41 pg[3.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=12.317423820s) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 active pruub 1124.094848633s@ mbc={}] start_peering_interval up [1,2,0] -> [1,2,0], acting [1,2,0] -> [1,2,0], acting_primary 1 -> 1, up_primary 1 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:37 localhost ceph-osd[31386]: osd.0 pg_epoch: 41 pg[3.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=12.315114021s) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1124.094848633s@ mbc={}] state: transitioning to Stray Dec 5 03:10:37 localhost ceph-osd[32336]: osd.3 pg_epoch: 41 pg[2.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=41 pruub=10.294002533s) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active pruub 1118.356811523s@ mbc={}] start_peering_interval up [2,1,3] -> [2,1,3], acting [2,1,3] -> [2,1,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:37 localhost ceph-osd[32336]: osd.3 pg_epoch: 41 pg[2.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=41 pruub=10.292305946s) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1118.356811523s@ mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.1e( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.1d( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.b( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 
lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.8( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.4( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.7( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.2( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.5( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.d( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.e( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.10( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.1c( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 
5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.11( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.14( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.13( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.16( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.18( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.1a( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[31386]: osd.0 pg_epoch: 42 pg[3.19( empty local-lis/les=24/25 n=0 ec=41/24 lis/c=24/24 les/c/f=25/25/0 sis=41) [1,2,0] r=2 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.18( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.17( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.15( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.16( empty 
local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.14( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.13( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.12( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.10( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.11( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.f( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.c( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.b( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.d( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.e( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.a( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.9( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.3( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.7( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 
pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.5( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.1( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.2( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.4( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.8( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.19( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.1a( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.1b( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.1d( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.1f( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.1e( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.6( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:38 localhost ceph-osd[32336]: osd.3 pg_epoch: 42 pg[2.1c( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [2,1,3] r=2 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.19( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.015143394s) [0,2,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.814331055s@ mbc={}] 
start_peering_interval up [1,2,0] -> [0,2,1], acting [1,2,0] -> [0,2,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013551712s) [0,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.812866211s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,1], acting [1,2,0] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.19( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.015143394s) [0,2,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.814331055s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013551712s) [0,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.812866211s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013244629s) [2,4,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.812744141s@ mbc={}] start_peering_interval up [1,2,0] -> [2,4,0], acting [1,2,0] -> [2,4,0], acting_primary 1 -> 2, up_primary 1 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.14( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013134003s) [2,4,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.812744141s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012698174s) [0,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.812377930s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,1], acting [1,2,0] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014106750s) [2,1,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.813842773s@ mbc={}] start_peering_interval up [1,2,0] -> [2,1,0], acting [1,2,0] -> [2,1,0], acting_primary 1 -> 2, up_primary 1 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012698174s) [0,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.812377930s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014070511s) [2,1,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.813842773s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 
sis=43 pruub=15.016363144s) [0,5,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.816162109s@ mbc={}] start_peering_interval up [1,2,0] -> [0,5,4], acting [1,2,0] -> [0,5,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.016363144s) [0,5,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.816162109s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.016049385s) [3,4,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.816040039s@ mbc={}] start_peering_interval up [1,2,0] -> [3,4,5], acting [1,2,0] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.016022682s) [3,4,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.816040039s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.015479088s) [1,2,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.815551758s@ mbc={}] start_peering_interval up [1,2,0] -> [1,2,3], acting [1,2,0] -> [1,2,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.015439034s) [1,2,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.815551758s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.16( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013146400s) [2,3,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.812988281s@ mbc={}] start_peering_interval up [1,2,0] -> [2,3,4], acting [1,2,0] -> [2,3,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.015986443s) [5,3,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.816040039s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013349533s) [2,3,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.813598633s@ mbc={}] start_peering_interval up [1,2,0] -> [2,3,1], acting [1,2,0] -> [2,3,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.13( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 
les/c/f=42/42/0 sis=43 pruub=15.013327599s) [2,3,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.813598633s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.015703201s) [5,3,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.816040039s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.015308380s) [4,2,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.815673828s@ mbc={}] start_peering_interval up [1,2,0] -> [4,2,3], acting [1,2,0] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.16( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012802124s) [2,3,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.812988281s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.015279770s) [4,2,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.815673828s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.015643120s) [5,3,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.816040039s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014542580s) [3,5,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.814819336s@ mbc={}] start_peering_interval up [1,2,0] -> [3,5,1], acting [1,2,0] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.015645027s) [5,3,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.816040039s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014349937s) [3,5,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.814819336s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014706612s) [4,0,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.815307617s@ mbc={}] start_peering_interval up [1,2,0] -> [4,0,5], acting [1,2,0] -> [4,0,5], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.7( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 
sis=43 pruub=15.015498161s) [3,1,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.816040039s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,2], acting [1,2,0] -> [3,1,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.7( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.015468597s) [3,1,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.816040039s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.8( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014669418s) [4,0,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.815307617s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013867378s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.814575195s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,4], acting [1,2,0] -> [0,2,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013723373s) [3,1,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.814453125s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,2], acting [1,2,0] -> [3,1,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.4( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013690948s) [3,1,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.814453125s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013867378s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.814575195s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013798714s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.814575195s@ mbc={}] start_peering_interval up [1,2,0] -> [2,0,4], acting [1,2,0] -> [2,0,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013766289s) [2,0,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.814575195s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014346123s) [3,4,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.815307617s@ mbc={}] start_peering_interval up [1,2,0] -> [3,4,5], acting [1,2,0] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 
03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014075279s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.814941406s@ mbc={}] start_peering_interval up [1,2,0] -> [0,2,4], acting [1,2,0] -> [0,2,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014075279s) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.814941406s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.2( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014254570s) [3,4,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.815307617s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014903069s) [5,3,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.816040039s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,4], acting [1,2,0] -> [5,3,4], acting_primary 1 -> 5, up_primary 1 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012412071s) [5,3,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.813720703s@ mbc={}] start_peering_interval up [1,2,0] -> [5,3,1], acting [1,2,0] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.5( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014836311s) [5,3,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.816040039s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012381554s) [5,3,1] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.813720703s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012174606s) [1,5,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.813598633s@ mbc={}] start_peering_interval up [1,2,0] -> [1,5,0], acting [1,2,0] -> [1,5,0], acting_primary 1 -> 1, up_primary 1 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012593269s) [5,1,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.814086914s@ mbc={}] start_peering_interval up [1,2,0] -> [5,1,3], acting [1,2,0] -> [5,1,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.e( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012153625s) [1,5,0] 
r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.813598633s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011672974s) [4,5,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.813354492s@ mbc={}] start_peering_interval up [1,2,0] -> [4,5,0], acting [1,2,0] -> [4,5,0], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.d( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012560844s) [5,1,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.814086914s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.11( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011648178s) [4,5,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.813354492s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012642860s) [5,1,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.814208984s@ mbc={}] start_peering_interval up [1,2,0] -> [5,1,3], acting [1,2,0] -> [5,1,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.10( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.012563705s) [5,1,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.814208984s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011189461s) [3,1,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.812866211s@ mbc={}] start_peering_interval up [1,2,0] -> [3,1,5], acting [1,2,0] -> [3,1,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.18( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.011166573s) [3,1,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.812866211s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013974190s) [4,3,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.815795898s@ mbc={}] start_peering_interval up [1,2,0] -> [4,3,2], acting [1,2,0] -> [4,3,2], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1a( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013937950s) [4,3,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.815795898s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.014001846s) [4,5,3] r=-1 lpr=43 
pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1128.815917969s@ mbc={}] start_peering_interval up [1,2,0] -> [4,5,3], acting [1,2,0] -> [4,5,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=41/42 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.013937950s) [4,5,3] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.815917969s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.1e( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,4,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010813713s) [0,2,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.091552734s@ mbc={}] start_peering_interval up [2,1,3] -> [0,2,4], acting [2,1,3] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010704994s) [0,2,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.091552734s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1e( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010471344s) [3,1,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.091430664s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,5], acting [2,1,3] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.18( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1e( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.010471344s) [3,1,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.091430664s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.001803398s) [3,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.083007812s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.001803398s) [3,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.083007812s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.4( 
empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.004553795s) [3,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.086181641s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,2], acting [2,1,3] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.004289627s) [3,4,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.085815430s@ mbc={}] start_peering_interval up [2,1,3] -> [3,4,5], acting [2,1,3] -> [3,4,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.4( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.004553795s) [3,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.086181641s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.004289627s) [3,4,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.085815430s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.7( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.003720284s) [3,1,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.085693359s@ mbc={}] start_peering_interval up [2,1,3] -> [3,1,5], acting [2,1,3] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.003720284s) [3,1,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.085693359s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009052277s) [2,1,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.091186523s@ mbc={}] start_peering_interval up [2,1,3] -> [2,1,0], acting [2,1,3] -> [2,1,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.009006500s) [2,1,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.091186523s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.2( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,4,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: 
transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.003128052s) [3,5,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.085693359s@ mbc={}] start_peering_interval up [2,1,3] -> [3,5,4], acting [2,1,3] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.003128052s) [3,5,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.085693359s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.b( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.001691818s) [1,3,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.085693359s@ mbc={}] start_peering_interval up [2,1,3] -> [1,3,2], acting [2,1,3] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.001646042s) [1,3,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.085693359s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.000684738s) [1,5,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.085693359s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,0], acting [2,1,3] -> [1,5,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.000628471s) [1,0,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.085693359s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,5], acting [2,1,3] -> [1,0,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.000573158s) [1,5,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.085693359s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.000570297s) [1,0,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.085693359s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.998769760s) [1,5,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.084228516s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,3], acting [2,1,3] -> [1,5,3], acting_primary 2 -> 1, 
up_primary 2 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.e( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.998742104s) [3,4,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.084228516s@ mbc={}] start_peering_interval up [2,1,3] -> [3,4,2], acting [2,1,3] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.998733521s) [1,5,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.084228516s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.e( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.998742104s) [3,4,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.084228516s@ mbc={}] state: transitioning to Primary Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.997999191s) [1,5,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.083618164s@ mbc={}] start_peering_interval up [2,1,3] -> [1,5,3], acting [2,1,3] -> [1,5,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.14( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.998076439s) [2,4,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.083740234s@ mbc={}] start_peering_interval up [2,1,3] -> [2,4,0], acting [2,1,3] -> [2,4,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.14( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.998040199s) [2,4,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.083740234s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.998074532s) [1,0,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.083862305s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,2], acting [2,1,3] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.998038292s) [1,0,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.083862305s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.997817039s) [4,2,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.083862305s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,3], acting [2,1,3] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 
pg[2.18( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.997775078s) [4,2,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.083862305s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.997839928s) [5,1,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.083862305s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,3], acting [2,1,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.997814178s) [5,1,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.083862305s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.997404099s) [1,5,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.083618164s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.12( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.996911049s) [4,3,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.083984375s@ mbc={}] start_peering_interval up [2,1,3] -> [4,3,2], acting [2,1,3] -> [4,3,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.996834755s) [5,1,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.083862305s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,0], acting [2,1,3] -> [5,1,0], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.004206657s) [2,4,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.091430664s@ mbc={}] start_peering_interval up [2,1,3] -> [2,4,3], acting [2,1,3] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.996780396s) [5,1,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.083862305s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.12( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.996878624s) [4,3,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.083984375s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.10( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.996872902s) [4,0,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.083984375s@ mbc={}] start_peering_interval up [2,1,3] -> [4,0,5], acting [2,1,3] -> [4,0,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 
2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.004163742s) [2,4,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.091430664s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.10( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.996821404s) [4,0,5] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.083984375s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.996652603s) [5,3,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.083984375s@ mbc={}] start_peering_interval up [2,1,3] -> [5,3,4], acting [2,1,3] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.996603966s) [5,3,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.083984375s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.003927231s) [5,3,1] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.091552734s@ mbc={}] start_peering_interval up [2,1,3] -> [5,3,1], acting [2,1,3] -> [5,3,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.003894806s) [5,3,1] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.091552734s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[4.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=43 pruub=12.892807961s) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active pruub 1122.980590820s@ mbc={}] start_peering_interval up [3,5,1] -> [3,5,1], acting [3,5,1] -> [3,5,1], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.998322487s) [4,2,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.086181641s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,0], acting [2,1,3] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.998287201s) [4,2,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.086181641s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[5.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43 pruub=15.686183929s) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 active pruub 
1125.774169922s@ mbc={}] start_peering_interval up [4,3,2] -> [4,3,2], acting [4,3,2] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.003662109s) [5,1,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.091674805s@ mbc={}] start_peering_interval up [2,1,3] -> [5,1,3], acting [2,1,3] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.003633499s) [5,1,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.091674805s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.002847672s) [5,0,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.091064453s@ mbc={}] start_peering_interval up [2,1,3] -> [5,0,4], acting [2,1,3] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.997961044s) [1,0,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.086181641s@ mbc={}] start_peering_interval up [2,1,3] -> [1,0,2], acting [2,1,3] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.002324104s) [5,0,4] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.091064453s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=14.997200966s) [1,0,2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.086181641s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.002296448s) [1,2,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.091552734s@ mbc={}] start_peering_interval up [2,1,3] -> [1,2,3], acting [2,1,3] -> [1,2,3], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.002267838s) [1,2,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.091552734s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.001770020s) [4,5,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.091430664s@ mbc={}] start_peering_interval up [2,1,3] -> [4,5,0], acting [2,1,3] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, 
features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.001737595s) [4,5,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.091430664s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.001635551s) [4,2,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1125.091430664s@ mbc={}] start_peering_interval up [2,1,3] -> [4,2,0], acting [2,1,3] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=15.001589775s) [4,2,0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.091430664s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[5.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=43 pruub=15.682627678s) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.774169922s@ mbc={}] state: transitioning to Stray Dec 5 03:10:39 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[4.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=43 pruub=12.892807961s) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.980590820s@ mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1e( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.10( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.1f( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.11( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.11( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.10( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.12( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.13( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 
lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.13( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.15( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,0,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.12( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.15( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.14( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.16( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.17( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.14( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.17( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.8( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.c( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,0,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.9( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.8( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.9( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: 
osd.3 pg_epoch: 44 pg[5.a( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,5,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.b( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.b( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.16( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.5( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,0,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.15( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,5,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.c( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.14( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2,4,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.a( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.d( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.d( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.e( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.f( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 
unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,2,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.10( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,0,5] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.c( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.7( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,2,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.4( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.5( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.6( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.1( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.2( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.6( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.3( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.7( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.5( empty local-lis/les=28/29 n=0 
ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.2( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.4( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.3( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.e( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.1e( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1f( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.1d( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1c( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.1c( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1d( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.f( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.1b( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1a( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1b( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: 
transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.1a( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.19( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.18( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[5.18( empty local-lis/les=28/29 n=0 ec=43/28 lis/c=28/28 les/c/f=29/29/0 sis=43) [4,3,2] r=1 lpr=43 pi=[28,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.19( empty local-lis/les=26/27 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.13( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [2,3,1] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.16( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [2,3,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2,1,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.1a( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,3,2] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,1,0] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.d( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,1,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,0,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.a( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,3,1] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.9( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,2,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.c( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,3,1] r=1 
lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.10( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,1,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.5( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,3,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.1c( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [5,3,1] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.1b( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [4,5,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 43 pg[3.1d( empty local-lis/les=0/0 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [1,2,3] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 44 pg[3.19( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,5,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[31386]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [0,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[3.b( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,5,1] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[3.4( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,2] r=0 lpr=43 
pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[2.4( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[3.7( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[3.1e( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,4,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[2.e( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,4,2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[2.9( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,5,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[2.19( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,2,4] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[3.2( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,4,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.0( empty local-lis/les=43/44 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[2.1( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,4,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[2.6( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[3.18( empty local-lis/les=43/44 n=0 ec=41/24 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[2.1e( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [3,1,5] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost 
ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1e( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.15( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.b( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.17( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.6( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.3( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 
lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.16( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1d( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1f( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.c( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.19( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 
localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:40 localhost ceph-osd[32336]: osd.3 pg_epoch: 44 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=26/26 les/c/f=27/27/0 sis=43) [3,5,1] r=0 lpr=43 pi=[26,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:41 localhost ceph-osd[31386]: osd.0 pg_epoch: 45 pg[6.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=10.351683617s) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 1126.396972656s@ mbc={}] start_peering_interval up [0,5,1] -> [0,5,1], acting [0,5,1] -> [0,5,1], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:41 localhost ceph-osd[31386]: osd.0 pg_epoch: 45 pg[6.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=10.351683617s) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown pruub 1126.396972656s@ mbc={}] state: transitioning to Primary Dec 5 03:10:41 localhost ceph-osd[32336]: osd.3 pg_epoch: 45 pg[7.0( v 38'39 (0'0,38'39] local-lis/les=36/37 n=22 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=12.795085907s) [5,1,3] r=2 lpr=45 pi=[36,45)/1 luod=0'0 lua=38'37 crt=38'39 lcod 38'38 mlcod 0'0 active pruub 1125.122924805s@ mbc={}] start_peering_interval up [5,1,3] -> [5,1,3], acting [5,1,3] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:41 localhost ceph-osd[32336]: osd.3 pg_epoch: 45 pg[7.0( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=45 pruub=12.793397903s) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 lcod 38'38 mlcod 0'0 unknown NOTIFY pruub 1125.122924805s@ mbc={}] state: transitioning to Stray Dec 5 03:10:41 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.4 scrub starts Dec 5 03:10:42 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.19 scrub starts Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1d( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.12( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1e( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1c( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.13( 
empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1f( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.10( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.11( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.17( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.16( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.14( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.15( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.8( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.5( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.9( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.a( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.4( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: 
transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.6( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.2( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.c( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.d( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.e( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.19( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.18( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1a( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1b( empty local-lis/les=34/35 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.8( v 38'39 lc 
0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.e( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.c( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.6( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=2 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.5( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=2 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=2 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.2( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=2 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.4( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=2 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.7( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.1( v 38'39 (0'0,38'39] local-lis/les=36/37 n=2 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[32336]: osd.3 pg_epoch: 46 pg[7.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=36/37 n=1 ec=45/36 lis/c=36/36 les/c/f=37/37/0 sis=45) [5,1,3] r=2 lpr=45 pi=[36,45)/1 crt=38'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 
les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.0( empty local-lis/les=45/46 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.10( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.16( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.12( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.9( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.a( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated 
Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.5( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.19( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.18( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1a( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1b( empty local-lis/les=45/46 n=0 ec=45/34 
lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:42 localhost ceph-osd[31386]: osd.0 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=34/34 les/c/f=35/35/0 sis=45) [0,5,1] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:44 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.4 scrub ok Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.1e( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1e( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.312142372s) [4,5,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.636474609s@ mbc={}] start_peering_interval up [0,5,1] -> [4,5,3], acting [0,5,1] -> [4,5,3], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.a( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,1,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1c( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.310873032s) [1,3,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635375977s@ mbc={}] start_peering_interval up [0,5,1] -> [1,3,5], acting [0,5,1] -> [1,3,5], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1c( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.310673714s) [1,3,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635375977s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1d( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.311644554s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.636596680s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1d( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 
sis=47 pruub=13.311572075s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.636596680s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1e( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.312041283s) [4,5,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.636474609s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1f( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.310725212s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.636108398s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.13( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.309302330s) [3,4,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.634765625s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,2], acting [0,5,1] -> [3,4,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.10( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.309383392s) [0,1,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.634887695s@ mbc={}] start_peering_interval up [0,5,1] -> [0,1,2], acting [0,5,1] -> [0,1,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.12( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.309689522s) [4,2,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635131836s@ mbc={}] start_peering_interval up [0,5,1] -> [4,2,0], acting [0,5,1] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1f( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.310600281s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.636108398s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.13( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.309203148s) [3,4,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.634765625s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.12( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.309624672s) [4,2,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635131836s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.10( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.309383392s) [0,1,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.634887695s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.16( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 
pruub=13.309002876s) [0,5,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.634887695s@ mbc={}] start_peering_interval up [0,5,1] -> [0,5,4], acting [0,5,1] -> [0,5,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.11( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.309146881s) [3,1,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635009766s@ mbc={}] start_peering_interval up [0,5,1] -> [3,1,2], acting [0,5,1] -> [3,1,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.17( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.308871269s) [1,0,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.634887695s@ mbc={}] start_peering_interval up [0,5,1] -> [1,0,2], acting [0,5,1] -> [1,0,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.16( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.309002876s) [0,5,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.634887695s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.17( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.308810234s) [1,0,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.634887695s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.11( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.309089661s) [3,1,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635009766s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.14( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.308438301s) [3,5,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635009766s@ mbc={}] start_peering_interval up [0,5,1] -> [3,5,4], acting [0,5,1] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.14( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.308358192s) [3,5,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635009766s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.3( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,5,1] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.15( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307695389s) [2,4,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635009766s@ mbc={}] start_peering_interval up [0,5,1] -> [2,4,0], acting [0,5,1] -> [2,4,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 
pg[6.15( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307648659s) [2,4,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635009766s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.b( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307476997s) [3,2,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635009766s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,1], acting [0,5,1] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.8( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.308835983s) [2,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.636474609s@ mbc={}] start_peering_interval up [0,5,1] -> [2,1,3], acting [0,5,1] -> [2,1,3], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.9( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307612419s) [0,1,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635253906s@ mbc={}] start_peering_interval up [0,5,1] -> [0,1,5], acting [0,5,1] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.8( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.308769226s) [2,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.636474609s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.b( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307379723s) [3,2,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635009766s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.9( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307612419s) [0,1,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.635253906s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.a( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307654381s) [5,0,4] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635375977s@ mbc={}] start_peering_interval up [0,5,1] -> [5,0,4], acting [0,5,1] -> [5,0,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.a( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307593346s) [5,0,4] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635375977s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.5( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307796478s) [5,1,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635864258s@ mbc={}] start_peering_interval up [0,5,1] -> [5,1,0], acting [0,5,1] -> [5,1,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> 2, 
features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.5( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307736397s) [5,1,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635864258s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.7( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.308005333s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.636230469s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.4( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307333946s) [3,2,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635620117s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,1], acting [0,5,1] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.6( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307602882s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635864258s@ mbc={}] start_peering_interval up [0,5,1] -> [3,5,1], acting [0,5,1] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.7( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307931900s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.636230469s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.6( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307542801s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635864258s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.4( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307243347s) [3,2,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635620117s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307534218s) [1,5,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635986328s@ mbc={}] start_peering_interval up [0,5,1] -> [1,5,3], acting [0,5,1] -> [1,5,3], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307495117s) [1,5,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635986328s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.3( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307475090s) [5,4,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635986328s@ 
mbc={}] start_peering_interval up [0,5,1] -> [5,4,0], acting [0,5,1] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.3( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307427406s) [5,4,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635986328s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.2( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307503700s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.636230469s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.2( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.307458878s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.636230469s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.d( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306790352s) [2,3,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635620117s@ mbc={}] start_peering_interval up [0,5,1] -> [2,3,1], acting [0,5,1] -> [2,3,1], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.d( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306727409s) [2,3,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635620117s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.f( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306806564s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635742188s@ mbc={}] start_peering_interval up [0,5,1] -> [3,4,5], acting [0,5,1] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.13( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525337219s) [2,4,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129028320s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525281906s) [2,4,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.129028320s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.11( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,2] r=0 
lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524538994s) [3,4,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129028320s@ mbc={}] start_peering_interval up [4,3,2] -> [3,4,5], acting [4,3,2] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524538994s) [3,4,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1127.129028320s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.10( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524009705s) [2,4,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.128662109s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.10( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523921967s) [2,4,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.128662109s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.c( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306966782s) [3,2,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635986328s@ mbc={}] start_peering_interval up [0,5,1] -> [3,2,4], acting [0,5,1] -> [3,2,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.14( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.f( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306668282s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635742188s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.c( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306921959s) [3,2,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635986328s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.e( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306177139s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.635620117s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,4], acting [0,5,1] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.18( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306881905s) [0,2,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.636352539s@ mbc={}] start_peering_interval up 
[0,5,1] -> [0,2,4], acting [0,5,1] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.19( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306659698s) [5,3,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.636108398s@ mbc={}] start_peering_interval up [0,5,1] -> [5,3,1], acting [0,5,1] -> [5,3,1], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524318695s) [0,1,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.130126953s@ mbc={}] start_peering_interval up [4,3,2] -> [0,1,2], acting [4,3,2] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.522808075s) [2,4,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.128662109s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,3], acting [4,3,2] -> [2,4,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524279594s) [0,1,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.130126953s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.531256676s) [3,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.137207031s@ mbc={}] start_peering_interval up [4,3,2] -> [3,2,4], acting [4,3,2] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.531256676s) [3,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1127.137207031s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.522750854s) [2,4,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.128662109s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.e( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306042671s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.635620117s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.18( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306881905s) [0,2,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.636352539s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 
pruub=10.530091286s) [2,3,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.136230469s@ mbc={}] start_peering_interval up [4,3,2] -> [2,3,1], acting [4,3,2] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.528956413s) [0,5,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.135131836s@ mbc={}] start_peering_interval up [4,3,2] -> [0,5,1], acting [4,3,2] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.530050278s) [2,3,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.136230469s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523042679s) [0,2,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129394531s@ mbc={}] start_peering_interval up [4,3,2] -> [0,2,4], acting [4,3,2] -> [0,2,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.528910637s) [0,5,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.135131836s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.522987366s) [0,2,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.129394531s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.531265259s) [3,5,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.137695312s@ mbc={}] start_peering_interval up [4,3,2] -> [3,5,4], acting [4,3,2] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.19( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306439400s) [5,3,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.636108398s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1b( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306524277s) [1,2,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.636230469s@ mbc={}] start_peering_interval up [0,5,1] -> [1,2,0], acting [0,5,1] -> [1,2,0], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.17( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523078918s) [3,1,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129638672s@ mbc={}] start_peering_interval up [4,3,2] -> [3,1,5], 
acting [4,3,2] -> [3,1,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.1d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.17( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523078918s) [3,1,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1127.129638672s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.6( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.531265259s) [3,5,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1127.137695312s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1b( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306487083s) [1,2,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.636230469s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1a( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306409836s) [5,4,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1133.636230469s@ mbc={}] start_peering_interval up [0,5,1] -> [5,4,0], acting [0,5,1] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[6.1a( empty local-lis/les=45/46 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.306362152s) [5,4,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.636230469s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.1f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.5( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,2,1] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.19( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,5,1] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.529098511s) [3,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.137329102s@ mbc={}] start_peering_interval up [4,3,2] -> [3,2,4], acting [4,3,2] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.529734612s) [2,0,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.137939453s@ mbc={}] start_peering_interval up [4,3,2] -> [2,0,4], acting [4,3,2] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 
4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.529098511s) [3,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1127.137329102s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.529686928s) [2,0,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.137939453s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.19( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.526921272s) [1,3,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.135253906s@ mbc={}] start_peering_interval up [3,5,1] -> [1,3,2], acting [3,5,1] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.19( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.526882172s) [1,3,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.135253906s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.527808189s) [2,3,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.136352539s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,1], acting [3,5,1] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.528867722s) [0,5,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.137573242s@ mbc={}] start_peering_interval up [4,3,2] -> [0,5,1], acting [4,3,2] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.527675629s) [2,3,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.136352539s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.537956238s) [2,1,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.146728516s@ mbc={}] start_peering_interval up [4,3,2] -> [2,1,3], acting [4,3,2] -> [2,1,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.537841797s) [2,1,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.146728516s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525420189s) [2,3,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.134399414s@ mbc={}] 
start_peering_interval up [3,5,1] -> [2,3,1], acting [3,5,1] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525383949s) [2,3,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.134399414s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525964737s) [2,3,4] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.135498047s@ mbc={}] start_peering_interval up [3,5,1] -> [2,3,4], acting [3,5,1] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525929451s) [2,3,4] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.135498047s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.528396606s) [1,5,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.137939453s@ mbc={}] start_peering_interval up [4,3,2] -> [1,5,3], acting [4,3,2] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.527707100s) [2,4,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.137451172s@ mbc={}] start_peering_interval up [4,3,2] -> [2,4,0], acting [4,3,2] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.528781891s) [0,5,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.137573242s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1d( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523706436s) [4,5,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.133300781s@ mbc={}] start_peering_interval up [3,5,1] -> [4,5,3], acting [3,5,1] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1c( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.527676582s) [2,4,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.137451172s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1d( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523639679s) [4,5,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.133300781s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/26 
lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524649620s) [1,3,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.134643555s@ mbc={}] start_peering_interval up [3,5,1] -> [1,3,2], acting [3,5,1] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524580002s) [1,3,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.134643555s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.d( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.302039146s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1129.912231445s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.528327942s) [1,5,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.137939453s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.536946297s) [5,1,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.147216797s@ mbc={}] start_peering_interval up [4,3,2] -> [5,1,3], acting [4,3,2] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1f( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523056030s) [4,5,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.133300781s@ mbc={}] start_peering_interval up [3,5,1] -> [4,5,3], acting [3,5,1] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.d( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.301985741s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1129.912231445s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.536899567s) [5,1,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.147216797s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1f( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523009300s) [4,5,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.133300781s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.522987366s) [2,4,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.133178711s@ mbc={}] start_peering_interval up [3,5,1] -> [2,4,0], acting [3,5,1] -> [2,4,0], acting_primary 3 -> 2, up_primary 3 -> 2, 
role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523664474s) [1,5,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.134155273s@ mbc={}] start_peering_interval up [3,5,1] -> [1,5,3], acting [3,5,1] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.522687912s) [2,4,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.133178711s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.1( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.300527573s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1129.911010742s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523617744s) [1,5,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.134155273s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.522336960s) [0,1,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.132934570s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.1( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.300487518s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1129.911010742s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.522304535s) [0,1,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.132934570s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525101662s) [0,2,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.135864258s@ mbc={}] start_peering_interval up [4,3,2] -> [0,2,1], acting [4,3,2] -> [0,2,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525070190s) [0,2,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.135864258s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.301162720s) 
[2,1,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1129.911987305s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.519636154s) [0,1,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.130493164s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.301127434s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1129.911987305s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.519598961s) [0,1,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.130493164s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.3( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.521654129s) [1,5,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.132568359s@ mbc={}] start_peering_interval up [3,5,1] -> [1,5,3], acting [3,5,1] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.3( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.521615028s) [1,5,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.132568359s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525413513s) [5,0,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.136474609s@ mbc={}] start_peering_interval up [4,3,2] -> [5,0,1], acting [4,3,2] -> [5,0,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.522170067s) [4,2,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.133300781s@ mbc={}] start_peering_interval up [3,5,1] -> [4,2,0], acting [3,5,1] -> [4,2,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525358200s) [5,0,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.136474609s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.522133827s) [4,2,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.133300781s@ mbc={}] state: 
transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.3( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.300861359s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1129.912109375s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.6( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.519157410s) [4,3,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.130493164s@ mbc={}] start_peering_interval up [3,5,1] -> [4,3,2], acting [3,5,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.3( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.300820351s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1129.912109375s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.518952370s) [5,3,4] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.130371094s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,4], acting [4,3,2] -> [5,3,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.6( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.519115448s) [4,3,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.130493164s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.518921852s) [5,3,4] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.130371094s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.5( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.300169945s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1129.911621094s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.5( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.300072670s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1129.911621094s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.525197983s) [1,3,5] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.136840820s@ mbc={}] start_peering_interval up [4,3,2] -> [1,3,5], acting [4,3,2] -> [1,3,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost 
ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.520123482s) [3,4,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.131958008s@ mbc={}] start_peering_interval up [3,5,1] -> [3,4,5], acting [3,5,1] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.521434784s) [5,1,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.133056641s@ mbc={}] start_peering_interval up [3,5,1] -> [5,1,0], acting [3,5,1] -> [5,1,0], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.520123482s) [3,4,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1127.131958008s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.521354675s) [5,1,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.133056641s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523674965s) [4,0,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.135620117s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,2], acting [4,3,2] -> [4,0,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.c( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.522247314s) [5,3,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.134399414s@ mbc={}] start_peering_interval up [3,5,1] -> [5,3,1], acting [3,5,1] -> [5,3,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.299977303s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1129.911987305s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.524744034s) [1,3,5] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.136840820s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517670631s) [4,5,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129760742s@ mbc={}] start_peering_interval up [4,3,2] -> [4,5,0], acting [4,3,2] -> [4,5,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 
5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.c( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.522207260s) [5,3,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.134399414s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.299818039s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1129.911987305s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.d( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517628670s) [4,5,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.129760742s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.e( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.523640633s) [4,0,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.135620117s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.518718719s) [2,1,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.131103516s@ mbc={}] start_peering_interval up [3,5,1] -> [2,1,3], acting [3,5,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517311096s) [4,0,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129638672s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,5], acting [4,3,2] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.b( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517263412s) [4,0,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.129638672s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.518655777s) [2,1,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.131103516s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.9( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.298663139s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1129.911132812s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.9( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.298606873s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1129.911132812s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.8( empty local-lis/les=43/44 n=0 
ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.518239975s) [1,2,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.130981445s@ mbc={}] start_peering_interval up [3,5,1] -> [1,2,3], acting [3,5,1] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.518208504s) [1,2,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.130981445s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517045975s) [5,4,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129760742s@ mbc={}] start_peering_interval up [4,3,2] -> [5,4,0], acting [4,3,2] -> [5,4,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.b( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.298798561s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1129.911621094s@ mbc={}] start_peering_interval up [5,1,3] -> [2,1,3], acting [5,1,3] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.516963959s) [5,4,0] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.129760742s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517539024s) [2,0,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.130493164s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,1], acting [3,5,1] -> [2,0,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.516766548s) [1,0,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129760742s@ mbc={}] start_peering_interval up [3,5,1] -> [1,0,2], acting [3,5,1] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.516732216s) [1,0,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.129760742s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[7.b( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=13.298766136s) [2,1,3] r=2 lpr=47 pi=[45,47)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1129.911621094s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.516236305s) [1,0,5] r=-1 lpr=47 pi=[43,47)/1 
crt=0'0 mlcod 0'0 active pruub 1127.129394531s@ mbc={}] start_peering_interval up [4,3,2] -> [1,0,5], acting [4,3,2] -> [1,0,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.16( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.519732475s) [0,2,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.132934570s@ mbc={}] start_peering_interval up [3,5,1] -> [0,2,1], acting [3,5,1] -> [0,2,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.8( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.516175270s) [1,0,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.129394531s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.17( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517113686s) [3,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.130493164s@ mbc={}] start_peering_interval up [3,5,1] -> [3,2,4], acting [3,5,1] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.16( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.519663811s) [0,2,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.132934570s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.17( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517113686s) [3,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1127.130493164s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515629768s) [5,3,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129150391s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,1], acting [4,3,2] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515586853s) [5,3,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.129150391s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.518487930s) [4,0,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.132080078s@ mbc={}] start_peering_interval up [3,5,1] -> [4,0,5], acting [3,5,1] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515732765s) [5,3,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129272461s@ mbc={}] start_peering_interval up [4,3,2] -> [5,3,1], acting [4,3,2] -> [5,3,1], acting_primary 4 -> 5, 
up_primary 4 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.518457413s) [4,0,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.132080078s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515696526s) [5,3,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.129272461s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.521084785s) [0,1,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.134887695s@ mbc={}] start_peering_interval up [3,5,1] -> [0,1,2], acting [3,5,1] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.15( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.516244888s) [4,3,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.130126953s@ mbc={}] start_peering_interval up [3,5,1] -> [4,3,2], acting [3,5,1] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.521042824s) [0,1,2] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.134887695s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515245438s) [4,0,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129028320s@ mbc={}] start_peering_interval up [4,3,2] -> [4,0,5], acting [4,3,2] -> [4,0,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.15( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.516197205s) [4,3,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.130126953s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.517495155s) [2,0,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.130493164s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515388489s) [2,1,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129394531s@ mbc={}] start_peering_interval up [3,5,1] -> [2,1,3], acting [3,5,1] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515292168s) [2,1,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 
0'0 unknown NOTIFY pruub 1127.129394531s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.514287949s) [1,5,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.128417969s@ mbc={}] start_peering_interval up [4,3,2] -> [1,5,3], acting [4,3,2] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.520545006s) [3,4,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.134643555s@ mbc={}] start_peering_interval up [3,5,1] -> [3,4,2], acting [3,5,1] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.514246941s) [1,5,3] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.128417969s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.520545006s) [3,4,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1127.134643555s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,2,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515151978s) [4,0,5] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.129028320s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.521276474s) [3,5,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.135864258s@ mbc={}] start_peering_interval up [3,5,1] -> [3,5,4], acting [3,5,1] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.521276474s) [3,5,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1127.135864258s@ mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1e( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.514621735s) [0,5,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.129394531s@ mbc={}] start_peering_interval up [3,5,1] -> [0,5,1], acting [3,5,1] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.1e( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.514590263s) [0,5,1] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.129394531s@ mbc={}] state: transitioning to Stray 
Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[4.4( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,1,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.b( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515143394s) [0,2,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active pruub 1127.130371094s@ mbc={}] start_peering_interval up [3,5,1] -> [0,2,4], acting [3,5,1] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[4.b( empty local-lis/les=43/44 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=10.515087128s) [0,2,4] r=-1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1127.130371094s@ mbc={}] state: transitioning to Stray Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,2,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[4.7( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,1,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,2,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[4.b( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[4.16( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,2,1] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[4.12( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,1,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[4.1e( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,5,1] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:45 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts Dec 5 03:10:45 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [2,1,3] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: 
transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [2,3,1] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.1( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,5,3] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.10( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [2,4,0] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.11( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [2,4,0] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[4.a( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [2,0,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,3,4] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[4.e( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [2,4,0] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.1c( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.1c( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [2,4,0] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,3,4] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.19( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,3,1] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,3,4] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.1b( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [2,0,4] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.9( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [5,4,0] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 47 pg[6.1e( empty local-lis/les=0/0 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,5,3] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 
unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.2( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [5,0,1] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[4.5( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [5,1,0] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[4.9( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,0,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.8( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [1,0,5] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.13( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,0,5] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[4.1( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,2,0] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[4.14( empty local-lis/les=0/0 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,0,5] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.e( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,0,2] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.b( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,0,5] r=1 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 47 pg[5.d( empty local-lis/les=0/0 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [4,5,0] r=2 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[6.6( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[6.b( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,2,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[6.4( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,2,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[4.7( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,1,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[5.5( empty 
local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,2,1] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[5.17( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,1,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[5.3( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,5,1] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[5.19( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,5,1] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[6.1d( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[6.1f( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[5.1d( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[5.6( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,5,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[6.c( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,2,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[6.f( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[6.10( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,1,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[4.1e( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,5,1] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[4.f( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,4,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[5.a( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,1,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active 
mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[4.16( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,2,1] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[4.12( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,1,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[6.9( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,1,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[4.4( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,1,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[6.14( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[4.10( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,4,2] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[4.11( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,5,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[5.14( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,4,5] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[6.13( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[5.1e( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[6.11( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[4.17( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[4.b( empty local-lis/les=47/48 n=0 ec=43/26 lis/c=43/43 les/c/f=44/44/0 sis=47) [0,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 
pg[6.16( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,5,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[31386]: osd.0 pg_epoch: 48 pg[6.18( empty local-lis/les=47/48 n=0 ec=45/34 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,2,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:46 localhost ceph-osd[32336]: osd.3 pg_epoch: 48 pg[5.c( empty local-lis/les=47/48 n=0 ec=43/28 lis/c=43/43 les/c/f=44/44/0 sis=47) [3,2,4] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:47 localhost ceph-osd[32336]: osd.3 pg_epoch: 49 pg[7.a( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.210056305s) [3,1,5] r=0 lpr=49 pi=[45,49)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1129.911010742s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:47 localhost ceph-osd[32336]: osd.3 pg_epoch: 49 pg[7.a( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.210056305s) [3,1,5] r=0 lpr=49 pi=[45,49)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown pruub 1129.911010742s@ mbc={}] state: transitioning to Primary Dec 5 03:10:47 localhost ceph-osd[32336]: osd.3 pg_epoch: 49 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.209760666s) [3,1,5] r=0 lpr=49 pi=[45,49)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1129.911010742s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:47 localhost ceph-osd[32336]: osd.3 pg_epoch: 49 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.209760666s) [3,1,5] r=0 lpr=49 pi=[45,49)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown pruub 1129.911010742s@ mbc={}] state: transitioning to Primary Dec 5 03:10:47 localhost ceph-osd[32336]: osd.3 pg_epoch: 49 pg[7.2( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.210544586s) [3,1,5] r=0 lpr=49 pi=[45,49)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1129.911987305s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:47 localhost ceph-osd[32336]: osd.3 pg_epoch: 49 pg[7.2( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.210544586s) [3,1,5] r=0 lpr=49 pi=[45,49)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown pruub 1129.911987305s@ mbc={}] state: transitioning to Primary Dec 5 03:10:47 localhost ceph-osd[32336]: osd.3 pg_epoch: 49 pg[7.6( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.210897446s) [3,1,5] r=0 lpr=49 pi=[45,49)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1129.912353516s@ mbc={}] start_peering_interval up [5,1,3] -> [3,1,5], acting [5,1,3] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> 0, features acting 4540138322906710015 
upacting 4540138322906710015 Dec 5 03:10:47 localhost ceph-osd[32336]: osd.3 pg_epoch: 49 pg[7.6( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=11.210897446s) [3,1,5] r=0 lpr=49 pi=[45,49)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown pruub 1129.912353516s@ mbc={}] state: transitioning to Primary Dec 5 03:10:48 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.6 scrub starts Dec 5 03:10:48 localhost ceph-osd[32336]: osd.3 pg_epoch: 50 pg[7.a( v 38'39 (0'0,38'39] local-lis/les=49/50 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49) [3,1,5] r=0 lpr=49 pi=[45,49)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:48 localhost ceph-osd[32336]: osd.3 pg_epoch: 50 pg[7.2( v 38'39 (0'0,38'39] local-lis/les=49/50 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49) [3,1,5] r=0 lpr=49 pi=[45,49)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:48 localhost ceph-osd[32336]: osd.3 pg_epoch: 50 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=49/50 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49) [3,1,5] r=0 lpr=49 pi=[45,49)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:48 localhost ceph-osd[32336]: osd.3 pg_epoch: 50 pg[7.6( v 38'39 (0'0,38'39] local-lis/les=49/50 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=49) [3,1,5] r=0 lpr=49 pi=[45,49)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 5 03:10:50 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.12 scrub starts Dec 5 03:10:55 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.17 scrub starts Dec 5 03:10:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:10:55 localhost podman[55233]: 2025-12-05 08:10:55.191818717 +0000 UTC m=+0.080199981 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, release=1761123044, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 5 03:10:55 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.17 scrub ok Dec 5 03:10:55 localhost ceph-osd[32336]: osd.3 pg_epoch: 51 pg[7.b( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=15.305279732s) [3,2,4] r=0 lpr=51 pi=[47,51)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1141.643432617s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:55 localhost ceph-osd[32336]: osd.3 pg_epoch: 51 pg[7.b( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=15.305279732s) [3,2,4] r=0 lpr=51 pi=[47,51)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown pruub 1141.643432617s@ mbc={}] state: transitioning to Primary Dec 5 03:10:55 localhost ceph-osd[32336]: osd.3 pg_epoch: 51 pg[7.f( v 
38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.305028915s) [3,2,4] r=0 lpr=51 pi=[47,51)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1141.643676758s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:55 localhost ceph-osd[32336]: osd.3 pg_epoch: 51 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.305028915s) [3,2,4] r=0 lpr=51 pi=[47,51)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown pruub 1141.643676758s@ mbc={}] state: transitioning to Primary Dec 5 03:10:55 localhost ceph-osd[32336]: osd.3 pg_epoch: 51 pg[7.3( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.309253693s) [3,2,4] r=0 lpr=51 pi=[47,51)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1141.648437500s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:55 localhost ceph-osd[32336]: osd.3 pg_epoch: 51 pg[7.3( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.309253693s) [3,2,4] r=0 lpr=51 pi=[47,51)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown pruub 1141.648437500s@ mbc={}] state: transitioning to Primary Dec 5 03:10:55 localhost ceph-osd[32336]: osd.3 pg_epoch: 51 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.304409981s) [3,2,4] r=0 lpr=51 pi=[47,51)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1141.643676758s@ mbc={}] start_peering_interval up [2,1,3] -> [3,2,4], acting [2,1,3] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:55 localhost ceph-osd[32336]: osd.3 pg_epoch: 51 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.304409981s) [3,2,4] r=0 lpr=51 pi=[47,51)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown pruub 1141.643676758s@ mbc={}] state: transitioning to Primary Dec 5 03:10:55 localhost podman[55233]: 2025-12-05 08:10:55.413726278 +0000 UTC m=+0.302107522 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, version=17.1.12, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4) Dec 5 03:10:55 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:10:56 localhost ceph-osd[32336]: osd.3 pg_epoch: 52 pg[7.b( v 38'39 (0'0,38'39] local-lis/les=51/52 n=1 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [3,2,4] r=0 lpr=51 pi=[47,51)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Dec 5 03:10:56 localhost ceph-osd[32336]: osd.3 pg_epoch: 52 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=51/52 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51) [3,2,4] r=0 lpr=51 pi=[47,51)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Dec 5 03:10:56 localhost ceph-osd[32336]: osd.3 pg_epoch: 52 pg[7.3( v 38'39 (0'0,38'39] local-lis/les=51/52 n=2 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51) [3,2,4] r=0 lpr=51 pi=[47,51)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=2}}] state: react AllReplicasActivated Activating complete Dec 5 03:10:56 localhost ceph-osd[32336]: osd.3 pg_epoch: 52 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=51/52 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=51) [3,2,4] r=0 lpr=51 pi=[47,51)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=3}}] state: react AllReplicasActivated Activating complete Dec 5 03:10:57 localhost ceph-osd[31386]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53) [0,5,4] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:57 localhost ceph-osd[31386]: osd.0 pg_epoch: 53 pg[7.c( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53) [0,5,4] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:10:57 localhost ceph-osd[32336]: osd.3 pg_epoch: 53 pg[7.4( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.494533539s) [0,5,4] r=-1 lpr=53 pi=[45,53)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1137.912231445s@ mbc={}] start_peering_interval up [5,1,3] -> [0,5,4], acting [5,1,3] -> [0,5,4], acting_primary 5 -> 
0, up_primary 5 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:57 localhost ceph-osd[32336]: osd.3 pg_epoch: 53 pg[7.c( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.495122910s) [0,5,4] r=-1 lpr=53 pi=[45,53)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1137.912841797s@ mbc={}] start_peering_interval up [5,1,3] -> [0,5,4], acting [5,1,3] -> [0,5,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:57 localhost ceph-osd[32336]: osd.3 pg_epoch: 53 pg[7.4( v 38'39 (0'0,38'39] local-lis/les=45/46 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.494456291s) [0,5,4] r=-1 lpr=53 pi=[45,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1137.912231445s@ mbc={}] state: transitioning to Stray Dec 5 03:10:57 localhost ceph-osd[32336]: osd.3 pg_epoch: 53 pg[7.c( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=9.495064735s) [0,5,4] r=-1 lpr=53 pi=[45,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1137.912841797s@ mbc={}] state: transitioning to Stray Dec 5 03:10:58 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.1f scrub starts Dec 5 03:10:58 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.1f scrub ok Dec 5 03:10:58 localhost ceph-osd[31386]: osd.0 pg_epoch: 54 pg[7.4( v 38'39 lc 38'11 (0'0,38'39] local-lis/les=53/54 n=2 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53) [0,5,4] r=0 lpr=53 pi=[45,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(1+2)=4}}] state: react AllReplicasActivated Activating complete Dec 5 03:10:58 localhost ceph-osd[31386]: osd.0 pg_epoch: 54 pg[7.c( v 38'39 lc 38'12 (0'0,38'39] local-lis/les=53/54 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=53) [0,5,4] r=0 lpr=53 pi=[45,53)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Dec 5 03:10:59 localhost ceph-osd[32336]: osd.3 pg_epoch: 55 pg[7.d( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=11.135243416s) [4,0,2] r=-1 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1141.648315430s@ mbc={}] start_peering_interval up [2,1,3] -> [4,0,2], acting [2,1,3] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:59 localhost ceph-osd[32336]: osd.3 pg_epoch: 55 pg[7.d( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=11.134923935s) [4,0,2] r=-1 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1141.648315430s@ mbc={}] state: transitioning to Stray Dec 5 03:10:59 localhost ceph-osd[32336]: osd.3 pg_epoch: 55 pg[7.5( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=11.130157471s) [4,0,2] r=-1 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1141.644042969s@ mbc={}] start_peering_interval up [2,1,3] -> [4,0,2], acting [2,1,3] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:10:59 localhost ceph-osd[32336]: osd.3 pg_epoch: 55 pg[7.5( v 38'39 (0'0,38'39] local-lis/les=47/48 n=2 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=11.130058289s) [4,0,2] r=-1 lpr=55 pi=[47,55)/1 crt=38'39 lcod 0'0 mlcod 
0'0 unknown NOTIFY pruub 1141.644042969s@ mbc={}] state: transitioning to Stray Dec 5 03:11:00 localhost ceph-osd[31386]: osd.0 pg_epoch: 55 pg[7.5( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55) [4,0,2] r=1 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:11:00 localhost ceph-osd[31386]: osd.0 pg_epoch: 55 pg[7.d( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=47/47 les/c/f=48/50/0 sis=55) [4,0,2] r=1 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 5 03:11:01 localhost ceph-osd[32336]: osd.3 pg_epoch: 57 pg[7.6( v 38'39 (0'0,38'39] local-lis/les=49/50 n=2 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=11.301218987s) [0,2,4] r=-1 lpr=57 pi=[49,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1143.865234375s@ mbc={}] start_peering_interval up [3,1,5] -> [0,2,4], acting [3,1,5] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:11:01 localhost ceph-osd[32336]: osd.3 pg_epoch: 57 pg[7.6( v 38'39 (0'0,38'39] local-lis/les=49/50 n=2 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=11.301086426s) [0,2,4] r=-1 lpr=57 pi=[49,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1143.865234375s@ mbc={}] state: transitioning to Stray Dec 5 03:11:01 localhost ceph-osd[32336]: osd.3 pg_epoch: 57 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=49/50 n=1 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=11.300191879s) [0,2,4] r=-1 lpr=57 pi=[49,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1143.864501953s@ mbc={}] start_peering_interval up [3,1,5] -> [0,2,4], acting [3,1,5] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 5 03:11:01 localhost ceph-osd[32336]: osd.3 pg_epoch: 57 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=49/50 n=1 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=11.300085068s) [0,2,4] r=-1 lpr=57 pi=[49,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1143.864501953s@ mbc={}] state: transitioning to Stray Dec 5 03:11:01 localhost ceph-osd[31386]: osd.0 pg_epoch: 57 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,2,4] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:11:01 localhost ceph-osd[31386]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,2,4] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 5 03:11:01 localhost python3[55277]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:11:03 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 2.1f scrub starts Dec 5 03:11:03 localhost ceph-osd[31386]: osd.0 pg_epoch: 58 pg[7.e( v 38'39 lc 38'13 (0'0,38'39] local-lis/les=57/58 n=1 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,2,4] r=0 lpr=57 pi=[49,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+3)=1}}] state: react AllReplicasActivated Activating complete Dec 5 03:11:03 localhost ceph-osd[31386]: osd.0 pg_epoch: 58 pg[7.6( v 38'39 lc 0'0 
(0'0,38'39] local-lis/les=57/58 n=2 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,2,4] r=0 lpr=57 pi=[49,57)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+3)=1}}] state: react AllReplicasActivated Activating complete
Dec 5 03:11:03 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 5 03:11:03 localhost python3[55293]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:03 localhost ceph-osd[32336]: osd.3 pg_epoch: 59 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=51/52 n=1 ec=45/36 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.795305252s) [2,1,3] r=2 lpr=59 pi=[51,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1143.400878906s@ mbc={255={}}] start_peering_interval up [3,2,4] -> [2,1,3], acting [3,2,4] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 5 03:11:03 localhost ceph-osd[32336]: osd.3 pg_epoch: 59 pg[7.7( v 38'39 (0'0,38'39] local-lis/les=51/52 n=1 ec=45/36 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.795213699s) [2,1,3] r=2 lpr=59 pi=[51,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1143.400878906s@ mbc={}] state: transitioning to Stray
Dec 5 03:11:03 localhost ceph-osd[32336]: osd.3 pg_epoch: 59 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=51/52 n=1 ec=45/36 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.794404030s) [2,1,3] r=2 lpr=59 pi=[51,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1143.400634766s@ mbc={255={}}] start_peering_interval up [3,2,4] -> [2,1,3], acting [3,2,4] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 5 03:11:03 localhost ceph-osd[32336]: osd.3 pg_epoch: 59 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=51/52 n=1 ec=45/36 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=8.794178963s) [2,1,3] r=2 lpr=59 pi=[51,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1143.400634766s@ mbc={}] state: transitioning to Stray
Dec 5 03:11:03 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 5 03:11:03 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 5 03:11:04 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 5 03:11:04 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 5 03:11:05 localhost python3[55309]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:05 localhost ceph-osd[32336]: osd.3 pg_epoch: 61 pg[7.8( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=9.176913261s) [3,2,1] r=0 lpr=61 pi=[45,61)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1145.912109375s@ mbc={}] start_peering_interval up [5,1,3] -> [3,2,1], acting [5,1,3] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 5 03:11:05 localhost ceph-osd[32336]: osd.3 pg_epoch: 61 pg[7.8( v 38'39 (0'0,38'39] local-lis/les=45/46 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=9.176913261s) [3,2,1] r=0 lpr=61 pi=[45,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown pruub 1145.912109375s@ mbc={}] state: transitioning to Primary
Dec 5 03:11:06 localhost ceph-osd[31386]: osd.0 pg_epoch: 62 pg[7.9( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=62) [0,4,2] r=0 lpr=62 pi=[47,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 5 03:11:06 localhost ceph-osd[32336]: osd.3 pg_epoch: 62 pg[7.9( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=62 pruub=11.906117439s) [0,4,2] r=-1 lpr=62 pi=[47,62)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1149.644165039s@ mbc={}] start_peering_interval up [2,1,3] -> [0,4,2], acting [2,1,3] -> [0,4,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 5 03:11:06 localhost ceph-osd[32336]: osd.3 pg_epoch: 62 pg[7.9( v 38'39 (0'0,38'39] local-lis/les=47/48 n=1 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=62 pruub=11.906057358s) [0,4,2] r=-1 lpr=62 pi=[47,62)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1149.644165039s@ mbc={}] state: transitioning to Stray
Dec 5 03:11:06 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 5 03:11:06 localhost ceph-osd[32336]: osd.3 pg_epoch: 62 pg[7.8( v 38'39 (0'0,38'39] local-lis/les=61/62 n=1 ec=45/36 lis/c=45/45 les/c/f=46/46/0 sis=61) [3,2,1] r=0 lpr=61 pi=[45,61)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 5 03:11:06 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 5 03:11:07 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 5 03:11:07 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 5 03:11:07 localhost python3[55357]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:11:08 localhost ceph-osd[31386]: osd.0 pg_epoch: 63 pg[7.9( v 38'39 (0'0,38'39] local-lis/les=62/63 n=1 ec=45/36 lis/c=47/47 les/c/f=48/48/0 sis=62) [0,4,2] r=0 lpr=62 pi=[47,62)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 5 03:11:08 localhost python3[55400]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922267.4098492-92232-189191007386003/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=56b574bbcbb2378bafed25b3f279b3c007056bbe backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:09 localhost ceph-osd[32336]: osd.3 pg_epoch: 64 pg[7.a( v 38'39 (0'0,38'39] local-lis/les=49/50 n=1 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=11.554553986s) [4,0,5] r=-1 lpr=64 pi=[49,64)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1151.864746094s@ mbc={}] start_peering_interval up [3,1,5] -> [4,0,5], acting [3,1,5] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 5 03:11:09 localhost ceph-osd[32336]: osd.3 pg_epoch: 64 pg[7.a( v 38'39 (0'0,38'39] local-lis/les=49/50 n=1 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=11.554499626s) [4,0,5] r=-1 lpr=64 pi=[49,64)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1151.864746094s@ mbc={}] state: transitioning to Stray
Dec 5 03:11:10 localhost ceph-osd[31386]: osd.0 pg_epoch: 64 pg[7.a( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=49/49 les/c/f=50/50/0 sis=64) [4,0,5] r=1 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 5 03:11:10 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 5 03:11:10 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 5 03:11:11 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Dec 5 03:11:11 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Dec 5 03:11:12 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec 5 03:11:12 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec 5 03:11:13 localhost python3[55462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:11:13 localhost ceph-osd[31386]: osd.0 pg_epoch: 67 pg[7.c( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=45/36 lis/c=53/53 les/c/f=54/54/0 sis=67 pruub=9.111145020s) [2,3,4] r=-1 lpr=67 pi=[53,67)/1 crt=38'39 mlcod 0'0 active pruub 1157.231201172s@ mbc={255={}}] start_peering_interval up [0,5,4] -> [2,3,4], acting [0,5,4] -> [2,3,4], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 5 03:11:13 localhost ceph-osd[31386]: osd.0 pg_epoch: 67 pg[7.c( v 38'39 (0'0,38'39] local-lis/les=53/54 n=1 ec=45/36 lis/c=53/53 les/c/f=54/54/0 sis=67 pruub=9.111074448s) [2,3,4] r=-1 lpr=67 pi=[53,67)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 1157.231201172s@ mbc={}] state: transitioning to Stray
Dec 5 03:11:13 localhost python3[55505]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922272.8775566-92232-167586621478823/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=81b1e70c98aa594608eafceac10d1e7c5fcc2dc9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:13 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 5 03:11:13 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 5 03:11:14 localhost ceph-osd[32336]: osd.3 pg_epoch: 67 pg[7.c( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=53/53 les/c/f=54/54/0 sis=67) [2,3,4] r=1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 5 03:11:15 localhost ceph-osd[31386]: osd.0 pg_epoch: 69 pg[7.d( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=45/36 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=9.474135399s) [2,3,1] r=-1 lpr=69 pi=[55,69)/1 luod=0'0 crt=38'39 mlcod 0'0 active pruub 1159.639038086s@ mbc={}] start_peering_interval up [4,0,2] -> [2,3,1], acting [4,0,2] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 5 03:11:15 localhost ceph-osd[31386]: osd.0 pg_epoch: 69 pg[7.d( v 38'39 (0'0,38'39] local-lis/les=55/56 n=1 ec=45/36 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=9.474056244s) [2,3,1] r=-1 lpr=69 pi=[55,69)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 1159.639038086s@ mbc={}] state: transitioning to Stray
Dec 5 03:11:16 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 5 03:11:16 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 5 03:11:16 localhost ceph-osd[32336]: osd.3 pg_epoch: 69 pg[7.d( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=55/55 les/c/f=56/56/0 sis=69) [2,3,1] r=1 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 5 03:11:16 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 5 03:11:16 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 5 03:11:17 localhost ceph-osd[31386]: osd.0 pg_epoch: 71 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=45/36 lis/c=57/57 les/c/f=58/58/0 sis=71 pruub=9.580117226s) [3,1,5] r=-1 lpr=71 pi=[57,71)/1 crt=38'39 mlcod 38'39 active pruub 1161.763427734s@ mbc={255={}}] start_peering_interval up [0,2,4] -> [3,1,5], acting [0,2,4] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 5 03:11:17 localhost ceph-osd[31386]: osd.0 pg_epoch: 71 pg[7.e( v 38'39 (0'0,38'39] local-lis/les=57/58 n=1 ec=45/36 lis/c=57/57 les/c/f=58/58/0 sis=71 pruub=9.580049515s) [3,1,5] r=-1 lpr=71 pi=[57,71)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 1161.763427734s@ mbc={}] state: transitioning to Stray
Dec 5 03:11:17 localhost ceph-osd[32336]: osd.3 pg_epoch: 71 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=57/57 les/c/f=58/58/0 sis=71) [3,1,5] r=0 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 5 03:11:18 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Dec 5 03:11:18 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Dec 5 03:11:18 localhost python3[55567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:11:18 localhost ceph-osd[32336]: osd.3 pg_epoch: 72 pg[7.e( v 38'39 lc 38'13 (0'0,38'39] local-lis/les=71/72 n=1 ec=45/36 lis/c=57/57 les/c/f=58/58/0 sis=71) [3,1,5] r=0 lpr=71 pi=[57,71)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+3)=1}}] state: react AllReplicasActivated Activating complete
Dec 5 03:11:18 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 5 03:11:18 localhost python3[55610]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922278.2619936-92232-121566297401831/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=31a82f9bde3ef47ca8b17ff1e2177aab5748b36a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:19 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 5 03:11:19 localhost ceph-osd[31386]: osd.0 pg_epoch: 73 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/36 lis/c=59/59 les/c/f=60/60/0 sis=73) [0,4,5] r=0 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 5 03:11:19 localhost ceph-osd[32336]: osd.3 pg_epoch: 73 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=45/36 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=9.649514198s) [0,4,5] r=-1 lpr=73 pi=[59,73)/1 luod=0'0 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 1160.168457031s@ mbc={}] start_peering_interval up [2,1,3] -> [0,4,5], acting [2,1,3] -> [0,4,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 5 03:11:19 localhost ceph-osd[32336]: osd.3 pg_epoch: 73 pg[7.f( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=45/36 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=9.649347305s) [0,4,5] r=-1 lpr=73 pi=[59,73)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1160.168457031s@ mbc={}] state: transitioning to Stray
Dec 5 03:11:20 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 5 03:11:20 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 5 03:11:20 localhost ceph-osd[31386]: osd.0 pg_epoch: 74 pg[7.f( v 38'39 lc 38'1 (0'0,38'39] local-lis/les=73/74 n=1 ec=45/36 lis/c=59/59 les/c/f=60/60/0 sis=73) [0,4,5] r=0 lpr=73 pi=[59,73)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+3)=3}}] state: react AllReplicasActivated Activating complete
Dec 5 03:11:21 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 5 03:11:21 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 5 03:11:24 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 5 03:11:24 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 5 03:11:25 localhost python3[55672]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:11:25 localhost python3[55717]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922284.7001348-92804-41606688595687/source _original_basename=tmpmn7d6bno follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:11:26 localhost systemd[1]: tmp-crun.gJElon.mount: Deactivated successfully.
Dec 5 03:11:26 localhost podman[55732]: 2025-12-05 08:11:26.196308248 +0000 UTC m=+0.089071515 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z)
Dec 5 03:11:26 localhost podman[55732]: 2025-12-05 08:11:26.381713437 +0000 UTC m=+0.274476694 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z)
Dec 5 03:11:26 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:11:26 localhost python3[55808]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:11:26 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 5 03:11:26 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 5 03:11:27 localhost python3[55851]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922286.3275573-92887-7029784339918/source _original_basename=tmp126f_w8r follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:27 localhost python3[55881]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None
Dec 5 03:11:27 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 5 03:11:27 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 5 03:11:27 localhost python3[55899]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 03:11:27 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 5 03:11:28 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Dec 5 03:11:28 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Dec 5 03:11:29 localhost ansible-async_wrapper.py[56071]: Invoked with 485333110722 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922288.9552515-93002-71929628015596/AnsiballZ_command.py _
Dec 5 03:11:29 localhost ansible-async_wrapper.py[56074]: Starting module and watcher
Dec 5 03:11:29 localhost ansible-async_wrapper.py[56074]: Start watching 56075 (3600)
Dec 5 03:11:29 localhost ansible-async_wrapper.py[56075]: Start module (56075)
Dec 5 03:11:29 localhost ansible-async_wrapper.py[56071]: Return async_wrapper task started.
Dec 5 03:11:29 localhost python3[56095]: ansible-ansible.legacy.async_status Invoked with jid=485333110722.56071 mode=status _async_dir=/tmp/.ansible_async
Dec 5 03:11:32 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 5 03:11:32 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 5 03:11:32 localhost puppet-user[56094]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 5 03:11:32 localhost puppet-user[56094]: (file: /etc/puppet/hiera.yaml)
Dec 5 03:11:32 localhost puppet-user[56094]: Warning: Undefined variable '::deploy_config_name';
Dec 5 03:11:32 localhost puppet-user[56094]: (file & line not available)
Dec 5 03:11:32 localhost puppet-user[56094]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 5 03:11:32 localhost puppet-user[56094]: (file & line not available)
Dec 5 03:11:32 localhost puppet-user[56094]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 5 03:11:33 localhost puppet-user[56094]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 5 03:11:33 localhost puppet-user[56094]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.15 seconds
Dec 5 03:11:33 localhost puppet-user[56094]: Notice: Applied catalog in 0.63 seconds
Dec 5 03:11:33 localhost puppet-user[56094]: Application:
Dec 5 03:11:33 localhost puppet-user[56094]: Initial environment: production
Dec 5 03:11:33 localhost puppet-user[56094]: Converged environment: production
Dec 5 03:11:33 localhost puppet-user[56094]: Run mode: user
Dec 5 03:11:33 localhost puppet-user[56094]: Changes:
Dec 5 03:11:33 localhost puppet-user[56094]: Events:
Dec 5 03:11:33 localhost puppet-user[56094]: Resources:
Dec 5 03:11:33 localhost puppet-user[56094]: Total: 10
Dec 5 03:11:33 localhost puppet-user[56094]: Time:
Dec 5 03:11:33 localhost puppet-user[56094]: Schedule: 0.00
Dec 5 03:11:33 localhost puppet-user[56094]: File: 0.00
Dec 5 03:11:33 localhost puppet-user[56094]: Exec: 0.01
Dec 5 03:11:33 localhost puppet-user[56094]: Augeas: 0.01
Dec 5 03:11:33 localhost puppet-user[56094]: Transaction evaluation: 0.03
Dec 5 03:11:33 localhost puppet-user[56094]: Config retrieval: 0.18
Dec 5 03:11:33 localhost puppet-user[56094]: Catalog application: 0.63
Dec 5 03:11:33 localhost puppet-user[56094]: Last run: 1764922293
Dec 5 03:11:33 localhost puppet-user[56094]: Filebucket: 0.00
Dec 5 03:11:33 localhost puppet-user[56094]: Total: 0.63
Dec 5 03:11:33 localhost puppet-user[56094]: Version:
Dec 5 03:11:33 localhost puppet-user[56094]: Config: 1764922292
Dec 5 03:11:33 localhost puppet-user[56094]: Puppet: 7.10.0
Dec 5 03:11:33 localhost ansible-async_wrapper.py[56075]: Module complete (56075)
Dec 5 03:11:34 localhost ansible-async_wrapper.py[56074]: Done in kid B.
Dec 5 03:11:34 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.11 deep-scrub starts
Dec 5 03:11:34 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.11 deep-scrub ok
Dec 5 03:11:35 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 5 03:11:35 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 5 03:11:35 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 5 03:11:36 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 5 03:11:36 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 5 03:11:36 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 5 03:11:36 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 5 03:11:37 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 5 03:11:37 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 5 03:11:37 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 5 03:11:38 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec 5 03:11:38 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec 5 03:11:40 localhost python3[56297]: ansible-ansible.legacy.async_status Invoked with jid=485333110722.56071 mode=status _async_dir=/tmp/.ansible_async
Dec 5 03:11:40 localhost python3[56313]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 5 03:11:41 localhost python3[56329]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 03:11:41 localhost python3[56379]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:11:41 localhost python3[56397]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpvub17ukp recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 5 03:11:41 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec 5 03:11:41 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec 5 03:11:42 localhost python3[56427]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:43 localhost python3[56530]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 5 03:11:44 localhost python3[56549]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:44 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Dec 5 03:11:44 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Dec 5 03:11:45 localhost python3[56581]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 03:11:45 localhost sshd[56584]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 03:11:45 localhost python3[56633]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:11:45 localhost python3[56651]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:46 localhost python3[56713]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:11:46 localhost python3[56731]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:47 localhost python3[56793]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:11:47 localhost python3[56811]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:48 localhost python3[56873]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:11:48 localhost python3[56891]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:48 localhost python3[56921]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 03:11:48 localhost systemd[1]: Reloading.
Dec 5 03:11:48 localhost systemd-sysv-generator[56948]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:11:49 localhost systemd-rc-local-generator[56944]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:11:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:11:49 localhost python3[57007]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:11:49 localhost python3[57025]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:50 localhost python3[57087]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:11:50 localhost python3[57105]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:11:51 localhost python3[57135]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 03:11:51 localhost systemd[1]: Reloading.
Dec 5 03:11:51 localhost systemd-rc-local-generator[57157]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:11:51 localhost systemd-sysv-generator[57160]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:11:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:11:51 localhost systemd[1]: Starting Create netns directory...
Dec 5 03:11:51 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 5 03:11:51 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 5 03:11:51 localhost systemd[1]: Finished Create netns directory.
Dec 5 03:11:52 localhost python3[57192]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 5 03:11:53 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 5 03:11:53 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 5 03:11:54 localhost python3[57250]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 5 03:11:54 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 5 03:11:54 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 5 03:11:54 localhost podman[57320]: 2025-12-05 08:11:54.992187349 +0000 UTC m=+0.066084969 container create 16e2150aff3984d4ff8841519cce8259db4ca5b2bf326892cacb54546cc5a938 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.4, container_name=nova_compute_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Dec 5 03:11:55 localhost podman[57321]: 2025-12-05 08:11:55.0117712 +0000 UTC m=+0.075554190 container create 328542083ffde593dd2269a68c5094b7cd6f9b53e3e6280301444d0e78d519a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=nova_virtqemud_init_logs, architecture=x86_64, config_id=tripleo_step2, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, name=rhosp17/openstack-nova-libvirt)
Dec 5 03:11:55 localhost systemd[1]: Started libpod-conmon-16e2150aff3984d4ff8841519cce8259db4ca5b2bf326892cacb54546cc5a938.scope.
Dec 5 03:11:55 localhost systemd[1]: Started libpod-conmon-328542083ffde593dd2269a68c5094b7cd6f9b53e3e6280301444d0e78d519a5.scope.
Dec 5 03:11:55 localhost systemd[1]: Started libcrun container.
Dec 5 03:11:55 localhost podman[57320]: 2025-12-05 08:11:54.955831413 +0000 UTC m=+0.029729083 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 5 03:11:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf56fbbce213fdffb99cfd7c4b14a7fe4a3462a569e4b67279c5a13f896f3b08/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Dec 5 03:11:55 localhost systemd[1]: Started libcrun container.
Dec 5 03:11:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70e2d626f72923f97a05d5a086369b52c966ad776df02819c13c4b803d6acdf/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Dec 5 03:11:55 localhost podman[57320]: 2025-12-05 08:11:55.062220118 +0000 UTC m=+0.136117748 container init 16e2150aff3984d4ff8841519cce8259db4ca5b2bf326892cacb54546cc5a938 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step2, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute_init_log, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Dec 5 03:11:55 localhost podman[57321]: 2025-12-05 08:11:54.966610334 +0000 UTC m=+0.030393344 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 5 03:11:55 localhost podman[57321]: 2025-12-05 08:11:55.068279634 +0000 UTC m=+0.132062644 container init 328542083ffde593dd2269a68c5094b7cd6f9b53e3e6280301444d0e78d519a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, container_name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container)
Dec 5 03:11:55 localhost podman[57321]: 2025-12-05 08:11:55.073925527 +0000 UTC m=+0.137708517 container start 328542083ffde593dd2269a68c5094b7cd6f9b53e3e6280301444d0e78d519a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step2, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, container_name=nova_virtqemud_init_logs, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z)
Dec 5 03:11:55 localhost python3[57250]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1764921170 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm
Dec 5 03:11:55 localhost systemd[1]: libpod-16e2150aff3984d4ff8841519cce8259db4ca5b2bf326892cacb54546cc5a938.scope: Deactivated successfully.
Dec 5 03:11:55 localhost systemd[1]: libpod-328542083ffde593dd2269a68c5094b7cd6f9b53e3e6280301444d0e78d519a5.scope: Deactivated successfully.
Dec 5 03:11:55 localhost podman[57320]: 2025-12-05 08:11:55.17371704 +0000 UTC m=+0.247614670 container start 16e2150aff3984d4ff8841519cce8259db4ca5b2bf326892cacb54546cc5a938 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, container_name=nova_compute_init_log, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044)
Dec 5 03:11:55 localhost python3[57250]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1764921170 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova
Dec 5 03:11:55 localhost podman[57360]: 2025-12-05 08:11:55.183318364 +0000 UTC m=+0.090998483 container died 16e2150aff3984d4ff8841519cce8259db4ca5b2bf326892cacb54546cc5a938 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, name=rhosp17/openstack-nova-compute, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, container_name=nova_compute_init_log, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step2, batch=17.1_20251118.1, vcs-type=git, release=1761123044, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 5 03:11:55 localhost podman[57360]: 2025-12-05 08:11:55.20467268 +0000 UTC m=+0.112352779 container cleanup 16e2150aff3984d4ff8841519cce8259db4ca5b2bf326892cacb54546cc5a938 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step2, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=nova_compute_init_log, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 5 03:11:55 localhost systemd[1]: libpod-conmon-16e2150aff3984d4ff8841519cce8259db4ca5b2bf326892cacb54546cc5a938.scope: Deactivated successfully.
Dec 5 03:11:55 localhost podman[57359]: 2025-12-05 08:11:55.227602293 +0000 UTC m=+0.140820082 container died 328542083ffde593dd2269a68c5094b7cd6f9b53e3e6280301444d0e78d519a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:35:22Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=nova_virtqemud_init_logs, io.buildah.version=1.41.4, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step2)
Dec 5 03:11:55 localhost podman[57361]: 2025-12-05 08:11:55.354287861 +0000 UTC m=+0.261104884 container cleanup 328542083ffde593dd2269a68c5094b7cd6f9b53e3e6280301444d0e78d519a5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step2, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtqemud_init_logs, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 5 03:11:55 localhost systemd[1]: libpod-conmon-328542083ffde593dd2269a68c5094b7cd6f9b53e3e6280301444d0e78d519a5.scope: Deactivated successfully.
Dec 5 03:11:55 localhost podman[57506]: 2025-12-05 08:11:55.561463238 +0000 UTC m=+0.076426926 container create b35c868526b36f838f7a090b3924c8dc990c8353e0f37eec073022746f7d785e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, container_name=create_virtlogd_wrapper, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 03:11:55 localhost podman[57507]: 2025-12-05 08:11:55.587588631 +0000 UTC m=+0.092945824 container create 60d1c21dfcd45fffdb6759d865d86b864d1012adb4afbc3fceedfd362d28070a (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step2, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=create_haproxy_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true)
Dec 5 03:11:55 localhost systemd[1]: Started libpod-conmon-b35c868526b36f838f7a090b3924c8dc990c8353e0f37eec073022746f7d785e.scope.
Dec 5 03:11:55 localhost systemd[1]: Started libcrun container.
Dec 5 03:11:55 localhost systemd[1]: Started libpod-conmon-60d1c21dfcd45fffdb6759d865d86b864d1012adb4afbc3fceedfd362d28070a.scope.
Dec 5 03:11:55 localhost podman[57506]: 2025-12-05 08:11:55.520798621 +0000 UTC m=+0.035762299 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:11:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b95431b1478a17506a6e7089f072573540986d5218a9d3dfea91f5817bd1ba9b/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff) Dec 5 03:11:55 localhost podman[57507]: 2025-12-05 08:11:55.533334945 +0000 UTC m=+0.038692188 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 5 03:11:55 localhost podman[57506]: 2025-12-05 08:11:55.633595632 +0000 UTC m=+0.148559310 container init b35c868526b36f838f7a090b3924c8dc990c8353e0f37eec073022746f7d785e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step2, container_name=create_virtlogd_wrapper, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z) Dec 5 03:11:55 localhost podman[57506]: 2025-12-05 08:11:55.639758331 +0000 UTC m=+0.154721979 container start b35c868526b36f838f7a090b3924c8dc990c8353e0f37eec073022746f7d785e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, 
name=rhosp17/openstack-nova-libvirt, tcib_managed=true, config_id=tripleo_step2, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=create_virtlogd_wrapper, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, distribution-scope=public) Dec 5 03:11:55 localhost podman[57506]: 2025-12-05 08:11:55.639948207 +0000 UTC m=+0.154911915 container attach b35c868526b36f838f7a090b3924c8dc990c8353e0f37eec073022746f7d785e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=create_virtlogd_wrapper, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_id=tripleo_step2, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, release=1761123044) Dec 5 03:11:55 localhost systemd[1]: Started libcrun container. Dec 5 03:11:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ef0e92ae5a4c8a141a5a4f63d7b99ab996c091b6eadea29d58846d2b7054c3f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 03:11:55 localhost podman[57507]: 2025-12-05 08:11:55.657177725 +0000 UTC m=+0.162534918 container init 60d1c21dfcd45fffdb6759d865d86b864d1012adb4afbc3fceedfd362d28070a (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=create_haproxy_wrapper, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, version=17.1.12, 
maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:11:55 localhost podman[57507]: 2025-12-05 08:11:55.664510301 +0000 UTC m=+0.169867494 container start 60d1c21dfcd45fffdb6759d865d86b864d1012adb4afbc3fceedfd362d28070a (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step2, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=create_haproxy_wrapper, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64) Dec 5 03:11:55 localhost podman[57507]: 2025-12-05 08:11:55.665596144 +0000 
UTC m=+0.170953337 container attach 60d1c21dfcd45fffdb6759d865d86b864d1012adb4afbc3fceedfd362d28070a (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=create_haproxy_wrapper, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible) Dec 5 03:11:55 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 6.18 scrub starts Dec 5 03:11:55 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 6.18 scrub ok Dec 5 03:11:56 localhost systemd[1]: var-lib-containers-storage-overlay-c70e2d626f72923f97a05d5a086369b52c966ad776df02819c13c4b803d6acdf-merged.mount: Deactivated successfully. Dec 5 03:11:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-328542083ffde593dd2269a68c5094b7cd6f9b53e3e6280301444d0e78d519a5-userdata-shm.mount: Deactivated successfully. Dec 5 03:11:56 localhost systemd[1]: var-lib-containers-storage-overlay-cf56fbbce213fdffb99cfd7c4b14a7fe4a3462a569e4b67279c5a13f896f3b08-merged.mount: Deactivated successfully. Dec 5 03:11:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16e2150aff3984d4ff8841519cce8259db4ca5b2bf326892cacb54546cc5a938-userdata-shm.mount: Deactivated successfully. 
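
The ceph-osd lines here record routine placement-group scrubs (for example PG 6.18) starting and completing on this host's OSDs. One way to correlate such entries with cluster state is to query the placement group directly from a node with an admin keyring; a minimal sketch, with the PG id taken from the log above:

    # Show scrub-related state for placement group 6.18
    ceph pg 6.18 query | grep -i scrub
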
Dec 5 03:11:56 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 5.14 scrub starts Dec 5 03:11:56 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 5.14 scrub ok Dec 5 03:11:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:11:57 localhost systemd[1]: tmp-crun.FIuqqO.mount: Deactivated successfully. Dec 5 03:11:57 localhost podman[57585]: 2025-12-05 08:11:57.194270914 +0000 UTC m=+0.086383492 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, build-date=2025-11-18T22:49:46Z, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:11:57 localhost podman[57585]: 2025-12-05 08:11:57.3886944 +0000 UTC m=+0.280806938 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, release=1761123044, version=17.1.12) Dec 5 03:11:57 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:11:57 localhost ovs-vsctl[57666]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Dec 5 03:11:57 localhost systemd[1]: libpod-b35c868526b36f838f7a090b3924c8dc990c8353e0f37eec073022746f7d785e.scope: Deactivated successfully. Dec 5 03:11:57 localhost systemd[1]: libpod-b35c868526b36f838f7a090b3924c8dc990c8353e0f37eec073022746f7d785e.scope: Consumed 2.165s CPU time. 
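
The ovs-vsctl error above means /var/run/openvswitch/db.sock did not exist when the command ran, i.e. ovsdb-server was not (yet) listening. A minimal sketch of how one might confirm this, assuming a systemd-managed Open vSwitch (unit names vary by distribution):

    # Does the OVSDB unix socket exist yet?
    ls -l /var/run/openvswitch/db.sock
    # Is the database server (and Open vSwitch as a whole) up?
    systemctl status ovsdb-server openvswitch
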
Dec 5 03:11:57 localhost podman[57506]: 2025-12-05 08:11:57.827827396 +0000 UTC m=+2.342791054 container died b35c868526b36f838f7a090b3924c8dc990c8353e0f37eec073022746f7d785e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, container_name=create_virtlogd_wrapper, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_id=tripleo_step2) Dec 5 03:11:57 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 4.10 scrub starts Dec 5 03:11:57 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 5.5 scrub starts Dec 5 03:11:57 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 4.10 scrub ok Dec 5 03:11:57 localhost podman[57788]: 2025-12-05 08:11:57.918442218 +0000 UTC m=+0.079384708 container cleanup b35c868526b36f838f7a090b3924c8dc990c8353e0f37eec073022746f7d785e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, config_id=tripleo_step2, batch=17.1_20251118.1, 
url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=create_virtlogd_wrapper, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public) Dec 5 03:11:57 localhost systemd[1]: libpod-conmon-b35c868526b36f838f7a090b3924c8dc990c8353e0f37eec073022746f7d785e.scope: Deactivated successfully. 
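
Taken together, the podman entries above trace the full lifecycle of the one-shot create_virtlogd_wrapper container: create, init, start, attach, died, and cleanup, bracketed by the libpod-conmon scope starting and deactivating. A quick way to pull one container's lifecycle out of the journal; a sketch, with the ID prefix copied from the events above:

    # All podman journal entries for one container, in time order
    journalctl -t podman --no-pager | grep b35c868526b3
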
Dec 5 03:11:57 localhost python3[57250]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764921170 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper Dec 5 03:11:58 localhost systemd[1]: tmp-crun.LVNVN4.mount: Deactivated successfully. Dec 5 03:11:58 localhost systemd[1]: var-lib-containers-storage-overlay-b95431b1478a17506a6e7089f072573540986d5218a9d3dfea91f5817bd1ba9b-merged.mount: Deactivated successfully. Dec 5 03:11:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b35c868526b36f838f7a090b3924c8dc990c8353e0f37eec073022746f7d785e-userdata-shm.mount: Deactivated successfully. Dec 5 03:11:59 localhost systemd[1]: libpod-60d1c21dfcd45fffdb6759d865d86b864d1012adb4afbc3fceedfd362d28070a.scope: Deactivated successfully. Dec 5 03:11:59 localhost systemd[1]: libpod-60d1c21dfcd45fffdb6759d865d86b864d1012adb4afbc3fceedfd362d28070a.scope: Consumed 2.120s CPU time. 
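
The PODMAN-CONTAINER-DEBUG entry above is the CLI equivalent of the launch performed by the tripleo_container_manage module; because of --log-driver k8s-file, the container's output was written to the file named by --log-opt path=. To review the run afterwards (paths and names copied from the entry above; the inspect call assumes the container has not been removed yet):

    # Captured stdout/stderr of the one-shot puppet apply
    cat /var/log/containers/stdouts/create_virtlogd_wrapper.log
    # The step label tripleo_ansible attached to the container
    podman inspect create_virtlogd_wrapper --format '{{ index .Config.Labels "config_id" }}'
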
Dec 5 03:11:59 localhost podman[57507]: 2025-12-05 08:11:59.167322832 +0000 UTC m=+3.672679995 container died 60d1c21dfcd45fffdb6759d865d86b864d1012adb4afbc3fceedfd362d28070a (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step2, build-date=2025-11-19T00:14:25Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:11:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60d1c21dfcd45fffdb6759d865d86b864d1012adb4afbc3fceedfd362d28070a-userdata-shm.mount: Deactivated successfully. 
Dec 5 03:11:59 localhost podman[57830]: 2025-12-05 08:11:59.256063695 +0000 UTC m=+0.075226920 container cleanup 60d1c21dfcd45fffdb6759d865d86b864d1012adb4afbc3fceedfd362d28070a (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=create_haproxy_wrapper, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step2, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64) Dec 5 03:11:59 localhost systemd[1]: libpod-conmon-60d1c21dfcd45fffdb6759d865d86b864d1012adb4afbc3fceedfd362d28070a.scope: Deactivated successfully. 
Dec 5 03:11:59 localhost python3[57250]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers Dec 5 03:11:59 localhost python3[57881]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:12:00 localhost systemd[1]: var-lib-containers-storage-overlay-3ef0e92ae5a4c8a141a5a4f63d7b99ab996c091b6eadea29d58846d2b7054c3f-merged.mount: Deactivated successfully. 
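
The ansible-file entry above deletes the step-2 container-puppet tasks file once both wrapper containers have run. The same operation expressed as an ad-hoc ansible call (a sketch, not part of this deployment's tooling):

    # One-off equivalent of the logged ansible-file invocation
    ansible localhost -m file -a 'path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent'
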
Dec 5 03:12:01 localhost python3[58002]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005546419 step=2 update_config_hash_only=False
Dec 5 03:12:01 localhost python3[58018]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:12:02 localhost python3[58034]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 5 03:12:02 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 5 03:12:03 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 5 03:12:03 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 5 03:12:03 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 5 03:12:03 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 5 03:12:04 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 5 03:12:04 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 5 03:12:04 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 5 03:12:04 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 5 03:12:05 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 5 03:12:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 03:12:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 4394 writes, 20K keys, 4394 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4394 writes, 427 syncs, 10.29 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1135 writes, 4041 keys, 1135 commit groups, 1.0 writes per commit group, ingest: 2.02 MB, 0.00 MB/s
Interval WAL: 1135 writes, 282 syncs, 4.02 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0
 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55692b2d82d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-0] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55692b2d82d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-1] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 m
Dec 5 03:12:06 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 5 03:12:06 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 5 03:12:08 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 5 03:12:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 03:12:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.2 total, 600.0 interval
Cumulative writes: 5070 writes, 22K keys, 5070 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 5070 writes, 546 syncs, 9.29 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1675 writes, 5961 keys, 1675 commit groups, 1.0 writes per commit group, ingest: 2.54 MB, 0.00 MB/s
Interval WAL: 1675 writes, 345 syncs, 4.86 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0
 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c76342a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-0] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55c76342a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-1] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Dec 5 03:12:09 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 5 03:12:10 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Dec 5 03:12:10 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Dec 5 03:12:14 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 5 03:12:15 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 5 03:12:16 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Dec 5 03:12:16 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Dec 5 03:12:17 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 5 03:12:17
localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.c scrub ok Dec 5 03:12:22 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.19 scrub starts Dec 5 03:12:22 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.19 scrub ok Dec 5 03:12:27 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts Dec 5 03:12:27 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok Dec 5 03:12:27 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.13 scrub starts Dec 5 03:12:27 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.13 scrub ok Dec 5 03:12:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:12:28 localhost podman[58035]: 2025-12-05 08:12:28.171321801 +0000 UTC m=+0.060225349 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public) Dec 5 03:12:28 localhost podman[58035]: 2025-12-05 08:12:28.389316635 +0000 UTC m=+0.278220233 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 
(image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Dec 5 03:12:28 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
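The ceph-osd rocksdb "DUMPING STATS" records above arrive through syslog with every newline escaped as "#012" (rsyslog's octal control-character escaping), so each dump lands as one flattened record; the tables were re-expanded above. A minimal Python sketch of the same cleanup plus a headline-number extraction follows; the regexes are assumptions keyed to the exact wording of the "** DB Stats **" block above, not a stable RocksDB interface.

    import re

    def decode_syslog_escapes(record: str) -> str:
        # rsyslog renders a newline inside a message as the octal escape
        # "#012"; undoing it restores the multi-line RocksDB dump.
        return record.replace("#012", "\n")

    def parse_db_stats(dump: str) -> dict:
        # Pull the cumulative write/WAL counters out of a decoded dump.
        out = {}
        m = re.search(r"Cumulative writes: (\d+) writes, (\S+) keys", dump)
        if m:
            out["writes"], out["keys"] = m.groups()
        m = re.search(r"Cumulative WAL: (\d+) writes, (\d+) syncs", dump)
        if m:
            out["wal_writes"], out["wal_syncs"] = int(m.group(1)), int(m.group(2))
        return out

    # e.g. parse_db_stats(decode_syslog_escapes(raw_record))
    # -> {'writes': '5070', 'keys': '22K', 'wal_writes': 5070, 'wal_syncs': 546}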
Dec 5 03:12:29 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 5 03:12:29 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 5 03:12:30 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 5 03:12:30 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 5 03:12:31 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 5 03:12:31 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 5 03:12:31 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec 5 03:12:31 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec 5 03:12:32 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 5 03:12:32 localhost ceph-osd[31386]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 5 03:12:34 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 5 03:12:34 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 5 03:12:37 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 5 03:12:37 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 5 03:12:38 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 5 03:12:38 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 5 03:12:44 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 7.2 deep-scrub starts
Dec 5 03:12:44 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 7.2 deep-scrub ok
Dec 5 03:12:48 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 5 03:12:48 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 5 03:12:50 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 5 03:12:50 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 5 03:12:51 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Dec 5 03:12:51 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Dec 5 03:12:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
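The scrub records above always come in "starts"/"ok" pairs per placement group. A small sketch that pairs them up and flags PGs whose scrub never logged "ok"; the timestamp parsing assumes this syslog format and fabricates a year, since syslog omits one.

    import re
    from datetime import datetime

    LINE = re.compile(
        r"^(\w+ +\d+ [\d:]+) \S+ ceph-osd\[\d+\]: .*\[DBG\] : "
        r"(\S+) (deep-scrub|scrub) (starts|ok)$"
    )

    def scan(lines):
        open_scrubs = {}
        for line in lines:
            m = LINE.match(line)
            if not m:
                continue
            # Syslog has no year; 2025 is assumed from context.
            when = datetime.strptime("2025 " + m.group(1), "%Y %b %d %H:%M:%S")
            pg, kind, event = m.group(2), m.group(3), m.group(4)
            if event == "starts":
                open_scrubs[(pg, kind)] = when
            else:
                started = open_scrubs.pop((pg, kind), None)
                if started is not None:
                    secs = (when - started).total_seconds()
                    print(f"{pg} {kind} took {secs:.0f}s")
        return open_scrubs  # anything left never logged "ok"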
Dec 5 03:12:59 localhost podman[58194]: 2025-12-05 08:12:59.186047365 +0000 UTC m=+0.073399885 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, release=1761123044, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec 5 03:12:59 localhost podman[58194]: 2025-12-05 08:12:59.387751116 +0000 UTC m=+0.275103636 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-qdrouterd, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4)
Dec 5 03:12:59 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:12:59 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 5 03:12:59 localhost ceph-osd[32336]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 5 03:13:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:13:30 localhost podman[58223]: 2025-12-05 08:13:30.197782317 +0000 UTC m=+0.085953502 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., container_name=metrics_qdr, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team)
Dec 5 03:13:30 localhost podman[58223]: 2025-12-05 08:13:30.396674452 +0000 UTC m=+0.284845557 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, container_name=metrics_qdr, config_id=tripleo_step1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team)
Dec 5 03:13:30 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:14:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:14:01 localhost podman[58330]: 2025-12-05 08:14:01.179841003 +0000 UTC m=+0.067930967 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, distribution-scope=public, config_id=tripleo_step1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, container_name=metrics_qdr, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=)
Dec 5 03:14:01 localhost podman[58330]: 2025-12-05 08:14:01.330615033 +0000 UTC m=+0.218705017 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step1, container_name=metrics_qdr, version=17.1.12, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 5 03:14:01 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:14:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:14:32 localhost podman[58359]: 2025-12-05 08:14:32.191575042 +0000 UTC m=+0.075736237 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 5 03:14:32 localhost podman[58359]: 2025-12-05 08:14:32.395663746 +0000 UTC m=+0.279825021 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, distribution-scope=public, build-date=2025-11-18T22:49:46Z, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 5 03:14:32 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:14:41 localhost podman[58487]: 2025-12-05 08:14:41.471221898 +0000 UTC m=+0.092657569 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, distribution-scope=public, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64, io.openshift.expose-services=, name=rhceph)
Dec 5 03:14:41 localhost podman[58487]: 2025-12-05 08:14:41.581629714 +0000 UTC m=+0.203065395 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-type=git, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, name=rhceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, RELEASE=main, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 03:15:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:15:03 localhost podman[58631]: 2025-12-05 08:15:03.184127744 +0000 UTC m=+0.076023426 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public)
Dec 5 03:15:03 localhost podman[58631]: 2025-12-05 08:15:03.375666341 +0000 UTC m=+0.267562123 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, release=1761123044, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Dec 5 03:15:03 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:15:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:15:34 localhost systemd[1]: tmp-crun.AIp5D0.mount: Deactivated successfully.
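Each healthcheck cycle above is the same four records: systemd starts a transient unit wrapping "podman healthcheck run <id>", podman logs a health_status event and an exec_died event for the check process, and the unit deactivates. A hedged sketch for watching those transitions directly rather than grepping syslog, assuming a podman recent enough to emit health_status events over "podman events --format json"; the JSON field names are assumptions to verify against your podman version.

    import json
    import subprocess

    # Stream health_status events as line-delimited JSON.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "event=health_status"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # "Name" and "HealthStatus" are assumed field names; older podman
        # versions may expose the status only in the attribute map.
        status = ev.get("HealthStatus") or ev.get("Attributes", {}).get("health_status")
        print(ev.get("Name"), status)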
Dec 5 03:15:34 localhost podman[58660]: 2025-12-05 08:15:34.188806713 +0000 UTC m=+0.079679449 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) 
Dec 5 03:15:34 localhost podman[58660]: 2025-12-05 08:15:34.377639737 +0000 UTC m=+0.268512453 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container)
Dec 5 03:15:34 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:16:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:16:05 localhost podman[58766]: 2025-12-05 08:16:05.199995019 +0000 UTC m=+0.085997002 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, distribution-scope=public)
Dec 5 03:16:05 localhost podman[58766]: 2025-12-05 08:16:05.394673674 +0000 UTC m=+0.280675647 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, container_name=metrics_qdr, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, release=1761123044)
Dec 5 03:16:05 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:16:31 localhost python3[58841]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:16:31 localhost python3[58886]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922590.992796-99147-144885750914634/source _original_basename=tmpyhc1b0ho follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:16:32 localhost python3[58916]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 03:16:34 localhost ansible-async_wrapper.py[59088]: Invoked with 244527619803 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922594.3735542-99354-156468905177238/AnsiballZ_command.py _
Dec 5 03:16:34 localhost ansible-async_wrapper.py[59091]: Starting module and watcher
Dec 5 03:16:34 localhost ansible-async_wrapper.py[59091]: Start watching 59092 (3600)
Dec 5 03:16:34 localhost ansible-async_wrapper.py[59092]: Start module (59092)
Dec 5 03:16:34 localhost ansible-async_wrapper.py[59088]: Return async_wrapper task started.
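The async_wrapper/async_status records above show Ansible's async protocol: the wrapper forks the module, returns a job id (jid 244527619803.59088), and async_status later polls a status file named after the jid under the async dir (/tmp/.ansible_async per the next record). A sketch that polls that file directly on the host; the file-format details ("started"/"finished" flags giving way to the module's full result JSON) are an assumption about the wrapper's behavior, matching what async_status reads.

    import json
    import pathlib
    import time

    # jid and async dir are the ones logged by async_wrapper/async_status above.
    status_file = pathlib.Path("/tmp/.ansible_async/244527619803.59088")

    while True:
        try:
            data = json.loads(status_file.read_text())
        except (FileNotFoundError, ValueError):
            data = {}
        # While the job runs the wrapper keeps {"started": 1, "finished": 0};
        # on completion it swaps in the module result (assumed to carry "rc").
        if data.get("finished") or "rc" in data:
            print("job done:", {k: data.get(k) for k in ("rc", "changed", "msg")})
            break
        time.sleep(5)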
Dec 5 03:16:35 localhost python3[59112]: ansible-ansible.legacy.async_status Invoked with jid=244527619803.59088 mode=status _async_dir=/tmp/.ansible_async Dec 5 03:16:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:16:36 localhost podman[59126]: 2025-12-05 08:16:36.184590377 +0000 UTC m=+0.068877954 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com) Dec 5 03:16:36 localhost podman[59126]: 2025-12-05 08:16:36.39775334 +0000 UTC m=+0.282040907 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, 
com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step1) Dec 5 03:16:36 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:16:38 localhost puppet-user[59111]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 5 03:16:38 localhost puppet-user[59111]: (file: /etc/puppet/hiera.yaml) Dec 5 03:16:38 localhost puppet-user[59111]: Warning: Undefined variable '::deploy_config_name'; Dec 5 03:16:38 localhost puppet-user[59111]: (file & line not available) Dec 5 03:16:38 localhost puppet-user[59111]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 5 03:16:38 localhost puppet-user[59111]: (file & line not available) Dec 5 03:16:38 localhost puppet-user[59111]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Dec 5 03:16:38 localhost puppet-user[59111]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 5 03:16:38 localhost puppet-user[59111]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.13 seconds
Dec 5 03:16:38 localhost puppet-user[59111]: Notice: Applied catalog in 0.04 seconds
Dec 5 03:16:38 localhost puppet-user[59111]: Application:
Dec 5 03:16:38 localhost puppet-user[59111]: Initial environment: production
Dec 5 03:16:38 localhost puppet-user[59111]: Converged environment: production
Dec 5 03:16:38 localhost puppet-user[59111]: Run mode: user
Dec 5 03:16:38 localhost puppet-user[59111]: Changes:
Dec 5 03:16:38 localhost puppet-user[59111]: Events:
Dec 5 03:16:38 localhost puppet-user[59111]: Resources:
Dec 5 03:16:38 localhost puppet-user[59111]: Total: 10
Dec 5 03:16:38 localhost puppet-user[59111]: Time:
Dec 5 03:16:38 localhost puppet-user[59111]: Schedule: 0.00
Dec 5 03:16:38 localhost puppet-user[59111]: File: 0.00
Dec 5 03:16:38 localhost puppet-user[59111]: Exec: 0.01
Dec 5 03:16:38 localhost puppet-user[59111]: Augeas: 0.01
Dec 5 03:16:38 localhost puppet-user[59111]: Transaction evaluation: 0.03
Dec 5 03:16:38 localhost puppet-user[59111]: Catalog application: 0.04
Dec 5 03:16:38 localhost puppet-user[59111]: Config retrieval: 0.17
Dec 5 03:16:38 localhost puppet-user[59111]: Last run: 1764922598
Dec 5 03:16:38 localhost puppet-user[59111]: Filebucket: 0.00
Dec 5 03:16:38 localhost puppet-user[59111]: Total: 0.05
Dec 5 03:16:38 localhost puppet-user[59111]: Version:
Dec 5 03:16:38 localhost puppet-user[59111]: Config: 1764922598
Dec 5 03:16:38 localhost puppet-user[59111]: Puppet: 7.10.0
Dec 5 03:16:38 localhost ansible-async_wrapper.py[59092]: Module complete (59092)
Dec 5 03:16:39 localhost ansible-async_wrapper.py[59091]: Done in kid B.
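The puppet-user block above is a standard Puppet run summary: 10 resources, catalog compiled in 0.13 s, applied in 0.04 s, config retrieval 0.17 s. When comparing runs across nodes it helps to pull the numeric fields out of the journal; a small sketch, assuming lines shaped exactly like the entries above (version strings such as "Puppet: 7.10.0" are skipped):

    import re

    # Matches summary entries such as "puppet-user[59111]: Config retrieval: 0.17"
    REPORT_RE = re.compile(r"puppet-user\[\d+\]:\s+(?P<key>[A-Za-z ]+):\s+(?P<val>[0-9.]+)\s*$")

    def puppet_timings(journal_lines):
        """Collect the numeric fields of a Puppet run summary from journal lines."""
        timings = {}
        for line in journal_lines:
            m = REPORT_RE.search(line)
            if m:
                try:
                    timings[m.group("key").strip()] = float(m.group("val"))
                except ValueError:   # e.g. "Puppet: 7.10.0" is a version, not a timing
                    pass
        return timings

    # puppet_timings(open("/var/log/messages"))  -> {'Config retrieval': 0.17, ...}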
Dec 5 03:16:45 localhost python3[59320]: ansible-ansible.legacy.async_status Invoked with jid=244527619803.59088 mode=status _async_dir=/tmp/.ansible_async Dec 5 03:16:46 localhost python3[59362]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 5 03:16:46 localhost python3[59379]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:16:47 localhost python3[59429]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:16:47 localhost python3[59447]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmptqzls_n7 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 5 03:16:47 localhost python3[59477]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:16:48 localhost python3[59580]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Dec 5 03:16:49 localhost python3[59599]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:16:51 localhost python3[59631]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:16:51 localhost python3[59681]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:16:52 localhost 
python3[59699]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:16:52 localhost python3[59761]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:16:52 localhost python3[59779]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:16:53 localhost python3[59841]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:16:53 localhost python3[59859]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:16:54 localhost python3[59921]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:16:54 localhost python3[59939]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:16:54 localhost python3[59969]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:16:54 localhost systemd[1]: Reloading. Dec 5 03:16:54 localhost systemd-rc-local-generator[59995]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:16:54 localhost systemd-sysv-generator[59998]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:16:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:16:55 localhost python3[60054]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:16:55 localhost python3[60072]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:16:56 localhost python3[60134]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:16:56 localhost python3[60152]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:16:57 localhost python3[60182]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:16:57 localhost systemd[1]: Reloading. Dec 5 03:16:57 localhost systemd-rc-local-generator[60204]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:16:57 localhost systemd-sysv-generator[60211]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:16:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:16:57 localhost systemd[1]: Starting Create netns directory... Dec 5 03:16:57 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 5 03:16:57 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 5 03:16:57 localhost systemd[1]: Finished Create netns directory. 
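systemd warns twice above (once per daemon-reload) that insights-client-boot.service still uses the deprecated MemoryLimit=. The supported fix is MemoryMax=, and the usual route is a drop-in override rather than editing the packaged unit. A sketch, assuming root privileges; the drop-in path is mine, and the 512M value is purely illustrative and should be copied from the limit in the shipped unit:

    import subprocess
    from pathlib import Path

    DROPIN = Path("/etc/systemd/system/insights-client-boot.service.d/10-memory-max.conf")

    def migrate_memory_limit(value: str = "512M") -> None:
        """Override deprecated MemoryLimit= with MemoryMax= via a systemd drop-in."""
        DROPIN.parent.mkdir(parents=True, exist_ok=True)
        # An empty assignment resets the old directive; MemoryMax= then applies.
        DROPIN.write_text("[Service]\nMemoryLimit=\nMemoryMax=%s\n" % value)
        subprocess.run(["systemctl", "daemon-reload"], check=True)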
Dec 5 03:16:58 localhost python3[60241]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 5 03:16:59 localhost python3[60299]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 5 03:17:00 localhost podman[60456]: 2025-12-05 08:17:00.199921339 +0000 UTC m=+0.070226166 container create 392180c94a4027ea8a8e6ef28ca2ed84c8bf1594892bb56a77c3b1370e0a2e10 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, container_name=ceilometer_init_log, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z) Dec 5 03:17:00 localhost podman[60462]: 2025-12-05 08:17:00.226566332 +0000 UTC m=+0.089979433 container create 720afbd97be567d1b81d1287cf23498eea18dcdf104e045f64ee4ffbe4deb9c3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=nova_virtlogd_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, config_id=tripleo_step3) Dec 5 03:17:00 localhost systemd[1]: Started libpod-conmon-392180c94a4027ea8a8e6ef28ca2ed84c8bf1594892bb56a77c3b1370e0a2e10.scope. Dec 5 03:17:00 localhost systemd[1]: Started libcrun container. Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d27338a6571241dc544b413c9c3e27fcc131a61029ed5e1eb15e0498d76d9b53/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost systemd[1]: Started libpod-conmon-720afbd97be567d1b81d1287cf23498eea18dcdf104e045f64ee4ffbe4deb9c3.scope. 
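Every container create/init/start event above embeds the full TripleO definition in a config_data label, and it is printed as a Python dict repr (single quotes, False/True), not JSON. To recover it from a live container, read the label back with podman inspect and parse it with ast.literal_eval; a sketch using a container name from this log:

    import ast, subprocess

    def tripleo_config_data(name: str) -> dict:
        """Read back the config_data label that tripleo_ansible attached to a container."""
        fmt = '{{index .Config.Labels "config_data"}}'
        out = subprocess.run(
            ["podman", "inspect", "--format", fmt, name],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        # The label is a Python literal (single quotes, True/False), so json.loads() would choke.
        return ast.literal_eval(out)

    # cfg = tripleo_config_data("nova_virtlogd_wrapper")
    # print(cfg["image"], cfg["volumes"][0])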
Dec 5 03:17:00 localhost podman[60469]: 2025-12-05 08:17:00.254679011 +0000 UTC m=+0.115034697 container create 1f54155650ccd7aec0205e4ea9bd47b1aeee1762f3475a8026ccc9bd7ef2fd75 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=nova_statedir_owner, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vendor=Red Hat, Inc., config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, config_id=tripleo_step3, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible) Dec 5 03:17:00 localhost systemd[1]: Started libcrun container. 
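nova_statedir_owner, created above, runs /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py with NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts, i.e. it normalizes ownership of /var/lib/nova while leaving the named subtree untouched. A much-reduced sketch of that idea only: the real script also handles SELinux contexts, and the 42436 uid/gid is an assumption (the conventional kolla nova user), not something this log states:

    import os

    def fix_statedir_ownership(root: str = "/var/lib/nova", uid: int = 42436,
                               gid: int = 42436, skip=("triliovault-mounts",)) -> None:
        """Recursively chown a state directory, pruning subtrees named in `skip`."""
        os.lchown(root, uid, gid)
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames[:] = [d for d in dirnames if d not in skip]   # honour the skip list
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (uid, gid):
                    os.lchown(path, uid, gid)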
Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a1ffd2a50d56761b34f05757b6aa62c392850ba0c772dedfb7a28c75dd0104/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a1ffd2a50d56761b34f05757b6aa62c392850ba0c772dedfb7a28c75dd0104/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a1ffd2a50d56761b34f05757b6aa62c392850ba0c772dedfb7a28c75dd0104/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a1ffd2a50d56761b34f05757b6aa62c392850ba0c772dedfb7a28c75dd0104/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a1ffd2a50d56761b34f05757b6aa62c392850ba0c772dedfb7a28c75dd0104/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a1ffd2a50d56761b34f05757b6aa62c392850ba0c772dedfb7a28c75dd0104/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost podman[60464]: 2025-12-05 08:17:00.260966167 +0000 UTC m=+0.124049738 container create 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.12, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, tcib_managed=true, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7903f5b91b4ea11c31e5f2ba22a27dd3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', 
'/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=rsyslog, vcs-type=git, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a1ffd2a50d56761b34f05757b6aa62c392850ba0c772dedfb7a28c75dd0104/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost podman[60456]: 2025-12-05 08:17:00.164604016 +0000 UTC m=+0.034908853 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Dec 5 03:17:00 localhost podman[60464]: 2025-12-05 08:17:00.169306513 +0000 UTC m=+0.032390114 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 5 03:17:00 localhost podman[60462]: 2025-12-05 08:17:00.268611196 +0000 UTC m=+0.132024287 container init 720afbd97be567d1b81d1287cf23498eea18dcdf104e045f64ee4ffbe4deb9c3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtlogd_wrapper, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, version=17.1.12, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:17:00 localhost podman[60462]: 2025-12-05 08:17:00.171548492 +0000 UTC m=+0.034961613 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:17:00 localhost podman[60469]: 2025-12-05 08:17:00.177085985 +0000 UTC m=+0.037441691 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 5 03:17:00 localhost podman[60462]: 2025-12-05 08:17:00.279021602 +0000 UTC m=+0.142434683 container start 720afbd97be567d1b81d1287cf23498eea18dcdf104e045f64ee4ffbe4deb9c3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.openshift.expose-services=, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, container_name=nova_virtlogd_wrapper, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team) Dec 5 03:17:00 localhost python3[60299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f3fe7c52055154c7f97b988e301af0d7 --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:17:00 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 5 03:17:00 localhost systemd[1]: Created slice User Slice of UID 0. Dec 5 03:17:00 localhost podman[60456]: 2025-12-05 08:17:00.316322128 +0000 UTC m=+0.186626965 container init 392180c94a4027ea8a8e6ef28ca2ed84c8bf1594892bb56a77c3b1370e0a2e10 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_init_log, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step3, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:17:00 localhost systemd[1]: Starting User Runtime 
Directory /run/user/0... Dec 5 03:17:00 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 5 03:17:00 localhost podman[60456]: 2025-12-05 08:17:00.335465826 +0000 UTC m=+0.205770663 container start 392180c94a4027ea8a8e6ef28ca2ed84c8bf1594892bb56a77c3b1370e0a2e10 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, version=17.1.12, container_name=ceilometer_init_log) Dec 5 03:17:00 localhost systemd[1]: Starting User Manager for UID 0... Dec 5 03:17:00 localhost systemd[1]: libpod-392180c94a4027ea8a8e6ef28ca2ed84c8bf1594892bb56a77c3b1370e0a2e10.scope: Deactivated successfully. 
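ceilometer_init_log is a one-shot init container: it starts, chowns /var/log/ceilometer, and its libpod scope deactivates within milliseconds. Scripting the same pattern means starting detached, exactly as the PODMAN-CONTAINER-DEBUG entry that follows shows the deploy tool doing, and then blocking on podman wait for the exit code; a sketch:

    import subprocess

    def run_oneshot(name: str, image: str, command: list, volumes: list) -> int:
        """Start a detached one-shot container and return its exit code via podman wait."""
        args = ["podman", "run", "--name", name, "--detach",
                "--network", "none", "--user", "root"]
        for vol in volumes:
            args += ["--volume", vol]
        subprocess.run(args + [image] + command, check=True, capture_output=True)
        done = subprocess.run(["podman", "wait", name], check=True,
                              capture_output=True, text=True)
        return int(done.stdout.strip())

    # rc = run_oneshot("ceilometer_init_log",
    #                  "registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1",
    #                  ["/bin/bash", "-c", "chown -R ceilometer:ceilometer /var/log/ceilometer"],
    #                  ["/var/log/containers/ceilometer:/var/log/ceilometer:z"])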
Dec 5 03:17:00 localhost python3[60299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer Dec 5 03:17:00 localhost podman[60560]: 2025-12-05 08:17:00.388101321 +0000 UTC m=+0.035476140 container died 392180c94a4027ea8a8e6ef28ca2ed84c8bf1594892bb56a77c3b1370e0a2e10 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, container_name=ceilometer_init_log, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step3, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4) Dec 5 03:17:00 localhost systemd[1]: Started libpod-conmon-1f54155650ccd7aec0205e4ea9bd47b1aeee1762f3475a8026ccc9bd7ef2fd75.scope. Dec 5 03:17:00 localhost systemd[1]: Started libpod-conmon-588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0.scope. 
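The PODMAN-CONTAINER-DEBUG entry above shows how tripleo_container_manage flattens config_data into a podman run command line: environment becomes --env, net becomes --network, each volumes item a --volume, and a list-valued command is appended after the image. A cut-down sketch of that translation, covering only the keys visible in this log:

    def podman_args(name: str, cfg: dict) -> list:
        """Translate (a subset of) a TripleO config_data dict into podman run argv."""
        args = ["podman", "run", "--name", name, "--detach=True"]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        command = cfg.get("command", [])
        return args + (command if isinstance(command, list) else command.split())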
Dec 5 03:17:00 localhost podman[60505]: 2025-12-05 08:17:00.40086931 +0000 UTC m=+0.222649940 container create 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=collectd, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, release=1761123044, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:17:00 localhost systemd[1]: Started libcrun container. Dec 5 03:17:00 localhost podman[60505]: 2025-12-05 08:17:00.309074431 +0000 UTC m=+0.130855091 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 5 03:17:00 localhost systemd[1]: Started libcrun container. 
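The step-3 containers (ceilometer_init_log, nova_virtlogd_wrapper, nova_statedir_owner, rsyslog, collectd) are all created within the same second because tripleo_container_manage was invoked with concurrency=5. The same fan-out with a thread pool, reusing the hypothetical podman_args() translator sketched just above:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def start_container(item):
        name, cfg = item
        # podman_args() is the hypothetical translator from the earlier sketch.
        proc = subprocess.run(podman_args(name, cfg), capture_output=True)
        return name, proc.returncode

    def start_step(containers: dict, concurrency: int = 5) -> dict:
        """Start one step's containers, at most `concurrency` at a time."""
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            return dict(pool.map(start_container, containers.items()))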
Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed8bfb5b25170f278f645f2806cccd4bdbad52378102da6f5fcd488f01787ac/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed8bfb5b25170f278f645f2806cccd4bdbad52378102da6f5fcd488f01787ac/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed8bfb5b25170f278f645f2806cccd4bdbad52378102da6f5fcd488f01787ac/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost systemd[1]: Started libpod-conmon-33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.scope. Dec 5 03:17:00 localhost podman[60464]: 2025-12-05 08:17:00.423939281 +0000 UTC m=+0.287022842 container init 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T22:49:49Z, batch=17.1_20251118.1, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7903f5b91b4ea11c31e5f2ba22a27dd3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog) Dec 5 03:17:00 localhost systemd[1]: Started libcrun container. Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72a19e404f6c13e423229a7127c9516226f0251bfb1ca4d8f5c0d5e27c8d5a3/merged/scripts supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72a19e404f6c13e423229a7127c9516226f0251bfb1ca4d8f5c0d5e27c8d5a3/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost podman[60464]: 2025-12-05 08:17:00.432086935 +0000 UTC m=+0.295170496 container start 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-rsyslog, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7903f5b91b4ea11c31e5f2ba22a27dd3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=) Dec 5 03:17:00 localhost python3[60299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman 
run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=7903f5b91b4ea11c31e5f2ba22a27dd3 --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7903f5b91b4ea11c31e5f2ba22a27dd3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 5 03:17:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. 
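The transient "/usr/bin/podman healthcheck run <id>" units that systemd keeps starting are how podman periodically executes the {'test': '/openstack/healthcheck'} command from config_data; exit 0 is what produces the health_status=healthy events seen earlier. A sketch that fires one check and reads the stored state back (newer podman exposes it as .State.Health.Status; older releases spell it .State.Healthcheck.Status):

    import subprocess

    def health(container: str) -> str:
        """Run one healthcheck pass and read back the stored health state."""
        run = subprocess.run(["podman", "healthcheck", "run", container])
        status = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", container],
            capture_output=True, text=True,
        ).stdout.strip()
        return f"exit={run.returncode} status={status}"

    # print(health("collectd"))   # expect "exit=0 status=healthy" for the event above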
Dec 5 03:17:00 localhost podman[60505]: 2025-12-05 08:17:00.454354021 +0000 UTC m=+0.276134681 container init 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.openshift.expose-services=, vendor=Red Hat, Inc.) 
Dec 5 03:17:00 localhost podman[60560]: 2025-12-05 08:17:00.467961367 +0000 UTC m=+0.115336176 container cleanup 392180c94a4027ea8a8e6ef28ca2ed84c8bf1594892bb56a77c3b1370e0a2e10 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team) Dec 5 03:17:00 localhost podman[60469]: 2025-12-05 08:17:00.473141989 +0000 UTC m=+0.333497665 container init 1f54155650ccd7aec0205e4ea9bd47b1aeee1762f3475a8026ccc9bd7ef2fd75 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_statedir_owner, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public) Dec 5 03:17:00 localhost systemd[60553]: Queued start job for default target Main User Target. Dec 5 03:17:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:17:00 localhost systemd[1]: libpod-conmon-392180c94a4027ea8a8e6ef28ca2ed84c8bf1594892bb56a77c3b1370e0a2e10.scope: Deactivated successfully. Dec 5 03:17:00 localhost systemd[60553]: Created slice User Application Slice. Dec 5 03:17:00 localhost systemd[60553]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 5 03:17:00 localhost podman[60505]: 2025-12-05 08:17:00.481849221 +0000 UTC m=+0.303629861 container start 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Dec 5 03:17:00 localhost systemd[60553]: Started Daily Cleanup of User's Temporary Directories. Dec 5 03:17:00 localhost systemd[60553]: Reached target Paths. Dec 5 03:17:00 localhost systemd[60553]: Reached target Timers. Dec 5 03:17:00 localhost python3[60299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=da9a0dc7b40588672419e3ce10063e21 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 5 03:17:00 localhost systemd[60553]: Starting 
D-Bus User Message Bus Socket... Dec 5 03:17:00 localhost systemd[60553]: Starting Create User's Volatile Files and Directories... Dec 5 03:17:00 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 5 03:17:00 localhost podman[60469]: 2025-12-05 08:17:00.491789352 +0000 UTC m=+0.352145038 container start 1f54155650ccd7aec0205e4ea9bd47b1aeee1762f3475a8026ccc9bd7ef2fd75 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_statedir_owner, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step3, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:17:00 localhost podman[60469]: 2025-12-05 08:17:00.492140573 +0000 UTC m=+0.352496269 container attach 1f54155650ccd7aec0205e4ea9bd47b1aeee1762f3475a8026ccc9bd7ef2fd75 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=nova_statedir_owner, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, 
name=rhosp17/openstack-nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:17:00 localhost systemd[60553]: Finished Create User's Volatile Files and Directories. Dec 5 03:17:00 localhost systemd[60553]: Listening on D-Bus User Message Bus Socket. Dec 5 03:17:00 localhost systemd[60553]: Reached target Sockets. Dec 5 03:17:00 localhost systemd[60553]: Reached target Basic System. Dec 5 03:17:00 localhost systemd[1]: Started User Manager for UID 0. Dec 5 03:17:00 localhost systemd[60553]: Reached target Main User Target. Dec 5 03:17:00 localhost systemd[60553]: Startup finished in 143ms. Dec 5 03:17:00 localhost systemd[1]: Started Session c1 of User root. Dec 5 03:17:00 localhost systemd[1]: Started Session c2 of User root. Dec 5 03:17:00 localhost systemd[1]: libpod-588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0.scope: Deactivated successfully. Dec 5 03:17:00 localhost systemd[1]: libpod-1f54155650ccd7aec0205e4ea9bd47b1aeee1762f3475a8026ccc9bd7ef2fd75.scope: Deactivated successfully. Dec 5 03:17:00 localhost systemd[1]: session-c2.scope: Deactivated successfully. Dec 5 03:17:00 localhost systemd[1]: session-c1.scope: Deactivated successfully. 
Dec 5 03:17:00 localhost podman[60678]: 2025-12-05 08:17:00.601209972 +0000 UTC m=+0.054011200 container died 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, container_name=rsyslog, name=rhosp17/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7903f5b91b4ea11c31e5f2ba22a27dd3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z) Dec 5 03:17:00 localhost podman[60678]: 2025-12-05 08:17:00.628205655 +0000 UTC m=+0.081006863 container cleanup 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7903f5b91b4ea11c31e5f2ba22a27dd3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=rsyslog, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog) Dec 5 03:17:00 localhost systemd[1]: libpod-conmon-588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0.scope: Deactivated successfully. Dec 5 03:17:00 localhost podman[60637]: 2025-12-05 08:17:00.681611424 +0000 UTC m=+0.187112918 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, tcib_managed=true, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Dec 5 03:17:00 localhost podman[60469]: 2025-12-05 08:17:00.701449235 +0000 UTC m=+0.561804911 container died 1f54155650ccd7aec0205e4ea9bd47b1aeee1762f3475a8026ccc9bd7ef2fd75 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-type=git, name=rhosp17/openstack-nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner) Dec 5 03:17:00 localhost podman[60637]: 2025-12-05 08:17:00.714324547 +0000 UTC m=+0.219826031 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, container_name=collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:17:00 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:17:00 localhost podman[60787]: 2025-12-05 08:17:00.825731999 +0000 UTC m=+0.066663244 container create 1a596f8c49019553f32c27917d42be1ea03fccfe5f5987301eefb7fd673fef8f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044) Dec 5 03:17:00 localhost podman[60689]: 2025-12-05 08:17:00.830487068 +0000 UTC m=+0.273442878 container cleanup 1f54155650ccd7aec0205e4ea9bd47b1aeee1762f3475a8026ccc9bd7ef2fd75 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_statedir_owner, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}) 
Dec 5 03:17:00 localhost systemd[1]: libpod-conmon-1f54155650ccd7aec0205e4ea9bd47b1aeee1762f3475a8026ccc9bd7ef2fd75.scope: Deactivated successfully. Dec 5 03:17:00 localhost python3[60299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1764921170 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py Dec 5 03:17:00 localhost podman[60787]: 2025-12-05 08:17:00.789816227 +0000 UTC m=+0.030747482 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:17:00 localhost systemd[1]: Started libpod-conmon-1a596f8c49019553f32c27917d42be1ea03fccfe5f5987301eefb7fd673fef8f.scope. Dec 5 03:17:00 localhost systemd[1]: Started libcrun container. 
Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1470ee8f543baf54d5cf653d97a785841a6023aa58707b8ab81be849108da325/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1470ee8f543baf54d5cf653d97a785841a6023aa58707b8ab81be849108da325/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1470ee8f543baf54d5cf653d97a785841a6023aa58707b8ab81be849108da325/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1470ee8f543baf54d5cf653d97a785841a6023aa58707b8ab81be849108da325/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:00 localhost podman[60787]: 2025-12-05 08:17:00.948363193 +0000 UTC m=+0.189294418 container init 1a596f8c49019553f32c27917d42be1ea03fccfe5f5987301eefb7fd673fef8f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:35:22Z) Dec 5 03:17:00 localhost podman[60787]: 2025-12-05 08:17:00.954137243 +0000 UTC m=+0.195068458 container start 1a596f8c49019553f32c27917d42be1ea03fccfe5f5987301eefb7fd673fef8f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:17:01 localhost systemd[1]: var-lib-containers-storage-overlay-d27338a6571241dc544b413c9c3e27fcc131a61029ed5e1eb15e0498d76d9b53-merged.mount: Deactivated successfully. Dec 5 03:17:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-392180c94a4027ea8a8e6ef28ca2ed84c8bf1594892bb56a77c3b1370e0a2e10-userdata-shm.mount: Deactivated successfully. Dec 5 03:17:01 localhost podman[60868]: 2025-12-05 08:17:01.311885654 +0000 UTC m=+0.072298080 container create 37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, container_name=nova_virtsecretd, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, architecture=x86_64, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 5 03:17:01 localhost systemd[1]: Started libpod-conmon-37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634.scope. Dec 5 03:17:01 localhost systemd[1]: Started libcrun container. Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028c080cd241d717276dc7c44b639a13095ff3dca850aefb055c4709f3ccdf0b/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028c080cd241d717276dc7c44b639a13095ff3dca850aefb055c4709f3ccdf0b/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028c080cd241d717276dc7c44b639a13095ff3dca850aefb055c4709f3ccdf0b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028c080cd241d717276dc7c44b639a13095ff3dca850aefb055c4709f3ccdf0b/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028c080cd241d717276dc7c44b639a13095ff3dca850aefb055c4709f3ccdf0b/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028c080cd241d717276dc7c44b639a13095ff3dca850aefb055c4709f3ccdf0b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:01 localhost podman[60868]: 2025-12-05 08:17:01.270876103 +0000 UTC m=+0.031288569 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028c080cd241d717276dc7c44b639a13095ff3dca850aefb055c4709f3ccdf0b/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:01 localhost podman[60868]: 2025-12-05 08:17:01.378144435 +0000 UTC m=+0.138556841 container init 37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, container_name=nova_virtsecretd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 
'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step3, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:17:01 localhost podman[60868]: 2025-12-05 08:17:01.386126415 +0000 UTC m=+0.146538821 container start 37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 
'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, container_name=nova_virtsecretd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, release=1761123044, url=https://www.redhat.com) Dec 5 03:17:01 localhost python3[60299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f3fe7c52055154c7f97b988e301af0d7 --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:17:01 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 5 03:17:01 localhost systemd[1]: Started Session c3 of User root. Dec 5 03:17:01 localhost systemd[1]: session-c3.scope: Deactivated successfully. 
Dec 5 03:17:01 localhost podman[61004]: 2025-12-05 08:17:01.800914609 +0000 UTC m=+0.063219658 container create 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, distribution-scope=public, container_name=iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=)
Dec 5 03:17:01 localhost podman[61010]: 2025-12-05 08:17:01.834339854 +0000 UTC m=+0.079652442 container create 6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 5 03:17:01 localhost systemd[1]: Started libpod-conmon-178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.scope.
Dec 5 03:17:01 localhost systemd[1]: Started libcrun container.
Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11caa1cf464c6437cbae9400b1833815a57da3d6bae8a8f7a0e3f04c2c78d35b/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11caa1cf464c6437cbae9400b1833815a57da3d6bae8a8f7a0e3f04c2c78d35b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:01 localhost systemd[1]: Started libpod-conmon-6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895.scope.
Dec 5 03:17:01 localhost podman[61004]: 2025-12-05 08:17:01.763377666 +0000 UTC m=+0.025682805 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 5 03:17:01 localhost systemd[1]: Started libcrun container.
Dec 5 03:17:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.
Dec 5 03:17:01 localhost podman[61004]: 2025-12-05 08:17:01.875137638 +0000 UTC m=+0.137442707 container init 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, config_id=tripleo_step3, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026dbc3517a98b7784621079f86e764989200b5b8b8b9147585f030d058790ca/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026dbc3517a98b7784621079f86e764989200b5b8b8b9147585f030d058790ca/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026dbc3517a98b7784621079f86e764989200b5b8b8b9147585f030d058790ca/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026dbc3517a98b7784621079f86e764989200b5b8b8b9147585f030d058790ca/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026dbc3517a98b7784621079f86e764989200b5b8b8b9147585f030d058790ca/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026dbc3517a98b7784621079f86e764989200b5b8b8b9147585f030d058790ca/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026dbc3517a98b7784621079f86e764989200b5b8b8b9147585f030d058790ca/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:01 localhost podman[61010]: 2025-12-05 08:17:01.881386654 +0000 UTC m=+0.126699222 container init 6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1)
Dec 5 03:17:01 localhost podman[61010]: 2025-12-05 08:17:01.888667471 +0000 UTC m=+0.133980049 container start 6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, container_name=nova_virtnodedevd, build-date=2025-11-19T00:35:22Z, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step3)
Dec 5 03:17:01 localhost podman[61010]: 2025-12-05 08:17:01.793884309 +0000 UTC m=+0.039196887 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 5 03:17:01 localhost python3[60299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f3fe7c52055154c7f97b988e301af0d7 --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 5 03:17:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.
Dec 5 03:17:01 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 5 03:17:01 localhost podman[61004]: 2025-12-05 08:17:01.90015698 +0000 UTC m=+0.162462049 container start 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, container_name=iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, release=1761123044, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 5 03:17:01 localhost python3[60299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=c48ee961a201e2ecc5561337e7450232 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 5 03:17:01 localhost systemd[1]: Started Session c4 of User root.
Dec 5 03:17:01 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 5 03:17:01 localhost systemd[1]: Started Session c5 of User root.
Dec 5 03:17:01 localhost systemd[1]: session-c4.scope: Deactivated successfully.
Dec 5 03:17:01 localhost podman[61046]: 2025-12-05 08:17:01.991347331 +0000 UTC m=+0.085710330 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12)
Dec 5 03:17:02 localhost kernel: Loading iSCSI transport class v2.0-870.
Dec 5 03:17:02 localhost podman[61046]: 2025-12-05 08:17:02.004930776 +0000 UTC m=+0.099293775 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step3)
Dec 5 03:17:02 localhost systemd[1]: session-c5.scope: Deactivated successfully.
Dec 5 03:17:02 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully.
Dec 5 03:17:02 localhost podman[61176]: 2025-12-05 08:17:02.361896362 +0000 UTC m=+0.080245938 container create 77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, vcs-type=git, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc.) 
Dec 5 03:17:02 localhost podman[61176]: 2025-12-05 08:17:02.314633265 +0000 UTC m=+0.032982891 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 5 03:17:02 localhost systemd[1]: Started libpod-conmon-77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8.scope.
Dec 5 03:17:02 localhost systemd[1]: Started libcrun container.
Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfad65d42b0cbb79e9d33fc40f5cc7671e93233d141fb70d7048f15080a51bd8/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfad65d42b0cbb79e9d33fc40f5cc7671e93233d141fb70d7048f15080a51bd8/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfad65d42b0cbb79e9d33fc40f5cc7671e93233d141fb70d7048f15080a51bd8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfad65d42b0cbb79e9d33fc40f5cc7671e93233d141fb70d7048f15080a51bd8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfad65d42b0cbb79e9d33fc40f5cc7671e93233d141fb70d7048f15080a51bd8/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfad65d42b0cbb79e9d33fc40f5cc7671e93233d141fb70d7048f15080a51bd8/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfad65d42b0cbb79e9d33fc40f5cc7671e93233d141fb70d7048f15080a51bd8/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:02 localhost podman[61176]: 2025-12-05 08:17:02.455655603 +0000 UTC m=+0.174005139 container init 77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtstoraged, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 5 03:17:02 localhost podman[61176]: 2025-12-05 08:17:02.465043196 +0000 UTC m=+0.183392732 container start 77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtstoraged, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=)
Dec 5 03:17:02 localhost python3[60299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f3fe7c52055154c7f97b988e301af0d7 --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 5 03:17:02 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 5 03:17:02 localhost systemd[1]: Started Session c6 of User root.
Dec 5 03:17:02 localhost systemd[1]: session-c6.scope: Deactivated successfully.
Dec 5 03:17:02 localhost podman[61276]: 2025-12-05 08:17:02.916382433 +0000 UTC m=+0.082594872 container create 86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 5 03:17:02 localhost systemd[1]: Started libpod-conmon-86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479.scope.
Dec 5 03:17:02 localhost systemd[1]: Started libcrun container.
Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f73bcae01b7702200338ad4342b9b9ed0cf919b4d312d9f1db008799e357a3f/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f73bcae01b7702200338ad4342b9b9ed0cf919b4d312d9f1db008799e357a3f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f73bcae01b7702200338ad4342b9b9ed0cf919b4d312d9f1db008799e357a3f/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f73bcae01b7702200338ad4342b9b9ed0cf919b4d312d9f1db008799e357a3f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f73bcae01b7702200338ad4342b9b9ed0cf919b4d312d9f1db008799e357a3f/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f73bcae01b7702200338ad4342b9b9ed0cf919b4d312d9f1db008799e357a3f/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f73bcae01b7702200338ad4342b9b9ed0cf919b4d312d9f1db008799e357a3f/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f73bcae01b7702200338ad4342b9b9ed0cf919b4d312d9f1db008799e357a3f/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:02 localhost podman[61276]: 2025-12-05 08:17:02.882405081 +0000 UTC m=+0.048617590 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:17:02 localhost podman[61276]: 2025-12-05 08:17:02.983089378 +0000 UTC m=+0.149301857 container init 86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 
'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtqemud, vcs-type=git) Dec 5 03:17:02 localhost podman[61276]: 2025-12-05 08:17:02.991200071 +0000 UTC m=+0.157412550 container start 86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step3, version=17.1.12, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, container_name=nova_virtqemud, vcs-type=git) Dec 5 03:17:02 localhost python3[60299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f3fe7c52055154c7f97b988e301af0d7 --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:17:03 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 5 03:17:03 localhost systemd[1]: Started Session c7 of User root. Dec 5 03:17:03 localhost systemd[1]: session-c7.scope: Deactivated successfully. 
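
The PODMAN-CONTAINER-DEBUG entry above records both the config_data dict and the podman command line it expands to, so the flag mapping can be read off directly: environment → --env, volumes → --volume, ulimit → --ulimit, net/pid → --network/--pid, security_opt → --security-opt, pids_limit → --pids-limit. A minimal Python sketch of that mapping, for illustration only — the helper name and structure are hypothetical, not the actual tripleo_container_manage code, and the --label/--log-driver flags are omitted:

    # Hypothetical sketch of the config_data -> podman argv expansion visible
    # in the PODMAN-CONTAINER-DEBUG entry; not the real tripleo_ansible code.
    def config_to_podman_args(name, cfg):
        args = ["podman", "run", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid", "--detach=True"]
        if "cgroupns" in cfg:
            args.append(f"--cgroupns={cfg['cgroupns']}")
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        if "pid" in cfg:
            args += ["--pid", cfg["pid"]]
        if "pids_limit" in cfg:
            args += ["--pids-limit", str(cfg["pids_limit"])]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        for opt in cfg.get("security_opt", []):
            args += ["--security-opt", opt]
        for lim in cfg.get("ulimit", []):
            args += ["--ulimit", lim]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        return args

    # Abbreviated example mirroring the nova_virtqemud entry above:
    cfg = {
        "cgroupns": "host", "net": "host", "pid": "host",
        "pids_limit": 65536, "privileged": True,
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "security_opt": ["label=level:s0", "label=type:spc_t"],
        "ulimit": ["nofile=131072", "nproc=126960"],
        "volumes": ["/etc/hosts:/etc/hosts:ro", "/run:/run"],
        "image": "registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1",
    }
    print(" ".join(config_to_podman_args("nova_virtqemud", cfg)))
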
Dec 5 03:17:03 localhost podman[61380]: 2025-12-05 08:17:03.506448626 +0000 UTC m=+0.077254066 container create 0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, container_name=nova_virtproxyd, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z) Dec 5 03:17:03 localhost systemd[1]: Started libpod-conmon-0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8.scope. Dec 5 03:17:03 localhost systemd[1]: Started libcrun container. 
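
Each podman journal entry above tags a lifecycle event (image pull, container create, container init, container start — and, later in this log, died, cleanup, exec_died, health_status) with the full container ID and a name= label. A small parsing sketch, assuming exactly this syslog format:

    import re

    # Matches the podman journal entries above, e.g.
    # "podman[61380]: 2025-12-05 08:17:03.5064... +0000 UTC m=+0.077...
    #  container create 0fe2409e... (image=..., name=nova_virtproxyd, ...)"
    EVENT_RE = re.compile(
        r"podman\[\d+\]: (?P<ts>\S+ \S+) \S+ UTC m=\+\S+ "
        r"(?:container|image) (?P<event>\w+)(?: (?P<cid>[0-9a-f]{64}))?"
    )
    # In these entries the first name= label is the container name.
    NAME_RE = re.compile(r"\bname=([\w./:-]+)")

    def container_events(lines):
        for line in lines:
            m = EVENT_RE.search(line)
            if not m:
                continue
            nm = NAME_RE.search(line)
            cid = (m["cid"] or "")[:12] or "-"
            yield m["ts"], m["event"], cid, nm.group(1) if nm else "-"

    sample = ("podman[61380]: 2025-12-05 08:17:03.506448626 +0000 UTC "
              "m=+0.077254066 container create 0fe2409e86eb59314b34b89bf969ef"
              "dfb7076e67a1d5e9c170aa617a013c9be8 (image=registry.redhat.io/"
              "rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd)")
    for ev in container_events([sample]):
        print(ev)  # ('2025-12-05 08:17:03.506448626', 'create', '0fe2409e86eb', 'nova_virtproxyd')
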
Dec 5 03:17:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde2cad4a2e03aee34df4c80ea750703b4af125de29c5aeaa81dd19781d8a4c4/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:03 localhost podman[61380]: 2025-12-05 08:17:03.467364234 +0000 UTC m=+0.038169664 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:17:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde2cad4a2e03aee34df4c80ea750703b4af125de29c5aeaa81dd19781d8a4c4/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde2cad4a2e03aee34df4c80ea750703b4af125de29c5aeaa81dd19781d8a4c4/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde2cad4a2e03aee34df4c80ea750703b4af125de29c5aeaa81dd19781d8a4c4/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde2cad4a2e03aee34df4c80ea750703b4af125de29c5aeaa81dd19781d8a4c4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde2cad4a2e03aee34df4c80ea750703b4af125de29c5aeaa81dd19781d8a4c4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde2cad4a2e03aee34df4c80ea750703b4af125de29c5aeaa81dd19781d8a4c4/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 5 03:17:03 localhost podman[61380]: 2025-12-05 08:17:03.576742853 +0000 UTC m=+0.147548283 container init 0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Dec 5 03:17:03 localhost podman[61380]: 2025-12-05 08:17:03.58626845 +0000 UTC m=+0.157073850 container start 0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, architecture=x86_64, container_name=nova_virtproxyd, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Dec 5 03:17:03 localhost python3[60299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f3fe7c52055154c7f97b988e301af0d7 --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} 
--log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:17:03 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 5 03:17:03 localhost systemd[1]: Started Session c8 of User root. Dec 5 03:17:03 localhost systemd[1]: session-c8.scope: Deactivated successfully. 
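
Both containers above carry the same depends_on entry (tripleo_nova_virtlogd_wrapper.service) but different start_order values — 4 for nova_virtqemud, 5 for nova_virtproxyd — which is why virtqemud is created and started first. An illustrative Python sketch of sequencing by that field (not the deployment tool's actual scheduler):

    from itertools import groupby

    # start_order and depends_on values copied from the config_data labels above.
    configs = {
        "nova_virtqemud":  {"start_order": 4,
                            "depends_on": ["tripleo_nova_virtlogd_wrapper.service"]},
        "nova_virtproxyd": {"start_order": 5,
                            "depends_on": ["tripleo_nova_virtlogd_wrapper.service"]},
    }
    ordered = sorted(configs.items(), key=lambda kv: kv[1]["start_order"])
    for order, batch in groupby(ordered, key=lambda kv: kv[1]["start_order"]):
        names = [name for name, _ in batch]
        # Containers sharing a start_order form one batch; batches run in order.
        print(f"batch {order}: start {names}")
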
Dec 5 03:17:04 localhost python3[61461]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:04 localhost python3[61477]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:04 localhost python3[61493]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:04 localhost python3[61510]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:05 localhost python3[61526]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:05 localhost python3[61542]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:05 localhost python3[61558]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:05 localhost python3[61574]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:06 localhost python3[61590]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:06 localhost python3[61606]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:17:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:17:06 localhost systemd[1]: tmp-crun.uoIN6h.mount: Deactivated successfully. Dec 5 03:17:06 localhost podman[61623]: 2025-12-05 08:17:06.693139606 +0000 UTC m=+0.113182818 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, container_name=metrics_qdr, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step1, architecture=x86_64, 
com.redhat.component=openstack-qdrouterd-container) Dec 5 03:17:06 localhost python3[61622]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:17:06 localhost podman[61623]: 2025-12-05 08:17:06.878452328 +0000 UTC m=+0.298495480 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step1, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:17:06 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
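
The metrics_qdr check above is driven by a transient systemd unit that wraps "podman healthcheck run <id>": podman runs the configured test ('/openstack/healthcheck' per the config_data), records health_status=healthy, and the exec session then surfaces as exec_died before the unit deactivates. The same check can be run out-of-band; a sketch, using the container name from the log:

    import subprocess

    def run_healthcheck(container: str) -> bool:
        # "podman healthcheck run" executes the container's configured test --
        # the same command the transient unit above invokes -- and exits
        # non-zero if the check fails.
        result = subprocess.run(["podman", "healthcheck", "run", container])
        return result.returncode == 0

    if __name__ == "__main__":
        print("healthy" if run_healthcheck("metrics_qdr") else "unhealthy")
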
Dec 5 03:17:06 localhost python3[61666]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:17:07 localhost python3[61682]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:17:07 localhost python3[61698]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:17:07 localhost python3[61714]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:17:07 localhost python3[61730]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:17:08 localhost python3[61746]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:17:08 localhost python3[61762]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:17:09 localhost python3[61823]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.4764078-100564-167267955638530/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:09 localhost python3[61852]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.4764078-100564-167267955638530/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:10 localhost python3[61881]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.4764078-100564-167267955638530/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:10 localhost python3[61910]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.4764078-100564-167267955638530/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER 
validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:11 localhost python3[61939]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.4764078-100564-167267955638530/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:11 localhost python3[61968]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.4764078-100564-167267955638530/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:12 localhost python3[61997]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.4764078-100564-167267955638530/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:12 localhost python3[62026]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.4764078-100564-167267955638530/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:13 localhost python3[62055]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922628.4764078-100564-167267955638530/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:17:13 localhost python3[62071]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 03:17:13 localhost systemd[1]: Reloading. Dec 5 03:17:13 localhost systemd-sysv-generator[62095]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:17:13 localhost systemd-rc-local-generator[62091]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
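
The ansible-file / ansible-copy / ansible-systemd records above amount to a fixed refresh sequence per service: remove the stale <unit>.requires artifact, install the rendered unit file as root:root mode 0644, then issue one daemon-reload before any restarts. A condensed sketch of the equivalent steps — unit names and destination paths are from the log, while the staging directory stands in for the ansible tmp source path:

    import shutil, subprocess
    from pathlib import Path

    SYSTEMD = Path("/etc/systemd/system")
    UNITS = ["tripleo_collectd", "tripleo_iscsid",
             "tripleo_nova_virtqemud", "tripleo_rsyslog"]  # subset of those above

    def refresh_unit(name: str, staged: Path) -> None:
        # 1. state=absent on the old .requires artifact (ansible-file above)
        requires = SYSTEMD / f"{name}.requires"
        if requires.is_dir():
            shutil.rmtree(requires)
        elif requires.exists():
            requires.unlink()
        # 2. install the rendered unit file, mode 0644 (ansible-copy above)
        dest = SYSTEMD / f"{name}.service"
        shutil.copyfile(staged, dest)
        dest.chmod(0o644)

    for unit in UNITS:
        # /tmp/staged is a hypothetical stand-in for the ansible tmp source.
        refresh_unit(unit, Path("/tmp/staged") / f"{unit}.service")
    # 3. one daemon-reload before the per-service restarts (ansible-systemd above)
    subprocess.run(["systemctl", "daemon-reload"], check=True)
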
Dec 5 03:17:13 localhost systemd[1]: Stopping User Manager for UID 0... Dec 5 03:17:13 localhost systemd[60553]: Activating special unit Exit the Session... Dec 5 03:17:13 localhost systemd[60553]: Stopped target Main User Target. Dec 5 03:17:13 localhost systemd[60553]: Stopped target Basic System. Dec 5 03:17:13 localhost systemd[60553]: Stopped target Paths. Dec 5 03:17:13 localhost systemd[60553]: Stopped target Sockets. Dec 5 03:17:13 localhost systemd[60553]: Stopped target Timers. Dec 5 03:17:13 localhost systemd[60553]: Stopped Daily Cleanup of User's Temporary Directories. Dec 5 03:17:13 localhost systemd[60553]: Closed D-Bus User Message Bus Socket. Dec 5 03:17:13 localhost systemd[60553]: Stopped Create User's Volatile Files and Directories. Dec 5 03:17:13 localhost systemd[60553]: Removed slice User Application Slice. Dec 5 03:17:13 localhost systemd[60553]: Reached target Shutdown. Dec 5 03:17:13 localhost systemd[60553]: Finished Exit the Session. Dec 5 03:17:13 localhost systemd[60553]: Reached target Exit the Session. Dec 5 03:17:13 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 5 03:17:13 localhost systemd[1]: Stopped User Manager for UID 0. Dec 5 03:17:13 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 5 03:17:13 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 5 03:17:13 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Dec 5 03:17:13 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 5 03:17:13 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 5 03:17:14 localhost python3[62123]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:17:14 localhost systemd[1]: Reloading. Dec 5 03:17:14 localhost systemd-sysv-generator[62151]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:17:14 localhost systemd-rc-local-generator[62147]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:17:14 localhost systemd[1]: Starting collectd container... Dec 5 03:17:14 localhost systemd[1]: Started collectd container. Dec 5 03:17:15 localhost python3[62189]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:17:15 localhost systemd[1]: Reloading. Dec 5 03:17:15 localhost systemd-sysv-generator[62217]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:17:15 localhost systemd-rc-local-generator[62214]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:17:15 localhost systemd[1]: Starting iscsid container... 
Dec 5 03:17:15 localhost systemd[1]: Started iscsid container. Dec 5 03:17:16 localhost python3[62256]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:17:16 localhost systemd[1]: Reloading. Dec 5 03:17:16 localhost systemd-rc-local-generator[62280]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:16 localhost systemd-sysv-generator[62286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:17:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:17:16 localhost systemd[1]: Starting nova_virtlogd_wrapper container... Dec 5 03:17:16 localhost systemd[1]: Started nova_virtlogd_wrapper container. Dec 5 03:17:17 localhost python3[62323]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:17:17 localhost systemd[1]: Reloading. Dec 5 03:17:17 localhost systemd-rc-local-generator[62353]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:17 localhost systemd-sysv-generator[62357]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:17:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:17:17 localhost systemd[1]: Starting nova_virtnodedevd container... Dec 5 03:17:18 localhost tripleo-start-podman-container[62364]: Creating additional drop-in dependency for "nova_virtnodedevd" (6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895) Dec 5 03:17:18 localhost systemd[1]: Reloading. Dec 5 03:17:18 localhost systemd-rc-local-generator[62417]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:18 localhost systemd-sysv-generator[62422]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:17:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:17:18 localhost systemd[1]: Started nova_virtnodedevd container. Dec 5 03:17:18 localhost python3[62447]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:17:18 localhost systemd[1]: Reloading. Dec 5 03:17:19 localhost systemd-sysv-generator[62475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
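
For each service, tripleo-start-podman-container logs a "Creating additional drop-in dependency" line pairing the container name with its full ID, then triggers another daemon-reload — hence the second "Reloading." per service. Those lines are a convenient place to recover the unit-to-container mapping; a parsing sketch, assuming this exact message format:

    import re

    DROPIN_RE = re.compile(
        r'Creating additional drop-in dependency for "(?P<name>[^"]+)" '
        r"\((?P<cid>[0-9a-f]{64})\)"
    )

    def unit_to_container(lines):
        # The managing units follow the tripleo_<name>.service pattern seen above.
        mapping = {}
        for line in lines:
            m = DROPIN_RE.search(line)
            if m:
                mapping[f"tripleo_{m['name']}.service"] = m["cid"][:12]
        return mapping

    log = ['tripleo-start-podman-container[62364]: Creating additional drop-in '
           'dependency for "nova_virtnodedevd" (6473d2cc921d785bfbdb366dc020be73'
           'bee7728c7318f35ddeab435e245a2895)']
    print(unit_to_container(log))  # {'tripleo_nova_virtnodedevd.service': '6473d2cc921d'}
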
Dec 5 03:17:19 localhost systemd-rc-local-generator[62471]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:17:19 localhost systemd[1]: Starting nova_virtproxyd container... Dec 5 03:17:19 localhost tripleo-start-podman-container[62486]: Creating additional drop-in dependency for "nova_virtproxyd" (0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8) Dec 5 03:17:19 localhost systemd[1]: Reloading. Dec 5 03:17:19 localhost systemd-sysv-generator[62544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:17:19 localhost systemd-rc-local-generator[62540]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:17:19 localhost systemd[1]: Started nova_virtproxyd container. Dec 5 03:17:20 localhost python3[62569]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:17:20 localhost systemd[1]: Reloading. Dec 5 03:17:20 localhost systemd-rc-local-generator[62596]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:20 localhost systemd-sysv-generator[62601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:17:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:17:20 localhost systemd[1]: Starting dnf makecache... Dec 5 03:17:20 localhost systemd[1]: Starting nova_virtqemud container... Dec 5 03:17:20 localhost tripleo-start-podman-container[62610]: Creating additional drop-in dependency for "nova_virtqemud" (86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479) Dec 5 03:17:20 localhost systemd[1]: Reloading. Dec 5 03:17:20 localhost systemd-sysv-generator[62666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:17:20 localhost dnf[62609]: Updating Subscription Management repositories. Dec 5 03:17:20 localhost systemd-rc-local-generator[62663]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:17:21 localhost systemd[1]: Started nova_virtqemud container. 
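
Every daemon-reload re-runs the unit generators, which is why the same two warnings repeat throughout: the SysV 'network' script lacking a native unit, and insights-client-boot.service line 24 still using the deprecated MemoryLimit=. The latter can be silenced with a drop-in that migrates the setting to MemoryMax=; a hypothetical remediation sketch — the 512M value is a placeholder, since the unit's actual limit is not shown in this log:

    from pathlib import Path

    # Override the deprecated setting via a drop-in rather than editing the
    # packaged unit file under /usr/lib/systemd/system.
    dropin = Path("/etc/systemd/system/insights-client-boot.service.d/10-memorymax.conf")
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text(
        "[Service]\n"
        "MemoryLimit=\n"    # empty assignment resets the deprecated setting
        "MemoryMax=512M\n"  # placeholder; match the value from the unit's line 24
    )
    # followed by: systemctl daemon-reload
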
Dec 5 03:17:21 localhost python3[62691]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:17:21 localhost systemd[1]: Reloading. Dec 5 03:17:21 localhost systemd-rc-local-generator[62718]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:21 localhost systemd-sysv-generator[62724]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:17:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:17:22 localhost systemd[1]: Starting nova_virtsecretd container... Dec 5 03:17:22 localhost tripleo-start-podman-container[62731]: Creating additional drop-in dependency for "nova_virtsecretd" (37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634) Dec 5 03:17:22 localhost systemd[1]: Reloading. Dec 5 03:17:22 localhost systemd-rc-local-generator[62780]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:22 localhost systemd-sysv-generator[62784]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:17:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:17:22 localhost systemd[1]: Started nova_virtsecretd container. Dec 5 03:17:22 localhost dnf[62609]: Metadata cache refreshed recently. Dec 5 03:17:22 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Dec 5 03:17:22 localhost systemd[1]: Finished dnf makecache. Dec 5 03:17:22 localhost systemd[1]: dnf-makecache.service: Consumed 2.025s CPU time. Dec 5 03:17:23 localhost python3[62811]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:17:23 localhost systemd[1]: Reloading. Dec 5 03:17:23 localhost systemd-rc-local-generator[62835]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:17:23 localhost systemd-sysv-generator[62843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:17:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:17:23 localhost systemd[1]: Starting nova_virtstoraged container... Dec 5 03:17:23 localhost tripleo-start-podman-container[62851]: Creating additional drop-in dependency for "nova_virtstoraged" (77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8) Dec 5 03:17:23 localhost systemd[1]: Reloading. Dec 5 03:17:23 localhost systemd-rc-local-generator[62910]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 5 03:17:23 localhost systemd-sysv-generator[62915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:17:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:17:23 localhost systemd[1]: Started nova_virtstoraged container.
Dec 5 03:17:24 localhost python3[62937]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 03:17:24 localhost systemd[1]: Reloading.
Dec 5 03:17:24 localhost systemd-rc-local-generator[62963]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:17:24 localhost systemd-sysv-generator[62969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:17:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:17:24 localhost systemd[1]: Starting rsyslog container...
Dec 5 03:17:25 localhost systemd[1]: tmp-crun.cAOIT8.mount: Deactivated successfully.
Dec 5 03:17:25 localhost systemd[1]: Started libcrun container.
Dec 5 03:17:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:25 localhost podman[62977]: 2025-12-05 08:17:25.106481878 +0000 UTC m=+0.120825607 container init 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, build-date=2025-11-18T22:49:49Z, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7903f5b91b4ea11c31e5f2ba22a27dd3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=rsyslog, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container)
Dec 5 03:17:25 localhost podman[62977]: 2025-12-05 08:17:25.114803648 +0000 UTC m=+0.129147427 container start 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:25 localhost podman[62977]: rsyslog
Dec 5 03:17:25 localhost systemd[1]: Started rsyslog container.
Dec 5 03:17:25 localhost systemd[1]: libpod-588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0.scope: Deactivated successfully.
Dec 5 03:17:25 localhost podman[63013]: 2025-12-05 08:17:25.280069993 +0000 UTC m=+0.047021320 container died 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:25 localhost podman[63013]: 2025-12-05 08:17:25.309640198 +0000 UTC m=+0.076591465 container cleanup 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:25 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:17:25 localhost podman[63028]: 2025-12-05 08:17:25.399554528 +0000 UTC m=+0.057732206 container cleanup 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:25 localhost podman[63028]: rsyslog
Dec 5 03:17:25 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 5 03:17:25 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1.
Dec 5 03:17:25 localhost systemd[1]: Stopped rsyslog container.
Dec 5 03:17:25 localhost python3[63056]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:17:25 localhost systemd[1]: Starting rsyslog container...
Dec 5 03:17:25 localhost systemd[1]: Started libcrun container.
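
The config_data dict logged with the first container init event above is the tripleo_ansible definition the rsyslog container is launched from (later podman events repeat the same labels verbatim and are elided below). As a rough illustration only, not the actual tripleo_ansible implementation, a dict of that shape maps onto a podman command line as sketched here; the helper name and the shortened volume list are hypothetical:

    # Hypothetical sketch: translating a tripleo-style config_data dict into
    # podman-run arguments. Not the real tripleo_ansible code path.
    config_data = {
        'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'},
        'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1',
        'net': 'host',
        'privileged': True,
        'restart': 'always',  # in TripleO the restart policy is enforced by systemd, not podman
        'security_opt': ['label=disable'],
        'user': 'root',
        'volumes': ['/dev/log:/dev/log', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z'],
    }

    def podman_args(name, cfg):
        args = ['podman', 'run', '--detach', '--name', name,
                '--net', cfg['net'], '--user', cfg['user']]
        if cfg.get('privileged'):
            args.append('--privileged')
        for opt in cfg.get('security_opt', []):
            args += ['--security-opt', opt]
        for key, value in cfg.get('environment', {}).items():
            args += ['--env', f'{key}={value}']
        for volume in cfg.get('volumes', []):
            args += ['--volume', volume]
        args.append(cfg['image'])
        return args

    print(' '.join(podman_args('rsyslog', config_data)))
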
Dec 5 03:17:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:25 localhost podman[63057]: 2025-12-05 08:17:25.712373366 +0000 UTC m=+0.098566653 container init 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:25 localhost podman[63057]: 2025-12-05 08:17:25.72179703 +0000 UTC m=+0.107990307 container start 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:25 localhost podman[63057]: rsyslog
Dec 5 03:17:25 localhost systemd[1]: Started rsyslog container.
Dec 5 03:17:25 localhost systemd[1]: libpod-588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0.scope: Deactivated successfully.
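
The recurring xfs warnings refer to the 32-bit time_t ceiling: these filesystems evidently lack the XFS bigtime feature, so their inode timestamps stop at 0x7fffffff seconds past the Unix epoch. The figure in the message checks out:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t value; the kernel warns
    # that these XFS filesystems cannot represent timestamps beyond it.
    limit = 0x7fffffff
    print(hex(limit), '->', datetime.fromtimestamp(limit, tz=timezone.utc))
    # 0x7fffffff -> 2038-01-19 03:14:07+00:00
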
Dec 5 03:17:25 localhost podman[63078]: 2025-12-05 08:17:25.853959051 +0000 UTC m=+0.034545591 container died 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:25 localhost podman[63078]: 2025-12-05 08:17:25.879476928 +0000 UTC m=+0.060063398 container cleanup 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:25 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:17:25 localhost podman[63090]: 2025-12-05 08:17:25.967633924 +0000 UTC m=+0.058739647 container cleanup 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:25 localhost podman[63090]: rsyslog
Dec 5 03:17:25 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 5 03:17:26 localhost systemd[1]: var-lib-containers-storage-overlay-dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59-merged.mount: Deactivated successfully.
Dec 5 03:17:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0-userdata-shm.mount: Deactivated successfully.
Dec 5 03:17:26 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2.
Dec 5 03:17:26 localhost systemd[1]: Stopped rsyslog container.
Dec 5 03:17:26 localhost systemd[1]: Starting rsyslog container...
Dec 5 03:17:26 localhost systemd[1]: Started libcrun container.
Dec 5 03:17:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:26 localhost podman[63147]: 2025-12-05 08:17:26.233477483 +0000 UTC m=+0.114334505 container init 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:26 localhost podman[63147]: 2025-12-05 08:17:26.24297728 +0000 UTC m=+0.123834302 container start 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:26 localhost podman[63147]: rsyslog
Dec 5 03:17:26 localhost systemd[1]: Started rsyslog container.
Dec 5 03:17:26 localhost systemd[1]: libpod-588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0.scope: Deactivated successfully.
Dec 5 03:17:26 localhost podman[63171]: 2025-12-05 08:17:26.38410612 +0000 UTC m=+0.041621352 container died 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:26 localhost podman[63171]: 2025-12-05 08:17:26.411535148 +0000 UTC m=+0.069050340 container cleanup 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:26 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:17:26 localhost podman[63211]: 2025-12-05 08:17:26.496721841 +0000 UTC m=+0.056681414 container cleanup 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:26 localhost podman[63211]: rsyslog
Dec 5 03:17:26 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 5 03:17:26 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3.
Dec 5 03:17:26 localhost systemd[1]: Stopped rsyslog container.
Dec 5 03:17:26 localhost systemd[1]: Starting rsyslog container...
Dec 5 03:17:26 localhost systemd[1]: Started libcrun container.
Dec 5 03:17:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:26 localhost podman[63253]: 2025-12-05 08:17:26.982474253 +0000 UTC m=+0.121442207 container init 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:26 localhost podman[63253]: 2025-12-05 08:17:26.992307529 +0000 UTC m=+0.131275483 container start 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:26 localhost podman[63253]: rsyslog
Dec 5 03:17:26 localhost systemd[1]: Started rsyslog container.
Dec 5 03:17:27 localhost systemd[1]: tmp-crun.EL7Yzx.mount: Deactivated successfully.
Dec 5 03:17:27 localhost systemd[1]: libpod-588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0.scope: Deactivated successfully.
Dec 5 03:17:27 localhost python3[63281]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005546419 step=3 update_config_hash_only=False
Dec 5 03:17:27 localhost podman[63291]: 2025-12-05 08:17:27.159124354 +0000 UTC m=+0.052468431 container died 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:27 localhost systemd[1]: tmp-crun.m42weu.mount: Deactivated successfully.
Dec 5 03:17:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0-userdata-shm.mount: Deactivated successfully.
Dec 5 03:17:27 localhost systemd[1]: var-lib-containers-storage-overlay-dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59-merged.mount: Deactivated successfully.
Dec 5 03:17:27 localhost podman[63291]: 2025-12-05 08:17:27.190534075 +0000 UTC m=+0.083878102 container cleanup 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:27 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:17:27 localhost podman[63303]: 2025-12-05 08:17:27.292670078 +0000 UTC m=+0.065911391 container cleanup 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:27 localhost podman[63303]: rsyslog
Dec 5 03:17:27 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 5 03:17:27 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4.
Dec 5 03:17:27 localhost systemd[1]: Stopped rsyslog container.
Dec 5 03:17:27 localhost systemd[1]: Starting rsyslog container...
Dec 5 03:17:27 localhost systemd[1]: Started libcrun container.
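
By this point the unit has been through four failed start attempts: each cycle is init/start, immediate exit with status 1, podman cleanup, then systemd's "Scheduled restart job" with the counter incrementing. A small sketch for tallying such a loop from a saved copy of this log; the file path is an assumption:

    import re

    # Tally 'Scheduled restart job' events for the rsyslog unit in a saved log.
    # /var/log/messages is an assumed location for this extract.
    pattern = re.compile(
        r"tripleo_rsyslog\.service: Scheduled restart job, restart counter is at (\d+)")
    with open('/var/log/messages') as log:
        counters = [int(m.group(1)) for m in map(pattern.search, log) if m]
    print('restart attempts seen:', counters)  # for this excerpt: [1, 2, 3, 4]
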
Dec 5 03:17:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad84ae45b1aa10863a6214bd87571ffb83796ff32137be2712def078a89ca59/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 5 03:17:27 localhost podman[63332]: 2025-12-05 08:17:27.554520612 +0000 UTC m=+0.119213467 container init 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:27 localhost podman[63332]: 2025-12-05 08:17:27.564114172 +0000 UTC m=+0.128807027 container start 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, …)
Dec 5 03:17:27 localhost podman[63332]: rsyslog
Dec 5 03:17:27 localhost systemd[1]: Started rsyslog container.
Dec 5 03:17:27 localhost python3[63331]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:17:27 localhost systemd[1]: libpod-588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0.scope: Deactivated successfully.
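
Note that the syslog prefixes are local time while podman prints its event timestamps in UTC, which is why the same instant appears as both 03:17:27 and 08:17:27 above. The offset can be checked directly; the year is taken from the podman stamp, since the syslog prefix omits it:

    from datetime import datetime

    # The syslog prefix is local wall-clock time; podman logs UTC.
    local = datetime.strptime('2025 Dec 5 03:17:27', '%Y %b %d %H:%M:%S')
    utc = datetime.strptime('2025-12-05 08:17:27', '%Y-%m-%d %H:%M:%S')
    print(utc - local)  # 5:00:00 -> the host clock runs at UTC-5
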
Dec 5 03:17:27 localhost podman[63354]: 2025-12-05 08:17:27.730539783 +0000 UTC m=+0.057410535 container died 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7903f5b91b4ea11c31e5f2ba22a27dd3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, distribution-scope=public) Dec 5 03:17:27 localhost podman[63354]: 2025-12-05 08:17:27.756839305 +0000 UTC m=+0.083710027 container cleanup 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7903f5b91b4ea11c31e5f2ba22a27dd3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:49Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1) Dec 5 03:17:27 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:17:27 localhost podman[63382]: 2025-12-05 08:17:27.84207939 +0000 UTC m=+0.060107770 container cleanup 588024edd18d9bbf4fefb7bff641ff19367642a533b7cec2841303846bd2e8a0 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7903f5b91b4ea11c31e5f2ba22a27dd3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=rsyslog, io.buildah.version=1.41.4, url=https://www.redhat.com, 
config_id=tripleo_step3, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, distribution-scope=public) Dec 5 03:17:27 localhost podman[63382]: rsyslog Dec 5 03:17:27 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 5 03:17:27 localhost python3[63395]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 5 03:17:28 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5. Dec 5 03:17:28 localhost systemd[1]: Stopped rsyslog container. Dec 5 03:17:28 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly. Dec 5 03:17:28 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 5 03:17:28 localhost systemd[1]: Failed to start rsyslog container. Dec 5 03:17:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:17:31 localhost systemd[1]: tmp-crun.xX6mTO.mount: Deactivated successfully. 
Dec 5 03:17:31 localhost podman[63397]: 2025-12-05 08:17:31.192453496 +0000 UTC m=+0.084434019 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd) Dec 5 03:17:31 localhost podman[63397]: 2025-12-05 08:17:31.201283602 +0000 UTC m=+0.093264155 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, vcs-type=git, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:17:31 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:17:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. 
Dec 5 03:17:32 localhost podman[63418]: 2025-12-05 08:17:32.187358883 +0000 UTC m=+0.076254175 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:17:32 localhost podman[63418]: 2025-12-05 08:17:32.197323213 +0000 UTC m=+0.086218525 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, version=17.1.12, config_id=tripleo_step3, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 5 03:17:32 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:17:33 localhost sshd[63437]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:17:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:17:37 localhost podman[63439]: 2025-12-05 08:17:37.202342986 +0000 UTC m=+0.089854149 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container) Dec 5 03:17:37 localhost podman[63439]: 2025-12-05 08:17:37.414136076 +0000 UTC m=+0.301647179 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, vcs-type=git, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Dec 5 03:17:37 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:18:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. 
Dec 5 03:18:02 localhost podman[63544]: 2025-12-05 08:18:02.216396643 +0000 UTC m=+0.105606392 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, config_id=tripleo_step3, release=1761123044, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 5 03:18:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. 
Dec 5 03:18:02 localhost podman[63544]: 2025-12-05 08:18:02.255761323 +0000 UTC m=+0.144971012 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step3, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-18T22:51:28Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:18:02 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:18:02 localhost systemd[1]: tmp-crun.VxJmcw.mount: Deactivated successfully. 
Dec 5 03:18:02 localhost podman[63565]: 2025-12-05 08:18:02.33182773 +0000 UTC m=+0.082155918 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:18:02 localhost podman[63565]: 2025-12-05 08:18:02.370694375 +0000 UTC m=+0.121022533 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, release=1761123044) Dec 5 03:18:02 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:18:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:18:08 localhost podman[63584]: 2025-12-05 08:18:08.195986247 +0000 UTC m=+0.087265759 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, config_id=tripleo_step1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 5 03:18:08 localhost podman[63584]: 2025-12-05 08:18:08.425399647 +0000 UTC m=+0.316679189 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:18:08 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:18:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:18:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. 
Dec 5 03:18:33 localhost podman[63613]: 2025-12-05 08:18:33.192483236 +0000 UTC m=+0.079827245 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:18:33 localhost podman[63613]: 2025-12-05 08:18:33.204748589 +0000 UTC m=+0.092092588 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
architecture=x86_64, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 5 03:18:33 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:18:33 localhost podman[63614]: 2025-12-05 08:18:33.260557553 +0000 UTC m=+0.142978778 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.expose-services=) Dec 5 03:18:33 localhost podman[63614]: 2025-12-05 08:18:33.272750424 +0000 UTC m=+0.155171649 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
release=1761123044, distribution-scope=public, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc.) Dec 5 03:18:33 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:18:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:18:39 localhost systemd[1]: tmp-crun.eGWBQI.mount: Deactivated successfully. Dec 5 03:18:39 localhost podman[63652]: 2025-12-05 08:18:39.206228847 +0000 UTC m=+0.091734838 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=metrics_qdr, release=1761123044, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true) Dec 5 03:18:39 localhost podman[63652]: 2025-12-05 08:18:39.433421458 +0000 UTC m=+0.318927449 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
distribution-scope=public, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Dec 5 03:18:39 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:19:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:19:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:19:04 localhost systemd[1]: tmp-crun.WKX3rz.mount: Deactivated successfully. 
Dec 5 03:19:04 localhost podman[63758]: 2025-12-05 08:19:04.193727423 +0000 UTC m=+0.084320586 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-iscsid-container, release=1761123044, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Dec 5 03:19:04 localhost systemd[1]: tmp-crun.Q1w8aK.mount: Deactivated successfully. 
Dec 5 03:19:04 localhost podman[63758]: 2025-12-05 08:19:04.23139969 +0000 UTC m=+0.121992783 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team) Dec 5 03:19:04 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
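The config_data payload repeated on each of these events records the container definition that tripleo_ansible handed to podman: image, net=host, the privileged flag, restart policy, bind mounts, and the '/openstack/healthcheck' probe. As a rough, partial illustration of how such a dict corresponds to podman options (an assumption for clarity; the real translation is done by the TripleO tooling and handles more fields, e.g. start_order is an orchestration-level ordering hint with no podman flag):

def podman_args(name, cfg):
    """Sketch: map a tripleo-style config_data dict onto `podman create`
    arguments. Field names come from the log above; the mapping itself is
    illustrative, not the deployed code path."""
    args = ["podman", "create", "--name", name]
    if cfg.get("net"):
        args += ["--network", cfg["net"]]              # 'net': 'host'
    if cfg.get("privileged"):
        args.append("--privileged")                    # 'privileged': True
    if cfg.get("restart"):
        args += ["--restart", cfg["restart"]]          # 'restart': 'always'
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    if "healthcheck" in cfg:
        args += ["--health-cmd", cfg["healthcheck"]["test"]]
    for vol in cfg.get("volumes", []):                 # 'src:dst[:opts]'
        args += ["--volume", vol]
    args.append(cfg["image"])
    return args

# Abbreviated iscsid config_data from the record above (volumes truncated):
iscsid_cfg = {
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "healthcheck": {"test": "/openstack/healthcheck"},
    "image": "registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1",
    "net": "host",
    "privileged": True,
    "restart": "always",
    "volumes": ["/etc/hosts:/etc/hosts:ro", "/dev:/dev", "/run:/run"],
}
print(" ".join(podman_args("iscsid", iscsid_cfg)))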
Dec 5 03:19:04 localhost podman[63759]: 2025-12-05 08:19:04.237381217 +0000 UTC m=+0.123877543 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step3, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, vcs-type=git) Dec 5 03:19:04 localhost podman[63759]: 2025-12-05 08:19:04.317477381 +0000 UTC m=+0.203973707 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step3, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 5 03:19:04 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:19:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
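The "Started /usr/bin/podman healthcheck run <id>" records show that the probes are driven from outside the container: podman registers a transient systemd service/timer pair per container (hence the "<cid>.service: Deactivated successfully" lines), and each firing simply invokes podman healthcheck run against the container ID. The same probe can be triggered by hand; a small wrapper, assuming the metrics_qdr container ID from this log is still present on the host:

import subprocess

CID = "a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668"

# `podman healthcheck run` executes the container's configured test
# ('/openstack/healthcheck' per the config_data above) and exits 0 when
# the check passes, non-zero otherwise.
result = subprocess.run(["podman", "healthcheck", "run", CID])
print("healthy" if result.returncode == 0 else f"failed (rc={result.returncode})")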
Dec 5 03:19:10 localhost podman[63799]: 2025-12-05 08:19:10.190637117 +0000 UTC m=+0.082633234 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:19:10 localhost podman[63799]: 2025-12-05 08:19:10.380788359 +0000 UTC m=+0.272784466 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, vcs-type=git, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, architecture=x86_64, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 5 03:19:10 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:19:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:19:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:19:35 localhost systemd[1]: tmp-crun.vZ7xXp.mount: Deactivated successfully. 
Dec 5 03:19:35 localhost podman[63826]: 2025-12-05 08:19:35.202289522 +0000 UTC m=+0.087219277 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step3, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, distribution-scope=public) Dec 5 03:19:35 localhost podman[63826]: 2025-12-05 08:19:35.236517371 +0000 UTC m=+0.121471837 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-iscsid-container) Dec 5 03:19:35 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:19:35 localhost podman[63827]: 2025-12-05 08:19:35.247835395 +0000 UTC m=+0.131275224 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, tcib_managed=true, container_name=collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, version=17.1.12) Dec 5 03:19:35 localhost podman[63827]: 2025-12-05 08:19:35.331714417 +0000 UTC m=+0.215154306 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, container_name=collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 5 03:19:35 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:19:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:19:41 localhost podman[63866]: 2025-12-05 08:19:41.189548485 +0000 UTC m=+0.078067511 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64) Dec 5 03:19:41 localhost podman[63866]: 2025-12-05 08:19:41.390720203 +0000 UTC m=+0.279239229 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Dec 5 03:19:41 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:20:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:20:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:20:06 localhost systemd[1]: tmp-crun.kAySrf.mount: Deactivated successfully. 
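Note also what does not change between probes: each event re-prints the container image's entire label set (vendor, release=1761123044, vcs-ref, config_data, and so on), so nearly all of the volume here is constant metadata; only the timestamp, podman PID, and m= monotonic offset differ per event. A quick way to confirm that, given (container_id, payload) pairs from a parser like the sketch earlier (hypothetical plumbing, not shown in the log itself):

from collections import defaultdict
from hashlib import sha256

def label_digest(payload):
    """Digest the '(...)' payload with its ', '-separated fragments sorted,
    so label dumps that differ only in ordering compare equal. Crude for the
    nested config_data value, but deterministic since podman prints the same
    dict text on every event."""
    fragments = sorted(p.strip() for p in payload.split(", "))
    return sha256("|".join(fragments).encode()).hexdigest()[:12]

def constant_labels(events):
    digests = defaultdict(set)
    for cid, payload in events:
        digests[cid].add(label_digest(payload))
    # True per container when every event carried an identical label set.
    return {cid: len(seen) == 1 for cid, seen in digests.items()}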
Dec 5 03:20:06 localhost podman[63974]: 2025-12-05 08:20:06.207538061 +0000 UTC m=+0.090610924 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-iscsid-container, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:20:06 localhost podman[63974]: 2025-12-05 08:20:06.24463379 +0000 UTC m=+0.127706703 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, config_id=tripleo_step3, distribution-scope=public, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z) Dec 5 03:20:06 localhost podman[63975]: 2025-12-05 08:20:06.25360727 +0000 UTC m=+0.132770580 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, name=rhosp17/openstack-collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 5 03:20:06 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:20:06 localhost podman[63975]: 2025-12-05 08:20:06.267580497 +0000 UTC m=+0.146743827 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:20:06 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:20:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:20:12 localhost podman[64014]: 2025-12-05 08:20:12.208981217 +0000 UTC m=+0.092785991 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc.) 
Dec 5 03:20:12 localhost podman[64014]: 2025-12-05 08:20:12.430236683 +0000 UTC m=+0.314041457 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, architecture=x86_64) Dec 5 03:20:12 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:20:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:20:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. 
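The cadence across this excerpt is regular: metrics_qdr is probed at 03:18:39, 03:19:10, 03:19:41, 03:20:12 and 03:20:43 local time, with iscsid and collectd probed together on the same spacing (03:19:04, 03:19:35, 03:20:06, 03:20:37), i.e. one probe per container roughly every 31 seconds. Checking that against the UTC timestamps podman prints for the five metrics_qdr health_status events:

from datetime import datetime

# UTC timestamps of the metrics_qdr health_status events in this excerpt.
probes = [
    "2025-12-05 08:18:39.206228847",
    "2025-12-05 08:19:10.190637117",
    "2025-12-05 08:19:41.189548485",
    "2025-12-05 08:20:12.208981217",
    "2025-12-05 08:20:43.207474802",
]

def to_dt(stamp):
    head, frac = stamp.rsplit(".", 1)
    return datetime.fromisoformat(f"{head}.{frac[:6]}")  # trim ns to µs

deltas = [(to_dt(b) - to_dt(a)).total_seconds() for a, b in zip(probes, probes[1:])]
print([round(d, 2) for d in deltas])  # [30.98, 31.0, 31.02, 31.0]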
Dec 5 03:20:37 localhost podman[64043]: 2025-12-05 08:20:37.192325175 +0000 UTC m=+0.080217280 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public) Dec 5 03:20:37 localhost podman[64044]: 2025-12-05 08:20:37.252458066 +0000 UTC m=+0.134596863 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, 
version=17.1.12, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:20:37 localhost podman[64044]: 2025-12-05 08:20:37.263689342 +0000 UTC m=+0.145828159 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, url=https://www.redhat.com) Dec 5 03:20:37 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:20:37 localhost podman[64043]: 2025-12-05 08:20:37.282848811 +0000 UTC m=+0.170740886 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack 
Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, architecture=x86_64, com.redhat.component=openstack-iscsid-container) Dec 5 03:20:37 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:20:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:20:43 localhost podman[64083]: 2025-12-05 08:20:43.207474802 +0000 UTC m=+0.089874285 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.openshift.expose-services=, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, architecture=x86_64, tcib_managed=true, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:20:43 localhost podman[64083]: 2025-12-05 08:20:43.407191099 +0000 UTC m=+0.289590582 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true) Dec 5 03:20:43 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:21:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:21:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:21:08 localhost systemd[1]: tmp-crun.YX4DTo.mount: Deactivated successfully. 
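[Editor's note] The recurring health_status / exec_died pairs above, each closed by a `<container-id>.service: Deactivated successfully` line, are the footprint of systemd-scheduled podman healthchecks: a transient unit runs `/usr/bin/podman healthcheck run <container-id>`, podman executes the container's configured test (`/openstack/healthcheck` in the config_data labels) and logs the health_status event, and the transient unit deactivates once the exec exits. A minimal sketch of reproducing one check by hand, assuming the podman CLI is on PATH and using container names taken from the log:

```python
import subprocess

def check_container_health(name: str) -> bool:
    """Run the container's configured healthcheck once.

    `podman healthcheck run` exits 0 when the check passes and
    non-zero when it fails, matching the health_status=healthy
    events recorded in the log above.
    """
    result = subprocess.run(
        ["podman", "healthcheck", "run", name],
        capture_output=True, text=True,
    )
    return result.returncode == 0

# Container names taken from the log records above.
for container in ("collectd", "iscsid", "metrics_qdr"):
    status = "healthy" if check_container_health(container) else "unhealthy"
    print(f"{container}: {status}")
```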
Dec 5 03:21:08 localhost podman[64242]: 2025-12-05 08:21:08.213414373 +0000 UTC m=+0.090400832 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step3, distribution-scope=public, version=17.1.12, container_name=collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:21:08 localhost podman[64242]: 2025-12-05 08:21:08.219031297 +0000 UTC m=+0.096017696 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step3, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack 
TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-collectd-container, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1) Dec 5 03:21:08 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:21:08 localhost podman[64241]: 2025-12-05 08:21:08.305591567 +0000 UTC m=+0.182556595 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-iscsid, tcib_managed=true, release=1761123044, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:21:08 localhost podman[64241]: 2025-12-05 08:21:08.316151425 +0000 UTC m=+0.193116463 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Dec 5 03:21:08 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:21:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
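[Editor's note] The `config_data={...}` blobs embedded in every podman event above are Python dict literals (note the single quotes and bare `True`), so they can be recovered programmatically once sliced out of the comma-separated label string. A small sketch, using a truncated excerpt of the iscsid literal from the log rather than the full value:

```python
import ast

# Truncated excerpt of the iscsid config_data literal from the log above.
config_data = ast.literal_eval(
    "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
    "'healthcheck': {'test': '/openstack/healthcheck'}, "
    "'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', "
    "'net': 'host', 'privileged': True, 'restart': 'always', "
    "'volumes': ['/etc/hosts:/etc/hosts:ro', '/dev:/dev', '/run:/run']}"
)

print(config_data["healthcheck"]["test"])   # /openstack/healthcheck
for volume in config_data["volumes"]:       # the container's bind mounts
    print(volume)
```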
Dec 5 03:21:14 localhost podman[64278]: 2025-12-05 08:21:14.186934937 +0000 UTC m=+0.079752812 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, io.openshift.expose-services=, tcib_managed=true) Dec 5 03:21:14 localhost podman[64278]: 2025-12-05 08:21:14.379710148 +0000 UTC m=+0.272528023 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:21:14 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:21:18 localhost python3[64354]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:21:19 localhost python3[64399]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922878.7170198-107425-117067263896564/source _original_basename=tmpqubp5dlh follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:21:20 localhost python3[64461]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:21:20 localhost python3[64504]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922879.9378242-107570-174968296360832/source _original_basename=tmpfbevm4t0 follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:21:21 localhost python3[64566]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 
get_md5=False get_mime=True get_attributes=True Dec 5 03:21:21 localhost python3[64609]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922880.7944224-107623-188680075198206/source _original_basename=tmp7yx0bpds follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:21:22 localhost python3[64671]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:21:22 localhost python3[64714]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922881.7613716-107687-221288546550566/source _original_basename=tmpnvlz5acg follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:21:23 localhost python3[64744]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 5 03:21:23 localhost systemd[1]: Reloading. Dec 5 03:21:23 localhost systemd-rc-local-generator[64772]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:21:23 localhost systemd-sysv-generator[64775]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:21:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:21:23 localhost systemd[1]: Reloading. Dec 5 03:21:23 localhost systemd-rc-local-generator[64806]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:21:23 localhost systemd-sysv-generator[64810]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:21:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:21:24 localhost python3[64834]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:21:24 localhost systemd[1]: Reloading. Dec 5 03:21:24 localhost systemd-sysv-generator[64865]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
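[Editor's note] Each `ansible-ansible.legacy.stat` / `ansible-ansible.legacy.copy` pair above is Ansible's idempotent file deployment: stat computes the SHA-1 of the destination (`get_checksum=True checksum_algorithm=sha1`), and copy only rewrites the file when that checksum differs from the staged source's. A rough sketch of the same decision, with a hypothetical staging path standing in for the elided ansible-tmp source:

```python
import hashlib
import shutil
from pathlib import Path

def sha1_of(path: Path) -> str | None:
    """SHA-1 of a file, or None if it does not exist (mirrors get_checksum)."""
    if not path.exists():
        return None
    return hashlib.sha1(path.read_bytes()).hexdigest()

def copy_if_changed(src: Path, dest: Path, mode: int) -> bool:
    """Copy src over dest only when checksums differ; True means changed."""
    if sha1_of(src) == sha1_of(dest):
        return False                      # already in the desired state
    shutil.copy2(src, dest)
    dest.chmod(mode)
    return True

# Hypothetical staging path; the real source lives under ansible-tmp above.
changed = copy_if_changed(
    Path("/tmp/staged_config_step.json"),
    Path("/etc/puppet/hieradata/config_step.json"),
    0o600,
)
print("changed" if changed else "ok")
```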
Dec 5 03:21:24 localhost systemd-rc-local-generator[64860]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:21:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:21:24 localhost systemd[1]: Reloading. Dec 5 03:21:24 localhost systemd-rc-local-generator[64896]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:21:24 localhost systemd-sysv-generator[64900]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:21:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:21:24 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m. Dec 5 03:21:25 localhost python3[64926]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:21:25 localhost systemd[1]: Reloading. Dec 5 03:21:25 localhost systemd-sysv-generator[64955]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:21:25 localhost systemd-rc-local-generator[64950]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:21:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:21:25 localhost python3[65010]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:21:26 localhost python3[65053]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922885.552252-107779-150013264563459/source _original_basename=tmp95_rone0 follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:21:26 localhost python3[65083]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:21:26 localhost systemd[1]: Reloading. Dec 5 03:21:26 localhost systemd-sysv-generator[65113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:21:26 localhost systemd-rc-local-generator[65109]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 5 03:21:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:21:27 localhost systemd[1]: Reached target tripleo_nova_libvirt.target. Dec 5 03:21:27 localhost python3[65138]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:21:28 localhost ansible-async_wrapper.py[65310]: Invoked with 620972684905 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922888.3985407-107860-189439304931828/AnsiballZ_command.py _ Dec 5 03:21:28 localhost ansible-async_wrapper.py[65313]: Starting module and watcher Dec 5 03:21:28 localhost ansible-async_wrapper.py[65313]: Start watching 65314 (3600) Dec 5 03:21:28 localhost ansible-async_wrapper.py[65314]: Start module (65314) Dec 5 03:21:28 localhost ansible-async_wrapper.py[65310]: Return async_wrapper task started. Dec 5 03:21:29 localhost python3[65332]: ansible-ansible.legacy.async_status Invoked with jid=620972684905.65310 mode=status _async_dir=/tmp/.ansible_async Dec 5 03:21:33 localhost puppet-user[65334]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 5 03:21:33 localhost puppet-user[65334]: (file: /etc/puppet/hiera.yaml) Dec 5 03:21:33 localhost puppet-user[65334]: Warning: Undefined variable '::deploy_config_name'; Dec 5 03:21:33 localhost puppet-user[65334]: (file & line not available) Dec 5 03:21:33 localhost puppet-user[65334]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 5 03:21:33 localhost puppet-user[65334]: (file & line not available) Dec 5 03:21:33 localhost puppet-user[65334]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Dec 5 03:21:33 localhost puppet-user[65334]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 5 03:21:33 localhost puppet-user[65334]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 5 03:21:33 localhost puppet-user[65334]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 5 03:21:33 localhost puppet-user[65334]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 5 03:21:33 localhost puppet-user[65334]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 5 03:21:33 localhost puppet-user[65334]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 5 03:21:33 localhost puppet-user[65334]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. 
at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 5 03:21:33 localhost puppet-user[65334]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 5 03:21:33 localhost puppet-user[65334]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 5 03:21:33 localhost puppet-user[65334]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 5 03:21:33 localhost puppet-user[65334]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 5 03:21:33 localhost puppet-user[65334]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 5 03:21:33 localhost puppet-user[65334]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 5 03:21:33 localhost puppet-user[65334]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 5 03:21:33 localhost puppet-user[65334]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 5 03:21:33 localhost puppet-user[65334]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 5 03:21:33 localhost puppet-user[65334]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 5 03:21:33 localhost puppet-user[65334]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Dec 5 03:21:33 localhost puppet-user[65334]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.23 seconds Dec 5 03:21:33 localhost ansible-async_wrapper.py[65313]: 65314 still running (3600) Dec 5 03:21:38 localhost ansible-async_wrapper.py[65313]: 65314 still running (3595) Dec 5 03:21:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:21:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. 
Dec 5 03:21:39 localhost podman[65476]: 2025-12-05 08:21:39.215481938 +0000 UTC m=+0.092265729 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1761123044, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, vcs-type=git, config_id=tripleo_step3, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com) Dec 5 03:21:39 localhost podman[65476]: 2025-12-05 08:21:39.24879037 +0000 UTC m=+0.125574161 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:21:39 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:21:39 localhost podman[65475]: 2025-12-05 08:21:39.268370657 +0000 UTC m=+0.144690023 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid) Dec 5 03:21:39 localhost podman[65475]: 2025-12-05 08:21:39.30461183 +0000 UTC m=+0.180931136 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, tcib_managed=true, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 
17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:21:39 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:21:39 localhost python3[65545]: ansible-ansible.legacy.async_status Invoked with jid=620972684905.65310 mode=status _async_dir=/tmp/.ansible_async Dec 5 03:21:41 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 5 03:21:41 localhost systemd[1]: Starting man-db-cache-update.service... Dec 5 03:21:41 localhost systemd[1]: Reloading. Dec 5 03:21:42 localhost systemd-sysv-generator[65667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:21:42 localhost systemd-rc-local-generator[65662]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:21:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:21:42 localhost systemd[1]: Queuing reload/restart jobs for marked units… Dec 5 03:21:43 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 5 03:21:43 localhost systemd[1]: Finished man-db-cache-update.service. Dec 5 03:21:43 localhost systemd[1]: man-db-cache-update.service: Consumed 1.362s CPU time. Dec 5 03:21:43 localhost systemd[1]: run-r64a3033fda174c23bebbb37ea5c5c430.service: Deactivated successfully. 
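[Editor's note] The ansible-async_wrapper.py records show a long-running task (job id 620972684905.65310, one-hour timeout, "still running" countdown) being launched and then polled with async_status against `_async_dir=/tmp/.ansible_async`. The wrapper writes a JSON status file named after the job id in that directory; a minimal polling sketch under that assumption, where the `finished` key is what async_status uses to decide the job is done:

```python
import json
import time
from pathlib import Path

ASYNC_DIR = Path("/tmp/.ansible_async")   # _async_dir from the log above

def wait_for_job(jid: str, timeout: float = 3600.0, interval: float = 5.0) -> dict:
    """Poll the async status file until the job reports finished."""
    status_file = ASYNC_DIR / jid
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if status_file.exists():
            status = json.loads(status_file.read_text())
            if status.get("finished"):
                return status
        time.sleep(interval)
    raise TimeoutError(f"job {jid} still running after {timeout}s")

# Job id taken from the log records above.
result = wait_for_job("620972684905.65310")
print(result.get("rc"), result.get("stdout", "")[:200])
```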
Dec 5 03:21:43 localhost puppet-user[65334]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created
Dec 5 03:21:43 localhost puppet-user[65334]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}8ca07ac75f5bca23051c087a8a5d4f7831d895c4167c6e73d7484f51b777bf7f'
Dec 5 03:21:43 localhost puppet-user[65334]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd'
Dec 5 03:21:43 localhost puppet-user[65334]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea'
Dec 5 03:21:43 localhost puppet-user[65334]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97'
Dec 5 03:21:43 localhost puppet-user[65334]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events
Dec 5 03:21:43 localhost ansible-async_wrapper.py[65313]: 65314 still running (3590)
Dec 5 03:21:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:21:45 localhost podman[66677]: 2025-12-05 08:21:45.208802055 +0000 UTC m=+0.087600225 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 03:21:45 localhost podman[66677]: 2025-12-05 08:21:45.4029845 +0000 UTC m=+0.281782670 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, architecture=x86_64, release=1761123044, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 5 03:21:45 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:21:48 localhost puppet-user[65334]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully
Dec 5 03:21:48 localhost systemd[1]: Reloading.
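The "Started /usr/bin/podman healthcheck run <id>" records above come from the transient systemd units podman sets up for containers that define a healthcheck; the same check can be exercised by hand. A minimal sketch, using the metrics_qdr container named in the events above:

    # Run the container's configured healthcheck once; exit 0 means healthy
    sudo podman healthcheck run metrics_qdr; echo "exit=$?"
    # The health state is also folded into the container status column
    sudo podman ps --filter name=metrics_qdr --format '{{.Names}} {{.Status}}'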
Dec 5 03:21:48 localhost systemd-sysv-generator[66739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:21:48 localhost systemd-rc-local-generator[66736]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:21:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:21:48 localhost ansible-async_wrapper.py[65313]: 65314 still running (3585)
Dec 5 03:21:49 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
Dec 5 03:21:49 localhost snmpd[66746]: Can't find directory of RPM packages
Dec 5 03:21:49 localhost snmpd[66746]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB
Dec 5 03:21:49 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
Dec 5 03:21:49 localhost systemd[1]: Reloading.
Dec 5 03:21:49 localhost systemd-rc-local-generator[66772]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:21:49 localhost systemd-sysv-generator[66775]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:21:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:21:49 localhost systemd[1]: Reloading.
Dec 5 03:21:49 localhost systemd-sysv-generator[66831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:21:49 localhost systemd-rc-local-generator[66828]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:21:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
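The recurring insights-client-boot.service warning is systemd flagging a deprecated directive in the packaged unit; a drop-in override silences it without editing the RPM-owned file. A minimal sketch, assuming the unit currently sets MemoryLimit=512M (the value is illustrative, not taken from this log):

    sudo mkdir -p /etc/systemd/system/insights-client-boot.service.d
    sudo tee /etc/systemd/system/insights-client-boot.service.d/10-memorymax.conf <<'EOF'
    [Service]
    # Clear the deprecated directive, then set its replacement
    MemoryLimit=
    MemoryMax=512M
    EOF
    sudo systemctl daemon-reload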
Dec 5 03:21:49 localhost puppet-user[65334]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'
Dec 5 03:21:49 localhost puppet-user[65334]: Notice: Applied catalog in 16.37 seconds
Dec 5 03:21:49 localhost puppet-user[65334]: Application:
Dec 5 03:21:49 localhost puppet-user[65334]: Initial environment: production
Dec 5 03:21:49 localhost puppet-user[65334]: Converged environment: production
Dec 5 03:21:49 localhost puppet-user[65334]: Run mode: user
Dec 5 03:21:49 localhost puppet-user[65334]: Changes:
Dec 5 03:21:49 localhost puppet-user[65334]: Total: 8
Dec 5 03:21:49 localhost puppet-user[65334]: Events:
Dec 5 03:21:49 localhost puppet-user[65334]: Success: 8
Dec 5 03:21:49 localhost puppet-user[65334]: Total: 8
Dec 5 03:21:49 localhost puppet-user[65334]: Resources:
Dec 5 03:21:49 localhost puppet-user[65334]: Restarted: 1
Dec 5 03:21:49 localhost puppet-user[65334]: Changed: 8
Dec 5 03:21:49 localhost puppet-user[65334]: Out of sync: 8
Dec 5 03:21:49 localhost puppet-user[65334]: Total: 19
Dec 5 03:21:49 localhost puppet-user[65334]: Time:
Dec 5 03:21:49 localhost puppet-user[65334]: Filebucket: 0.00
Dec 5 03:21:49 localhost puppet-user[65334]: Schedule: 0.00
Dec 5 03:21:49 localhost puppet-user[65334]: Augeas: 0.01
Dec 5 03:21:49 localhost puppet-user[65334]: File: 0.09
Dec 5 03:21:49 localhost puppet-user[65334]: Config retrieval: 0.29
Dec 5 03:21:49 localhost puppet-user[65334]: Service: 1.28
Dec 5 03:21:49 localhost puppet-user[65334]: Transaction evaluation: 16.36
Dec 5 03:21:49 localhost puppet-user[65334]: Catalog application: 16.37
Dec 5 03:21:49 localhost puppet-user[65334]: Last run: 1764922909
Dec 5 03:21:49 localhost puppet-user[65334]: Exec: 5.07
Dec 5 03:21:49 localhost puppet-user[65334]: Package: 9.75
Dec 5 03:21:49 localhost puppet-user[65334]: Total: 16.37
Dec 5 03:21:49 localhost puppet-user[65334]: Version:
Dec 5 03:21:49 localhost puppet-user[65334]: Config: 1764922893
Dec 5 03:21:49 localhost puppet-user[65334]: Puppet: 7.10.0
Dec 5 03:21:50 localhost ansible-async_wrapper.py[65314]: Module complete (65314)
Dec 5 03:21:50 localhost python3[66836]: ansible-ansible.legacy.async_status Invoked with jid=620972684905.65310 mode=status _async_dir=/tmp/.ansible_async
Dec 5 03:21:50 localhost python3[66852]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 5 03:21:51 localhost python3[66868]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 03:21:51 localhost python3[66918]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:21:51 localhost python3[66936]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpif1rj051 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 5 03:21:52 localhost python3[66966]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:21:53 localhost python3[67069]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 5 03:21:53 localhost ansible-async_wrapper.py[65313]: Done in kid B.
Dec 5 03:21:54 localhost python3[67088]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:21:54 localhost python3[67120]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 5 03:21:55 localhost python3[67170]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:21:55 localhost python3[67188]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:21:56 localhost python3[67250]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:21:56 localhost python3[67268]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:21:57 localhost python3[67330]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:21:57 localhost python3[67348]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:21:57 localhost python3[67440]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:21:58 localhost python3[67472]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:21:58 localhost python3[67519]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 03:21:58 localhost systemd[1]: Reloading.
Dec 5 03:21:58 localhost systemd-sysv-generator[67550]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:21:58 localhost systemd-rc-local-generator[67546]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:21:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
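Taken together, the ansible-ansible.legacy.file and ansible-systemd tasks above amount to the usual install-script-plus-unit-plus-preset sequence. Sketched by hand, with the file names and modes as logged and the source files assumed to sit in the current directory:

    sudo install -m 0700 tripleo-container-shutdown /usr/libexec/
    sudo install -m 0644 tripleo-container-shutdown.service /usr/lib/systemd/system/
    sudo install -m 0644 91-tripleo-container-shutdown.preset /usr/lib/systemd/system-preset/
    sudo systemctl daemon-reload
    sudo systemctl enable --now tripleo-container-shutdown.service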
Dec 5 03:21:59 localhost python3[67606]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:21:59 localhost python3[67624]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:22:00 localhost python3[67686]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 5 03:22:00 localhost python3[67704]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:22:00 localhost python3[67734]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 03:22:00 localhost systemd[1]: Reloading.
Dec 5 03:22:00 localhost systemd-rc-local-generator[67758]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 03:22:00 localhost systemd-sysv-generator[67761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 03:22:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:22:01 localhost systemd[1]: Starting Create netns directory...
Dec 5 03:22:01 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 5 03:22:01 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 5 03:22:01 localhost systemd[1]: Finished Create netns directory.
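Whether the placeholder did its job can be checked directly; a small sketch (the /run/netns path is inferred from the run-netns-placeholder.mount name above, not stated explicitly in the log):

    # The unit should leave a mount point for network namespaces behind
    findmnt /run/netns
    ls -ld /run/netns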
Dec 5 03:22:01 localhost python3[67807]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 5 03:22:03 localhost python3[67865]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 5 03:22:03 localhost podman[68003]: 2025-12-05 08:22:03.560285842 +0000 UTC m=+0.066841531 container create 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:22:03 localhost podman[68043]: 2025-12-05 08:22:03.591728977 +0000 UTC m=+0.066224423 container create 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64) Dec 5 03:22:03 localhost systemd[1]: Started libpod-conmon-21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.scope. Dec 5 03:22:03 localhost systemd[1]: Started libcrun container. Dec 5 03:22:03 localhost systemd[1]: Started libpod-conmon-808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.scope. Dec 5 03:22:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3992ab45cf22014f9d9042a8d9aa0064890f6d40c0b46ba7788c5c2c544ad5/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:03 localhost podman[68003]: 2025-12-05 08:22:03.522419839 +0000 UTC m=+0.028975558 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Dec 5 03:22:03 localhost systemd[1]: Started libcrun container. 
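Every create/init/start event above carries the full label set, including the config_data blob that tripleo_ansible attaches, and the same labels remain queryable on the running container. A minimal sketch for the logrotate_crond container just created:

    # Pull individual tripleo labels back out of the container metadata
    sudo podman inspect logrotate_crond --format '{{ index .Config.Labels "config_id" }}'
    sudo podman inspect logrotate_crond --format '{{ index .Config.Labels "managed_by" }}'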
Dec 5 03:22:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61edff4e636e5b27fc65056d8e3af9182b499697a247cb1b23f64e881b58c13e/merged/var/log/containers supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:03 localhost podman[68069]: 2025-12-05 08:22:03.635660327 +0000 UTC m=+0.089757302 container create 59de86045a73111c565b2c3622a3b92c7e74e9b1c9cb283f83e3523b04e2ccb3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, release=1761123044, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.expose-services=, container_name=configure_cms_options, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com) Dec 5 03:22:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. 
Dec 5 03:22:03 localhost podman[68003]: 2025-12-05 08:22:03.651355363 +0000 UTC m=+0.157911052 container init 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:22:03 localhost podman[68043]: 2025-12-05 08:22:03.55893167 +0000 UTC m=+0.033427136 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 5 03:22:03 localhost podman[68053]: 2025-12-05 08:22:03.665664336 +0000 UTC m=+0.131089891 container create bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc.) Dec 5 03:22:03 localhost systemd[1]: Started libpod-conmon-59de86045a73111c565b2c3622a3b92c7e74e9b1c9cb283f83e3523b04e2ccb3.scope. Dec 5 03:22:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. 
Dec 5 03:22:03 localhost podman[68043]: 2025-12-05 08:22:03.681445046 +0000 UTC m=+0.155940502 container init 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.4, container_name=logrotate_crond, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:22:03 localhost podman[68068]: 2025-12-05 08:22:03.584804622 +0000 UTC m=+0.040518286 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 5 03:22:03 localhost podman[68069]: 2025-12-05 08:22:03.589390494 +0000 UTC m=+0.043487469 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 5 03:22:03 localhost systemd[1]: Started libcrun container. Dec 5 03:22:03 localhost systemd[1]: Started libpod-conmon-bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.scope. 
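The libpod-conmon-<id>.scope and "libcrun container" records above are the transient cgroup units podman asks systemd to create around each container's conmon monitor and payload, so they can be examined like any other unit; for example, for the logrotate_crond container initialized above:

    systemctl status --no-pager libpod-conmon-808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.scope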
Dec 5 03:22:03 localhost podman[68068]: 2025-12-05 08:22:03.697670667 +0000 UTC m=+0.153384321 container create d77588dd67871a2a91b5a46b942c0a496ddfbd9710af2c688b5b77d104c11c89 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=nova_libvirt_init_secret, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:22:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. 
Dec 5 03:22:03 localhost podman[68069]: 2025-12-05 08:22:03.702023463 +0000 UTC m=+0.156120408 container init 59de86045a73111c565b2c3622a3b92c7e74e9b1c9cb283f83e3523b04e2ccb3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=configure_cms_options, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, architecture=x86_64) Dec 5 03:22:03 localhost systemd[1]: Started libcrun container. Dec 5 03:22:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. 
Dec 5 03:22:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3249738e4d9421a1f001f331e5d9a6df3d763b74a379f4e69853dd9965f5c52/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:03 localhost podman[68069]: 2025-12-05 08:22:03.710939239 +0000 UTC m=+0.165036194 container start 59de86045a73111c565b2c3622a3b92c7e74e9b1c9cb283f83e3523b04e2ccb3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=configure_cms_options, vcs-type=git, release=1761123044, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64) Dec 5 03:22:03 localhost podman[68069]: 2025-12-05 08:22:03.711120334 +0000 UTC m=+0.165217289 container attach 59de86045a73111c565b2c3622a3b92c7e74e9b1c9cb283f83e3523b04e2ccb3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . 
external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, distribution-scope=public, io.openshift.expose-services=, container_name=configure_cms_options, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:22:03 localhost podman[68053]: 2025-12-05 08:22:03.6308988 +0000 UTC m=+0.096324385 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Dec 5 03:22:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:22:03 localhost podman[68053]: 2025-12-05 08:22:03.735457668 +0000 UTC m=+0.200883233 container init bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, version=17.1.12, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:22:03 localhost podman[68003]: 2025-12-05 08:22:03.754313453 +0000 UTC m=+0.260869142 container start 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, container_name=ceilometer_agent_compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:22:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:22:03 localhost podman[68053]: 2025-12-05 08:22:03.760627448 +0000 UTC m=+0.226052993 container start bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64) Dec 5 03:22:03 localhost python3[67865]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=36d3201998d10321ffa6261c2854a42f --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Dec 5 03:22:03 localhost python3[67865]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=36d3201998d10321ffa6261c2854a42f --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Dec 5 03:22:03 localhost ovs-vsctl[68191]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options Dec 5 03:22:03 localhost systemd[1]: libpod-59de86045a73111c565b2c3622a3b92c7e74e9b1c9cb283f83e3523b04e2ccb3.scope: Deactivated successfully. Dec 5 03:22:03 localhost podman[68069]: 2025-12-05 08:22:03.835702134 +0000 UTC m=+0.289799089 container died 59de86045a73111c565b2c3622a3b92c7e74e9b1c9cb283f83e3523b04e2ccb3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vcs-type=git, container_name=configure_cms_options, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) 
Dec 5 03:22:03 localhost podman[68162]: 2025-12-05 08:22:03.844015671 +0000 UTC m=+0.080340920 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:22:03 localhost podman[68043]: 2025-12-05 08:22:03.887758356 +0000 UTC m=+0.362253812 container start 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, 
vcs-type=git, name=rhosp17/openstack-cron, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64) Dec 5 03:22:03 localhost python3[67865]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 5 03:22:03 localhost podman[68140]: 2025-12-05 08:22:03.902847334 +0000 UTC m=+0.187468908 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, config_id=tripleo_step4, container_name=logrotate_crond, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:22:03 localhost podman[68140]: 2025-12-05 08:22:03.911683237 +0000 UTC m=+0.196304791 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-cron-container, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z) Dec 5 03:22:03 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 03:22:03 localhost podman[68162]: 2025-12-05 08:22:03.93504443 +0000 UTC m=+0.171369669 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Dec 5 03:22:03 localhost podman[68162]: unhealthy Dec 5 03:22:03 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:22:03 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Failed with result 'exit-code'. Dec 5 03:22:03 localhost systemd[1]: Started libpod-conmon-d77588dd67871a2a91b5a46b942c0a496ddfbd9710af2c688b5b77d104c11c89.scope. Dec 5 03:22:03 localhost systemd[1]: Started libcrun container. 
Dec 5 03:22:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e8030a61b73ae24115cd28491b7f862cc70b395fffe88ad09999abdf65e61e/merged/etc/nova supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e8030a61b73ae24115cd28491b7f862cc70b395fffe88ad09999abdf65e61e/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e8030a61b73ae24115cd28491b7f862cc70b395fffe88ad09999abdf65e61e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:03 localhost podman[68068]: 2025-12-05 08:22:03.980136067 +0000 UTC m=+0.435849731 container init d77588dd67871a2a91b5a46b942c0a496ddfbd9710af2c688b5b77d104c11c89 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, vcs-type=git, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_libvirt_init_secret, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, distribution-scope=public) Dec 5 03:22:03 localhost podman[68068]: 2025-12-05 08:22:03.988192747 +0000 UTC m=+0.443906421 container start 
d77588dd67871a2a91b5a46b942c0a496ddfbd9710af2c688b5b77d104c11c89 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_libvirt_init_secret, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Dec 5 03:22:03 localhost podman[68068]: 2025-12-05 08:22:03.992914073 +0000 UTC m=+0.448627737 container attach d77588dd67871a2a91b5a46b942c0a496ddfbd9710af2c688b5b77d104c11c89 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, container_name=nova_libvirt_init_secret, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64) Dec 5 03:22:04 localhost podman[68131]: 2025-12-05 08:22:04.00476235 +0000 UTC m=+0.298179697 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_compute) Dec 5 03:22:04 localhost podman[68196]: 2025-12-05 08:22:04.046320538 +0000 UTC m=+0.204693542 container cleanup 59de86045a73111c565b2c3622a3b92c7e74e9b1c9cb283f83e3523b04e2ccb3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, release=1761123044, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=configure_cms_options, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Dec 5 03:22:04 localhost systemd[1]: libpod-conmon-59de86045a73111c565b2c3622a3b92c7e74e9b1c9cb283f83e3523b04e2ccb3.scope: Deactivated successfully. Dec 5 03:22:04 localhost python3[67865]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764921170 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi Dec 5 03:22:04 localhost systemd[1]: libpod-d77588dd67871a2a91b5a46b942c0a496ddfbd9710af2c688b5b77d104c11c89.scope: Deactivated successfully. 
Dec 5 03:22:04 localhost podman[68068]: 2025-12-05 08:22:04.133629222 +0000 UTC m=+0.589342886 container died d77588dd67871a2a91b5a46b942c0a496ddfbd9710af2c688b5b77d104c11c89 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=nova_libvirt_init_secret, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vendor=Red Hat, Inc.) 
Dec 5 03:22:04 localhost podman[68131]: 2025-12-05 08:22:04.146411858 +0000 UTC m=+0.439829205 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ceilometer_agent_compute, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4) Dec 5 03:22:04 localhost podman[68131]: unhealthy Dec 5 03:22:04 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:22:04 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Failed with result 'exit-code'. 
Dec 5 03:22:04 localhost podman[68310]: 2025-12-05 08:22:04.249057797 +0000 UTC m=+0.108842382 container cleanup d77588dd67871a2a91b5a46b942c0a496ddfbd9710af2c688b5b77d104c11c89 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_libvirt_init_secret, vcs-type=git) Dec 5 03:22:04 localhost systemd[1]: libpod-conmon-d77588dd67871a2a91b5a46b942c0a496ddfbd9710af2c688b5b77d104c11c89.scope: Deactivated successfully. 
Dec 5 03:22:04 localhost python3[67865]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=f3fe7c52055154c7f97b988e301af0d7 --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack Dec 5 03:22:04 localhost podman[68404]: 2025-12-05 08:22:04.371969644 +0000 UTC m=+0.055106327 container create 11cf15259b26007d3e0dd6e023be48c0dc09f96791b25ed8ccc1e3e07f6a4b11 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': 
{'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 5 03:22:04 localhost systemd[1]: Started libpod-conmon-11cf15259b26007d3e0dd6e023be48c0dc09f96791b25ed8ccc1e3e07f6a4b11.scope. 
Dec 5 03:22:04 localhost podman[68421]: 2025-12-05 08:22:04.412643715 +0000 UTC m=+0.070172545 container create 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 5 03:22:04 localhost systemd[1]: Started libcrun container. 
Dec 5 03:22:04 localhost podman[68404]: 2025-12-05 08:22:04.425312727 +0000 UTC m=+0.108449410 container init 11cf15259b26007d3e0dd6e023be48c0dc09f96791b25ed8ccc1e3e07f6a4b11 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, release=1761123044) Dec 5 03:22:04 localhost podman[68404]: 2025-12-05 08:22:04.433204801 +0000 UTC m=+0.116341504 container start 11cf15259b26007d3e0dd6e023be48c0dc09f96791b25ed8ccc1e3e07f6a4b11 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, container_name=setup_ovs_manager, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=) Dec 5 03:22:04 localhost podman[68404]: 2025-12-05 08:22:04.433611834 +0000 UTC m=+0.116748517 container attach 11cf15259b26007d3e0dd6e023be48c0dc09f96791b25ed8ccc1e3e07f6a4b11 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=setup_ovs_manager, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible) Dec 5 03:22:04 localhost systemd[1]: Started libpod-conmon-94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.scope. Dec 5 03:22:04 localhost podman[68404]: 2025-12-05 08:22:04.343931136 +0000 UTC m=+0.027067839 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 5 03:22:04 localhost systemd[1]: Started libcrun container. Dec 5 03:22:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f7fa99b1a308b31b9c9168cd0ceb32b0fd11de133cd67585d173dee9412861/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:04 localhost podman[68421]: 2025-12-05 08:22:04.375004808 +0000 UTC m=+0.032533678 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 5 03:22:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:22:04 localhost podman[68421]: 2025-12-05 08:22:04.494674855 +0000 UTC m=+0.152203755 container init 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, architecture=x86_64) Dec 5 03:22:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
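
The nova_migration_target entries above show the kolla pattern used throughout this deployment: KOLLA_CONFIG_STRATEGY=COPY_ALWAYS plus a host file bind-mounted to /var/lib/kolla/config_files/config.json, which the image's entrypoint reads to copy configs into place and pick the service command. A minimal way to see what that container was actually told to run, assuming it is still present on this host (container name and mount path taken from the log above):

# Dump the kolla config the container was started with; the JSON is
# bind-mounted read-only from /var/lib/kolla/config_files/nova-migration-target.json.
podman exec nova_migration_target cat /var/lib/kolla/config_files/config.json | python3 -m json.tool
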
Dec 5 03:22:04 localhost podman[68421]: 2025-12-05 08:22:04.52033542 +0000 UTC m=+0.177864270 container start 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target) Dec 5 03:22:04 localhost python3[67865]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f3fe7c52055154c7f97b988e301af0d7 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 5 03:22:04 localhost podman[68460]: 2025-12-05 08:22:04.593856057 +0000 UTC m=+0.068883424 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, build-date=2025-11-19T00:36:58Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public) Dec 5 03:22:04 localhost podman[68460]: 2025-12-05 08:22:04.928502214 +0000 UTC m=+0.403529531 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:22:04 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. 
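
The health_status=starting / exec_died / "Deactivated successfully" sequence above is one complete healthcheck probe: systemd runs a transient unit that invokes the container's configured test (/openstack/healthcheck) and the unit exits with the probe's status. A sketch of reproducing one probe by hand, assuming podman on this host (the inspect field name varies by podman release):

# Fire the configured healthcheck once, exactly as the transient unit does;
# exit status 0 means healthy, nonzero means unhealthy.
podman healthcheck run nova_migration_target; echo "probe rc=$?"

# Read back the recorded health state; recent podman exposes it as
# .State.Health (older releases used .State.Healthcheck).
podman inspect --format '{{.State.Health.Status}}' nova_migration_target
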
Dec 5 03:22:05 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure Dec 5 03:22:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 03:22:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4466 writes, 20K keys, 4466 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4466 writes, 463 syncs, 9.65 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 72 writes, 111 keys, 72 commit groups, 1.0 writes per commit group, ingest: 0.03 MB, 0.00 MB/s#012Interval WAL: 72 writes, 36 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 5 03:22:07 localhost ovs-vsctl[68633]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager Dec 5 03:22:07 localhost systemd[1]: libpod-11cf15259b26007d3e0dd6e023be48c0dc09f96791b25ed8ccc1e3e07f6a4b11.scope: Deactivated successfully. Dec 5 03:22:07 localhost systemd[1]: libpod-11cf15259b26007d3e0dd6e023be48c0dc09f96791b25ed8ccc1e3e07f6a4b11.scope: Consumed 2.857s CPU time. Dec 5 03:22:07 localhost podman[68634]: 2025-12-05 08:22:07.371724464 +0000 UTC m=+0.062872178 container died 11cf15259b26007d3e0dd6e023be48c0dc09f96791b25ed8ccc1e3e07f6a4b11 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=setup_ovs_manager, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Dec 5 03:22:07 localhost systemd[1]: var-lib-containers-storage-overlay-497fd753a0e2e7e1090fcc93329b46322da8f6ccfbd35f9de3e0c667abfcc067-merged.mount: Deactivated successfully. Dec 5 03:22:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11cf15259b26007d3e0dd6e023be48c0dc09f96791b25ed8ccc1e3e07f6a4b11-userdata-shm.mount: Deactivated successfully. Dec 5 03:22:07 localhost podman[68634]: 2025-12-05 08:22:07.41746065 +0000 UTC m=+0.108608334 container cleanup 11cf15259b26007d3e0dd6e023be48c0dc09f96791b25ed8ccc1e3e07f6a4b11 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, distribution-scope=public, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager) Dec 5 03:22:07 localhost systemd[1]: libpod-conmon-11cf15259b26007d3e0dd6e023be48c0dc09f96791b25ed8ccc1e3e07f6a4b11.scope: Deactivated successfully. Dec 5 03:22:07 localhost python3[67865]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764921170 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764921170'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata Dec 5 03:22:07 localhost podman[68749]: 2025-12-05 08:22:07.917155889 +0000 UTC m=+0.085676345 container create f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, 
tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 5 03:22:07 localhost systemd[1]: Started libpod-conmon-f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.scope. Dec 5 03:22:07 localhost podman[68748]: 2025-12-05 08:22:07.977969862 +0000 UTC m=+0.147737787 container create 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:22:07 localhost systemd[1]: Started libcrun container. Dec 5 03:22:07 localhost podman[68748]: 2025-12-05 08:22:07.879276345 +0000 UTC m=+0.049044300 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 5 03:22:07 localhost podman[68749]: 2025-12-05 08:22:07.880029608 +0000 UTC m=+0.048550044 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 5 03:22:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4cc9caec78c737aaef1b1328c59bbfb368860f96b8588be11088d5655a4f2a7/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4cc9caec78c737aaef1b1328c59bbfb368860f96b8588be11088d5655a4f2a7/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4cc9caec78c737aaef1b1328c59bbfb368860f96b8588be11088d5655a4f2a7/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:08 localhost systemd[1]: Started libpod-conmon-0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.scope. Dec 5 03:22:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
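
Earlier, at 03:22:07, the setup_ovs_manager puppet run called ovs-vsctl to create a Manager record for ptcp:6640:127.0.0.1, i.e. a passive TCP OVSDB listener on loopback that ovn_metadata_agent can reach. A hedged sketch of checking or recreating that state by hand:

# Show the manager targets the puppet-driven 'create Manager' produced.
ovs-vsctl get-manager        # expect: ptcp:6640:127.0.0.1

# Roughly equivalent one-liner (set-manager replaces manager_options
# rather than appending, so it should be safe to re-run).
ovs-vsctl set-manager ptcp:6640:127.0.0.1
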
Dec 5 03:22:08 localhost podman[68749]: 2025-12-05 08:22:08.022803471 +0000 UTC m=+0.191323907 container init f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=ovn_controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:22:08 localhost systemd[1]: Started libcrun container. Dec 5 03:22:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a92a44520f69c9e5a7bcda678bb32e8a18947b10b545d0d9d31f991c2dc6484c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a92a44520f69c9e5a7bcda678bb32e8a18947b10b545d0d9d31f991c2dc6484c/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a92a44520f69c9e5a7bcda678bb32e8a18947b10b545d0d9d31f991c2dc6484c/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff) Dec 5 03:22:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
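
Note that ovn_controller's healthcheck test is '/openstack/healthcheck 6642', so each probe also takes the OVN southbound DB port as an argument. The same probe can be run ad hoc while the container is up, as a quick check that any healthcheck failures later in this log are about the service rather than about podman:

# Run the container's own probe with the logged argument;
# a nonzero exit is what systemd later reports as 'unhealthy'.
podman exec ovn_controller /openstack/healthcheck 6642; echo "rc=$?"
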
Dec 5 03:22:08 localhost podman[68749]: 2025-12-05 08:22:08.046279349 +0000 UTC m=+0.214799785 container start f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller) Dec 5 03:22:08 localhost python3[67865]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume 
/var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 5 03:22:08 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 5 03:22:08 localhost systemd[1]: Created slice User Slice of UID 0. Dec 5 03:22:08 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Dec 5 03:22:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:22:08 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 5 03:22:08 localhost systemd[1]: Starting User Manager for UID 0... Dec 5 03:22:08 localhost podman[68748]: 2025-12-05 08:22:08.106024409 +0000 UTC m=+0.275792374 container init 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12) Dec 5 03:22:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:22:08 localhost podman[68748]: 2025-12-05 08:22:08.138449094 +0000 UTC m=+0.308217059 container start 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:22:08 localhost python3[67865]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent 
--cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=a79d47e79d9c2e42edb251b1a5fb6c64 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 5 03:22:08 localhost podman[68817]: 2025-12-05 08:22:08.214384196 +0000 UTC m=+0.074453458 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, tcib_managed=true, build-date=2025-11-19T00:14:25Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 5 03:22:08 localhost podman[68817]: 2025-12-05 08:22:08.298806751 +0000 UTC m=+0.158876053 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git) Dec 5 03:22:08 localhost podman[68817]: unhealthy Dec 5 03:22:08 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:22:08 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 03:22:08 localhost systemd[68811]: Queued start job for default target Main User Target. Dec 5 03:22:08 localhost systemd[68811]: Created slice User Application Slice. Dec 5 03:22:08 localhost systemd[68811]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 5 03:22:08 localhost systemd[68811]: Started Daily Cleanup of User's Temporary Directories. Dec 5 03:22:08 localhost systemd[68811]: Reached target Paths. Dec 5 03:22:08 localhost systemd[68811]: Reached target Timers. Dec 5 03:22:08 localhost systemd[68811]: Starting D-Bus User Message Bus Socket... Dec 5 03:22:08 localhost systemd[68811]: Starting Create User's Volatile Files and Directories... 
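
The "unhealthy" result and the transient unit's status=1/FAILURE just above came from a probe that fired within milliseconds of container start, before ovn_metadata_agent could come up; on a restart=always container this usually clears on a later timer run. A sketch for triaging such failures on this host (the unit name is the container ID from the log):

# Any healthcheck units currently failed?
systemctl list-units --state=failed --no-legend '*.service'

# Journal for the failed probe unit of ovn_metadata_agent:
journalctl -u 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service -n 20

# podman's own view; STATUS shows '(healthy)' once a probe succeeds.
podman ps --format '{{.Names}} {{.Status}}'
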
Dec 5 03:22:08 localhost podman[68789]: 2025-12-05 08:22:08.267629955 +0000 UTC m=+0.208297194 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_controller, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:22:08 localhost systemd[68811]: Listening on D-Bus User Message Bus Socket. Dec 5 03:22:08 localhost systemd[68811]: Reached target Sockets. Dec 5 03:22:08 localhost systemd[68811]: Finished Create User's Volatile Files and Directories. Dec 5 03:22:08 localhost systemd[68811]: Reached target Basic System. Dec 5 03:22:08 localhost systemd[68811]: Reached target Main User Target. Dec 5 03:22:08 localhost systemd[68811]: Startup finished in 196ms. Dec 5 03:22:08 localhost systemd[1]: Started User Manager for UID 0. Dec 5 03:22:08 localhost systemd[1]: Started Session c9 of User root. 
Dec 5 03:22:08 localhost podman[68789]: 2025-12-05 08:22:08.40142469 +0000 UTC m=+0.342091939 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044) Dec 5 03:22:08 localhost podman[68789]: unhealthy Dec 5 03:22:08 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:22:08 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 03:22:08 localhost systemd[1]: session-c9.scope: Deactivated successfully. Dec 5 03:22:08 localhost kernel: device br-int entered promiscuous mode Dec 5 03:22:08 localhost NetworkManager[5960]: [1764922928.5015] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11) Dec 5 03:22:08 localhost systemd-udevd[68905]: Network interface NamePolicy= disabled on kernel command line. 
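The two "unhealthy" results above (ovn_metadata_agent, then ovn_controller) come from the transient systemd units wrapping `podman healthcheck run <id>`: the check command exits non-zero, podman prints "unhealthy", and systemd records status=1/FAILURE for the wrapper unit. A minimal Python sketch of reading the same health state podman stores (the helper and loop are illustrative, not part of the deployment):

    import json
    import subprocess

    def health_status(container: str) -> str:
        # Ask podman for the health state recorded after each check run;
        # this is the value the journal shows as healthy/unhealthy/starting.
        out = subprocess.run(
            ["podman", "inspect", "--format",
             "{{json .State.Health.Status}}", container],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)

    for name in ("ovn_controller", "ovn_metadata_agent"):
        print(name, health_status(name))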
Dec 5 03:22:08 localhost python3[68924]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 03:22:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 5166 writes, 22K keys, 5166 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5166 writes, 594 syncs, 8.70 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 96 writes, 136 keys, 96 commit groups, 1.0 writes per commit group, ingest: 0.04 MB, 0.00 MB/s#012Interval WAL: 96 writes, 48 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 5 03:22:09 localhost python3[68941]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:09 localhost python3[68957]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:22:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:22:09 localhost kernel: device genev_sys_6081 entered promiscuous mode Dec 5 03:22:09 localhost NetworkManager[5960]: [1764922929.5381] device (genev_sys_6081): carrier: link connected Dec 5 03:22:09 localhost NetworkManager[5960]: [1764922929.5388] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12) Dec 5 03:22:09 localhost systemd-udevd[68907]: Network interface NamePolicy= disabled on kernel command line. 
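The ansible-file entries above (state=absent against the tripleo_*.requires drop-in directories) are idempotent removals: an absent path reports "ok", an existing one is deleted. Roughly what the module does for these paths, as a hedged sketch (file_absent is a hypothetical name, not the module's own code):

    import shutil
    from pathlib import Path

    def file_absent(path: str) -> bool:
        # Mirror ansible's file module with state=absent: delete if present,
        # report whether anything changed. The .requires entries here are
        # directories of symlinks, hence rmtree for the directory case.
        p = Path(path)
        if not p.exists():
            return False  # already absent -> 'ok', no change
        if p.is_dir():
            shutil.rmtree(p)
        else:
            p.unlink()
        return True

    file_absent("/etc/systemd/system/tripleo_ceilometer_agent_compute.requires")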
Dec 5 03:22:09 localhost podman[68973]: 2025-12-05 08:22:09.622827313 +0000 UTC m=+0.151305758 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, vendor=Red Hat, Inc.) 
Dec 5 03:22:09 localhost python3[68974]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:09 localhost podman[68975]: 2025-12-05 08:22:09.59820084 +0000 UTC m=+0.125773787 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, version=17.1.12, vcs-type=git, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:22:09 localhost podman[68973]: 2025-12-05 08:22:09.663634387 +0000 UTC m=+0.192112812 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 
17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vendor=Red Hat, Inc., container_name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 5 03:22:09 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:22:09 localhost podman[68975]: 2025-12-05 08:22:09.730798367 +0000 UTC m=+0.258371324 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd) Dec 5 03:22:09 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
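Each config_data blob above carries a kolla-style 'volumes' list of 'src:dst[:options]' strings. When auditing these logs it can help to split them into mount tuples; a small illustrative parser (parse_volume is a hypothetical helper):

    def parse_volume(spec: str):
        # 'src:dst' with optional comma-separated mount options
        # (e.g. ro, z, shared) after a second colon.
        parts = spec.split(":")
        src, dst = parts[0], parts[1]
        opts = parts[2].split(",") if len(parts) > 2 else []
        return src, dst, opts

    print(parse_volume("/run/openvswitch:/run/openvswitch:shared,z"))
    # ('/run/openvswitch', '/run/openvswitch', ['shared', 'z'])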
Dec 5 03:22:09 localhost python3[69028]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:10 localhost python3[69044]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:10 localhost python3[69063]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:22:10 localhost python3[69080]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:22:10 localhost python3[69097]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:22:11 localhost python3[69116]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:22:11 localhost python3[69132]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:22:12 localhost python3[69148]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:22:12 localhost python3[69209]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922932.064976-109400-275588763648105/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:13 localhost python3[69238]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922932.064976-109400-275588763648105/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:13 localhost python3[69267]: ansible-copy Invoked with 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922932.064976-109400-275588763648105/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:14 localhost python3[69296]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922932.064976-109400-275588763648105/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:14 localhost python3[69325]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922932.064976-109400-275588763648105/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:15 localhost python3[69354]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764922932.064976-109400-275588763648105/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:15 localhost python3[69370]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 03:22:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:22:15 localhost systemd[1]: Reloading. 
Dec 5 03:22:15 localhost podman[69372]: 2025-12-05 08:22:15.536362537 +0000 UTC m=+0.072400602 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container) Dec 5 03:22:15 localhost systemd-rc-local-generator[69409]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:22:15 localhost systemd-sysv-generator[69416]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:22:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
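The sequence above is the standard unit-install pattern: ansible-copy drops each tripleo_*.service file into /etc/systemd/system with mode 0644, then ansible-systemd issues daemon_reload=True, which produces the "Reloading." pass (including the sysv-generator and rc.local warnings). Condensed into a Python sketch (install_unit is a hypothetical helper, not tripleo code; ownership and mode handling omitted):

    import shutil
    import subprocess

    def install_unit(src: str, name: str) -> None:
        # Copy the rendered unit file into place, then have systemd
        # re-read its configuration so the new unit is known.
        shutil.copy(src, f"/etc/systemd/system/{name}")
        subprocess.run(["systemctl", "daemon-reload"], check=True)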
Dec 5 03:22:15 localhost podman[69372]: 2025-12-05 08:22:15.706297711 +0000 UTC m=+0.242335816 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, config_id=tripleo_step1, io.buildah.version=1.41.4) Dec 5 03:22:15 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:22:16 localhost python3[69451]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:22:16 localhost systemd[1]: Reloading. Dec 5 03:22:16 localhost systemd-sysv-generator[69486]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:22:16 localhost systemd-rc-local-generator[69481]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:22:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 5 03:22:16 localhost systemd[1]: Starting ceilometer_agent_compute container... Dec 5 03:22:16 localhost tripleo-start-podman-container[69492]: Creating additional drop-in dependency for "ceilometer_agent_compute" (21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d) Dec 5 03:22:16 localhost systemd[1]: Reloading. Dec 5 03:22:17 localhost systemd-rc-local-generator[69547]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:22:17 localhost systemd-sysv-generator[69551]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:22:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:22:17 localhost systemd[1]: Started ceilometer_agent_compute container. Dec 5 03:22:18 localhost python3[69575]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:22:18 localhost systemd[1]: Reloading. Dec 5 03:22:18 localhost systemd-sysv-generator[69608]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:22:18 localhost systemd-rc-local-generator[69605]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:22:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:22:18 localhost systemd[1]: Stopping User Manager for UID 0... Dec 5 03:22:18 localhost systemd[68811]: Activating special unit Exit the Session... Dec 5 03:22:18 localhost systemd[68811]: Stopped target Main User Target. Dec 5 03:22:18 localhost systemd[68811]: Stopped target Basic System. Dec 5 03:22:18 localhost systemd[68811]: Stopped target Paths. Dec 5 03:22:18 localhost systemd[68811]: Stopped target Sockets. Dec 5 03:22:18 localhost systemd[68811]: Stopped target Timers. Dec 5 03:22:18 localhost systemd[68811]: Stopped Daily Cleanup of User's Temporary Directories. Dec 5 03:22:18 localhost systemd[68811]: Closed D-Bus User Message Bus Socket. Dec 5 03:22:18 localhost systemd[68811]: Stopped Create User's Volatile Files and Directories. Dec 5 03:22:18 localhost systemd[68811]: Removed slice User Application Slice. Dec 5 03:22:18 localhost systemd[68811]: Reached target Shutdown. Dec 5 03:22:18 localhost systemd[68811]: Finished Exit the Session. Dec 5 03:22:18 localhost systemd[68811]: Reached target Exit the Session. Dec 5 03:22:18 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 5 03:22:18 localhost systemd[1]: Stopped User Manager for UID 0. Dec 5 03:22:18 localhost systemd[1]: Starting ceilometer_agent_ipmi container... Dec 5 03:22:18 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 5 03:22:18 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 5 03:22:18 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. 
Dec 5 03:22:18 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 5 03:22:18 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 5 03:22:18 localhost systemd[1]: Started ceilometer_agent_ipmi container. Dec 5 03:22:19 localhost python3[69643]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:22:19 localhost systemd[1]: Reloading. Dec 5 03:22:19 localhost systemd-sysv-generator[69672]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:22:19 localhost systemd-rc-local-generator[69669]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:22:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:22:19 localhost systemd[1]: Starting logrotate_crond container... Dec 5 03:22:19 localhost systemd[1]: Started logrotate_crond container. Dec 5 03:22:20 localhost python3[69710]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:22:20 localhost systemd[1]: Reloading. Dec 5 03:22:20 localhost systemd-sysv-generator[69742]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:22:20 localhost systemd-rc-local-generator[69732]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:22:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:22:20 localhost systemd[1]: Starting nova_migration_target container... Dec 5 03:22:20 localhost systemd[1]: Started nova_migration_target container. Dec 5 03:22:21 localhost python3[69778]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:22:21 localhost systemd[1]: Reloading. Dec 5 03:22:21 localhost systemd-sysv-generator[69811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:22:21 localhost systemd-rc-local-generator[69807]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:22:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:22:21 localhost systemd[1]: Starting ovn_controller container... Dec 5 03:22:21 localhost tripleo-start-podman-container[69818]: Creating additional drop-in dependency for "ovn_controller" (f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd) Dec 5 03:22:21 localhost systemd[1]: Reloading. 
Dec 5 03:22:21 localhost systemd-sysv-generator[69881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:22:21 localhost systemd-rc-local-generator[69878]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:22:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:22:22 localhost systemd[1]: Started ovn_controller container. Dec 5 03:22:22 localhost python3[69902]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:22:22 localhost systemd[1]: Reloading. Dec 5 03:22:23 localhost systemd-rc-local-generator[69933]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:22:23 localhost systemd-sysv-generator[69936]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:22:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:22:23 localhost systemd[1]: Starting ovn_metadata_agent container... Dec 5 03:22:23 localhost systemd[1]: Started ovn_metadata_agent container. Dec 5 03:22:23 localhost python3[69985]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:25 localhost python3[70106]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005546419 step=4 update_config_hash_only=False Dec 5 03:22:25 localhost python3[70122]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:22:26 localhost python3[70138]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 5 03:22:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:22:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
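The step-4 restarts above all follow one ansible-systemd shape: state=restarted plus enabled=True per tripleo_* service, each preceded by its own reload. The equivalent systemctl calls, sketched (the loop is illustrative; the real tasks run one service per invocation):

    import subprocess

    def restart_enabled(unit: str) -> None:
        # enabled=True -> systemctl enable; state=restarted -> systemctl restart
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)

    for unit in ("tripleo_ceilometer_agent_compute.service",
                 "tripleo_ovn_controller.service",
                 "tripleo_ovn_metadata_agent.service"):
        restart_enabled(unit)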
Dec 5 03:22:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:22:34 localhost systemd[1]: tmp-crun.1XfeK4.mount: Deactivated successfully. Dec 5 03:22:34 localhost podman[70140]: 2025-12-05 08:22:34.189735444 +0000 UTC m=+0.082328621 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, container_name=logrotate_crond) Dec 5 03:22:34 localhost podman[70141]: 2025-12-05 08:22:34.191333004 +0000 UTC m=+0.078030828 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team) Dec 5 03:22:34 localhost podman[70141]: 2025-12-05 08:22:34.245672157 +0000 UTC m=+0.132370011 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:22:34 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:22:34 localhost podman[70170]: 2025-12-05 08:22:34.256274766 +0000 UTC m=+0.065583873 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, managed_by=tripleo_ansible) Dec 5 03:22:34 localhost podman[70140]: 2025-12-05 08:22:34.273746437 +0000 UTC m=+0.166339584 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 5 03:22:34 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 03:22:34 localhost podman[70170]: 2025-12-05 08:22:34.310717082 +0000 UTC m=+0.120026179 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:22:34 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:22:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
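Every container label dump above includes a 'healthcheck': {'test': ...} entry in config_data; that test command (/openstack/healthcheck, or /openstack/healthcheck 6642 for ovn_controller) is what each `podman healthcheck run` invocation executes. A simplified sketch of how such a dict could map onto podman's flag (an assumption about the general shape, not the actual tripleo_ansible implementation):

    def health_flags(config: dict) -> list[str]:
        # Translate the healthcheck test command into the podman
        # option that defines it at container-create time.
        hc = config.get("healthcheck")
        return ["--health-cmd", hc["test"]] if hc else []

    print(health_flags({"healthcheck": {"test": "/openstack/healthcheck 6642"}}))
    # ['--health-cmd', '/openstack/healthcheck 6642']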
Dec 5 03:22:35 localhost podman[70216]: 2025-12-05 08:22:35.162729413 +0000 UTC m=+0.057779911 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 5 03:22:35 localhost podman[70216]: 2025-12-05 08:22:35.53967595 +0000 UTC m=+0.434726458 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, tcib_managed=true, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4)
Dec 5 03:22:35 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully.
Dec 5 03:22:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.
Dec 5 03:22:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.
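The records above show one complete podman healthcheck cycle: systemd starts a transient /usr/bin/podman healthcheck run unit, podman journals a container health_status record, the check process exits (container exec_died), and the transient unit deactivates. A minimal Python sketch (an annotation, not part of the log) for pulling the container name and health status out of such records; the regex assumes health_status immediately follows name, as it does in every health_status record here, and the sample line is abbreviated:

    import re

    # Matches the podman "container health_status" records journaled above.
    HEALTH_RE = re.compile(
        r"container health_status (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
        r"health_status=(?P<status>[^,)]+)"
    )

    def parse_health_events(lines):
        """Yield (container_name, status) for each matching journal line."""
        for line in lines:
            m = HEALTH_RE.search(line)
            if m:
                yield m.group("name"), m.group("status")

    sample = ("Dec 5 03:22:35 localhost podman[70216]: container health_status "
              "94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 "
              "(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, "
              "name=nova_migration_target, health_status=healthy, vcs-type=git)")
    print(list(parse_health_events([sample])))  # [('nova_migration_target', 'healthy')]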
Dec 5 03:22:39 localhost podman[70242]: 2025-12-05 08:22:39.198489674 +0000 UTC m=+0.081486445 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 5 03:22:39 localhost podman[70243]: 2025-12-05 08:22:39.16735529 +0000 UTC m=+0.051756845 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, release=1761123044, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1)
Dec 5 03:22:39 localhost podman[70242]: 2025-12-05 08:22:39.233352954 +0000 UTC m=+0.116349715 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 5 03:22:39 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully.
Dec 5 03:22:39 localhost podman[70243]: 2025-12-05 08:22:39.253580461 +0000 UTC m=+0.137981946 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, distribution-scope=public, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4)
Dec 5 03:22:39 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully.
Dec 5 03:22:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.
Dec 5 03:22:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.
Dec 5 03:22:40 localhost systemd[1]: tmp-crun.mQv21K.mount: Deactivated successfully.
Dec 5 03:22:40 localhost podman[70289]: 2025-12-05 08:22:40.187625383 +0000 UTC m=+0.075624143 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible)
Dec 5 03:22:40 localhost podman[70289]: 2025-12-05 08:22:40.247823708 +0000 UTC m=+0.135822508 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Dec 5 03:22:40 localhost systemd[1]: tmp-crun.4clVb0.mount: Deactivated successfully.
Dec 5 03:22:40 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully.
Dec 5 03:22:40 localhost podman[70288]: 2025-12-05 08:22:40.260112669 +0000 UTC m=+0.150170714 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12)
Dec 5 03:22:40 localhost podman[70288]: 2025-12-05 08:22:40.296774984 +0000 UTC m=+0.186832979 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., vcs-type=git)
Dec 5 03:22:40 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully.
Dec 5 03:22:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:22:46 localhost systemd[1]: tmp-crun.xsB6nm.mount: Deactivated successfully.
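Each health_status/exec_died pair journaled by the same podman PID brackets one run of the container's healthcheck test (/openstack/healthcheck in these records). The embedded UTC timestamps have sub-microsecond resolution, so subtracting them approximates how long the check ran. A sketch (an annotation, not part of the log), using the iscsid pair from podman[70288] above; Python's strptime only keeps microseconds, hence the trim to six fractional digits:

    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S.%f"

    def seconds_between(start, end):
        # Trim the timestamp to 6 fractional digits before parsing.
        return (datetime.strptime(end[:26], FMT)
                - datetime.strptime(start[:26], FMT)).total_seconds()

    print(seconds_between("2025-12-05 08:22:40.260112669",
                          "2025-12-05 08:22:40.296774984"))  # ~0.037 s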
Dec 5 03:22:46 localhost podman[70327]: 2025-12-05 08:22:46.199557126 +0000 UTC m=+0.092737454 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 5 03:22:46 localhost podman[70327]: 2025-12-05 08:22:46.441667276 +0000 UTC m=+0.334847604 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 5 03:22:46 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:22:49 localhost snmpd[66746]: empty variable list in _query
Dec 5 03:22:49 localhost snmpd[66746]: empty variable list in _query
Dec 5 03:23:04 localhost podman[70540]:
Dec 5 03:23:04 localhost podman[70540]: 2025-12-05 08:23:04.138615648 +0000 UTC m=+0.078580505 container create 4bf758988f1da0e6d56a585d493276bfa8b2ab6dd6a5f7c6edf17c87603c6fdb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_cori, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-type=git, name=rhceph, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 03:23:04 localhost systemd[1]: Started libpod-conmon-4bf758988f1da0e6d56a585d493276bfa8b2ab6dd6a5f7c6edf17c87603c6fdb.scope.
Dec 5 03:23:04 localhost podman[70540]: 2025-12-05 08:23:04.108489734 +0000 UTC m=+0.048454621 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 03:23:04 localhost systemd[1]: Started libcrun container.
Dec 5 03:23:04 localhost podman[70540]: 2025-12-05 08:23:04.230770452 +0000 UTC m=+0.170735309 container init 4bf758988f1da0e6d56a585d493276bfa8b2ab6dd6a5f7c6edf17c87603c6fdb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_cori, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, ceph=True, RELEASE=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux )
Dec 5 03:23:04 localhost podman[70540]: 2025-12-05 08:23:04.239123171 +0000 UTC m=+0.179088028 container start 4bf758988f1da0e6d56a585d493276bfa8b2ab6dd6a5f7c6edf17c87603c6fdb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_cori, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, vcs-type=git, version=7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, architecture=x86_64, ceph=True, release=1763362218, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 03:23:04 localhost podman[70540]: 2025-12-05 08:23:04.239450261 +0000 UTC m=+0.179415148 container attach 4bf758988f1da0e6d56a585d493276bfa8b2ab6dd6a5f7c6edf17c87603c6fdb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_cori, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64)
Dec 5 03:23:04 localhost laughing_cori[70555]: 167 167
Dec 5 03:23:04 localhost systemd[1]: libpod-4bf758988f1da0e6d56a585d493276bfa8b2ab6dd6a5f7c6edf17c87603c6fdb.scope: Deactivated successfully.
Dec 5 03:23:04 localhost podman[70540]: 2025-12-05 08:23:04.245432556 +0000 UTC m=+0.185397403 container died 4bf758988f1da0e6d56a585d493276bfa8b2ab6dd6a5f7c6edf17c87603c6fdb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_cori, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, maintainer=Guillaume Abrioux , name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=1763362218, distribution-scope=public, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 5 03:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 03:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
Dec 5 03:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.
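The laughing_cori records above are one short-lived rhceph utility container run: image pull, create, init, start, attach, a two-number stdout line ("167 167"), died, and, just below, remove. Note the image pull record carries an earlier timestamp (08:23:04.108) than the create record (08:23:04.138) yet lands later in the journal, so ordering checks should either key on the embedded timestamps or skip non-lifecycle records, as in this hypothetical sketch (an annotation, not part of the log):

    # Verify that the podman lifecycle records for one container id
    # appear in the expected order, ignoring other record types such as
    # "image pull", which was journaled out of timestamp order above.
    EXPECTED = ["container create", "container init", "container start",
                "container attach", "container died", "container remove"]

    def in_lifecycle_order(events):
        ranks = [EXPECTED.index(e) for e in events if e in EXPECTED]
        return ranks == sorted(ranks)

    # Journal order observed for 4bf758988f1d... ("laughing_cori"):
    seen = ["container create", "image pull", "container init",
            "container start", "container attach", "container died",
            "container remove"]
    print(in_lifecycle_order(seen))  # True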
Dec 5 03:23:04 localhost podman[70561]: 2025-12-05 08:23:04.367449985 +0000 UTC m=+0.107886302 container remove 4bf758988f1da0e6d56a585d493276bfa8b2ab6dd6a5f7c6edf17c87603c6fdb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_cori, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, distribution-scope=public, version=7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4)
Dec 5 03:23:04 localhost systemd[1]: libpod-conmon-4bf758988f1da0e6d56a585d493276bfa8b2ab6dd6a5f7c6edf17c87603c6fdb.scope: Deactivated successfully.
Dec 5 03:23:04 localhost podman[70574]: 2025-12-05 08:23:04.44573346 +0000 UTC m=+0.096437278 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron)
Dec 5 03:23:04 localhost podman[70574]: 2025-12-05 08:23:04.460859649 +0000 UTC m=+0.111563507 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron)
Dec 5 03:23:04 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully.
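The config_data label that tripleo_ansible attaches to these containers is a Python literal (single-quoted strings, True/False), not JSON, so json.loads rejects it while ast.literal_eval parses it directly. A sketch (an annotation, not part of the log), abbreviated from the logrotate_crond record above:

    import ast

    # config_data uses Python literal syntax, so literal_eval is the right tool.
    config_data = ast.literal_eval(
        "{'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, "
        "'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', "
        "'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root'}"
    )
    print(config_data["healthcheck"]["test"])
    # /usr/share/openstack-tripleo-common/healthcheck/cron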
Dec 5 03:23:04 localhost podman[70576]: 2025-12-05 08:23:04.54676814 +0000 UTC m=+0.188035286 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 5 03:23:04 localhost podman[70618]:
Dec 5 03:23:04 localhost podman[70618]: 2025-12-05 08:23:04.605422207 +0000 UTC m=+0.097915425 container create 06bb59670d957a0e319c64144b1f268c5cb8be37457308f0ffa2aebc282e0978 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_mccarthy, io.openshift.expose-services=, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, name=rhceph, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 5 03:23:04 localhost podman[70576]: 2025-12-05 08:23:04.621766653 +0000 UTC m=+0.263033749 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, build-date=2025-11-19T00:11:48Z, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.openshift.expose-services=)
Dec 5 03:23:04 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully.
Dec 5 03:23:04 localhost systemd[1]: Started libpod-conmon-06bb59670d957a0e319c64144b1f268c5cb8be37457308f0ffa2aebc282e0978.scope.
Dec 5 03:23:04 localhost podman[70575]: 2025-12-05 08:23:04.608878124 +0000 UTC m=+0.259091737 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12)
Dec 5 03:23:04 localhost podman[70618]: 2025-12-05 08:23:04.567047339 +0000 UTC m=+0.059540597 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 03:23:04 localhost systemd[1]: Started libcrun container.
Dec 5 03:23:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f996342701cf82cff507ac3394b549e6606346e4747aaf0ce8d009489ff36652/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 03:23:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f996342701cf82cff507ac3394b549e6606346e4747aaf0ce8d009489ff36652/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 03:23:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f996342701cf82cff507ac3394b549e6606346e4747aaf0ce8d009489ff36652/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 03:23:04 localhost podman[70618]: 2025-12-05 08:23:04.684202908 +0000 UTC m=+0.176696106 container init 06bb59670d957a0e319c64144b1f268c5cb8be37457308f0ffa2aebc282e0978 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_mccarthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, io.openshift.tags=rhceph ceph, release=1763362218, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=7) Dec 5 03:23:04 localhost podman[70575]: 2025-12-05 08:23:04.692929208 +0000 UTC m=+0.343142821 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64) Dec 5 03:23:04 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:23:04 localhost podman[70618]: 2025-12-05 08:23:04.745968261 +0000 UTC m=+0.238461459 container start 06bb59670d957a0e319c64144b1f268c5cb8be37457308f0ffa2aebc282e0978 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_mccarthy, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public) Dec 5 03:23:04 localhost podman[70618]: 2025-12-05 08:23:04.746566539 +0000 UTC m=+0.239059787 container attach 06bb59670d957a0e319c64144b1f268c5cb8be37457308f0ffa2aebc282e0978 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_mccarthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, ceph=True, version=7, vcs-type=git, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux )
Dec 5 03:23:05 localhost systemd[1]: var-lib-containers-storage-overlay-f70bf33423b73bd6e9c4c05e5a2a0526ec71b217a7cfca7511f5f7038685ed99-merged.mount: Deactivated successfully.
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: [
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: {
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "available": false,
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "ceph_device": false,
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "device_id": "QEMU_DVD-ROM_QM00001",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "lsm_data": {},
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "lvs": [],
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "path": "/dev/sr0",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "rejected_reasons": [
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "Insufficient space (<5GB)",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "Has a FileSystem"
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: ],
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "sys_api": {
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "actuators": null,
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "device_nodes": "sr0",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "human_readable_size": "482.00 KB",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "id_bus": "ata",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "model": "QEMU DVD-ROM",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "nr_requests": "2",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "partitions": {},
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "path": "/dev/sr0",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "removable": "1",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "rev": "2.5+",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "ro": "0",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "rotational": "1",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "sas_address": "",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "sas_device_handle": "",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "scheduler_mode": "mq-deadline",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "sectors": 0,
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "sectorsize": "2048",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "size": 493568.0,
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "support_discard": "0",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "type": "disk",
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: "vendor": "QEMU"
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: }
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: }
Dec 5 03:23:05 localhost wonderful_mccarthy[70669]: ]
Dec 5 03:23:05 localhost systemd[1]: libpod-06bb59670d957a0e319c64144b1f268c5cb8be37457308f0ffa2aebc282e0978.scope: Deactivated successfully.
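Editor's note: the short-lived rhceph container (wonderful_mccarthy) printed a ceph-volume inventory style JSON document, one fragment per journal line; here the only device is /dev/sr0, rejected as an OSD candidate. A minimal sketch for consuming such output once reassembled into a file; the filename is an assumption.

    import json

    # inventory.json is assumed to hold the reassembled JSON array from the
    # wonderful_mccarthy lines above.
    with open("inventory.json") as f:
        devices = json.load(f)

    for dev in devices:
        if dev["available"]:
            print(f"{dev['path']}: usable ({dev['sys_api']['human_readable_size']})")
        else:
            reasons = ", ".join(dev["rejected_reasons"])
            print(f"{dev['path']}: rejected ({reasons})")
    # For the output above this prints:
    # /dev/sr0: rejected (Insufficient space (<5GB), Has a FileSystem)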
Dec 5 03:23:05 localhost podman[70618]: 2025-12-05 08:23:05.609928562 +0000 UTC m=+1.102421790 container died 06bb59670d957a0e319c64144b1f268c5cb8be37457308f0ffa2aebc282e0978 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_mccarthy, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, ceph=True, build-date=2025-11-26T19:44:28Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 5 03:23:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:23:05 localhost systemd[1]: tmp-crun.9GGYtP.mount: Deactivated successfully. Dec 5 03:23:05 localhost podman[72431]: 2025-12-05 08:23:05.704694278 +0000 UTC m=+0.076318985 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, container_name=nova_migration_target, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 5 03:23:05 localhost systemd[1]: var-lib-containers-storage-overlay-f996342701cf82cff507ac3394b549e6606346e4747aaf0ce8d009489ff36652-merged.mount: Deactivated successfully. Dec 5 03:23:05 localhost podman[72430]: 2025-12-05 08:23:05.729706573 +0000 UTC m=+0.105902172 container remove 06bb59670d957a0e319c64144b1f268c5cb8be37457308f0ffa2aebc282e0978 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_mccarthy, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, release=1763362218, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 03:23:05 localhost systemd[1]: libpod-conmon-06bb59670d957a0e319c64144b1f268c5cb8be37457308f0ffa2aebc282e0978.scope: Deactivated successfully. 
Dec 5 03:23:06 localhost podman[72431]: 2025-12-05 08:23:06.096122993 +0000 UTC m=+0.467747750 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:23:06 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:23:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:23:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:23:10 localhost systemd[1]: tmp-crun.p3KnvS.mount: Deactivated successfully. 
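Editor's note: the same three-step cycle repeats for every container in this section: systemd starts a transient <container-id>.service for the healthcheck, podman logs a "container health_status" record with health_status=healthy, an exec_died record follows, and the unit ends with "Deactivated successfully." A minimal sketch, assuming one podman record per line, that pulls (timestamp, container name, status) out of such lines; the regexes only rely on the fields visible above.

    import re

    HEADER = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC .*?"
        r"container health_status (?P<cid>[0-9a-f]{64})")

    def label(record, key):
        # Labels are "key=value" pairs inside "(...)"; anchor on the preceding
        # "(" or "," so e.g. io.k8s.display-name does not shadow plain "name".
        m = re.search(r"[(,] ?" + re.escape(key) + r"=([^,)]*)", record)
        return m.group(1) if m else None

    def health_events(lines):
        for line in lines:
            m = HEADER.search(line)
            if m:
                yield m.group("ts"), label(line, "name"), label(line, "health_status")

    # Against a shortened copy of one record from this section:
    sample = ("2025-12-05 08:23:10.227015949 +0000 UTC m=+0.111133824 container "
              "health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd "
              "(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, "
              "name=ovn_controller, health_status=healthy)")
    print(next(health_events([sample])))
    # ('2025-12-05 08:23:10.227015949', 'ovn_controller', 'healthy')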
Dec 5 03:23:10 localhost podman[72481]: 2025-12-05 08:23:10.227015949 +0000 UTC m=+0.111133824 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, container_name=ovn_controller, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:23:10 localhost podman[72480]: 2025-12-05 08:23:10.190735755 +0000 UTC m=+0.079727910 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1) Dec 5 03:23:10 localhost podman[72481]: 2025-12-05 08:23:10.247578095 +0000 UTC m=+0.131695900 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=ovn_controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, 
name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 03:23:10 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:23:10 localhost podman[72480]: 2025-12-05 08:23:10.274672895 +0000 UTC m=+0.163665100 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 5 03:23:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:23:10 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:23:10 localhost podman[72528]: 2025-12-05 08:23:10.345010764 +0000 UTC m=+0.062281080 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:23:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. 
Dec 5 03:23:10 localhost podman[72528]: 2025-12-05 08:23:10.378178171 +0000 UTC m=+0.095448507 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Dec 5 03:23:10 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
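Editor's note: most healthchecks above run '/openstack/healthcheck' with no arguments, but ovn_controller's is '/openstack/healthcheck 6642', i.e. it is parametrized with the OVN southbound DB port. A rough Python stand-in for such a TCP probe; the host, port, and timeout are assumptions, and the real script inspects the container's own connections rather than dialing out.

    import socket

    def probe(host="127.0.0.1", port=6642, timeout=3.0):
        # Succeeds if anything accepts a TCP connection on the given port.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print("healthy" if probe() else "unhealthy")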
Dec 5 03:23:10 localhost podman[72546]: 2025-12-05 08:23:10.449412018 +0000 UTC m=+0.071515296 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1) Dec 5 03:23:10 localhost podman[72546]: 2025-12-05 08:23:10.4636888 +0000 UTC m=+0.085792118 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container) Dec 5 03:23:10 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:23:11 localhost systemd[1]: tmp-crun.8XmIVl.mount: Deactivated successfully. Dec 5 03:23:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
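Editor's note: the records carry two ordering hints: config_id (tripleo_step1 for metrics_qdr, tripleo_step3 for collectd and iscsid, tripleo_step4 for the rest) and, inside config_data, an optional start_order (iscsid is 2, ovn_controller is 1). A sketch of sorting by those two keys; the default start_order of 0 and this reading of the fields are inferences from the records above, not documented semantics.

    containers = [
        {"name": "metrics_qdr",    "config_id": "tripleo_step1", "start_order": 1},
        {"name": "collectd",       "config_id": "tripleo_step3"},  # no start_order in its record
        {"name": "iscsid",         "config_id": "tripleo_step3", "start_order": 2},
        {"name": "ovn_controller", "config_id": "tripleo_step4", "start_order": 1},
    ]

    def step(c):
        # "tripleo_step3" -> 3; anything unparseable sorts last
        return int(c.get("config_id", "step99").rsplit("step", 1)[-1])

    for c in sorted(containers, key=lambda c: (step(c), c.get("start_order", 0))):
        print(c["config_id"], c.get("start_order", 0), c["name"])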
Dec 5 03:23:17 localhost podman[72565]: 2025-12-05 08:23:17.199092712 +0000 UTC m=+0.087317285 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, architecture=x86_64, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:23:17 localhost podman[72565]: 2025-12-05 08:23:17.4269367 +0000 UTC m=+0.315161293 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vcs-type=git) Dec 5 03:23:17 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:23:31 localhost sshd[72596]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:23:35 localhost systemd[1]: tmp-crun.7R5Q4S.mount: Deactivated successfully. 
Dec 5 03:23:35 localhost podman[72598]: 2025-12-05 08:23:35.179279416 +0000 UTC m=+0.071364331 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, release=1761123044, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc.) 
Dec 5 03:23:35 localhost podman[72600]: 2025-12-05 08:23:35.23009554 +0000 UTC m=+0.119103320 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, build-date=2025-11-19T00:12:45Z, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:23:35 localhost podman[72598]: 2025-12-05 08:23:35.238568612 +0000 UTC m=+0.130653517 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, 
config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64) Dec 5 03:23:35 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:23:35 localhost podman[72599]: 2025-12-05 08:23:35.200328448 +0000 UTC m=+0.088313127 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 5 03:23:35 localhost podman[72600]: 2025-12-05 08:23:35.280380757 +0000 UTC m=+0.169388537 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, 
name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:23:35 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:23:35 localhost podman[72599]: 2025-12-05 08:23:35.331568744 +0000 UTC m=+0.219553413 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible) Dec 5 03:23:35 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:23:36 localhost systemd[1]: tmp-crun.0gl2Ss.mount: Deactivated successfully. Dec 5 03:23:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:23:36 localhost podman[72666]: 2025-12-05 08:23:36.28543096 +0000 UTC m=+0.085878392 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:23:36 localhost podman[72666]: 2025-12-05 08:23:36.65897444 +0000 UTC m=+0.459421862 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, container_name=nova_migration_target, release=1761123044) Dec 5 03:23:36 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:23:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:23:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:23:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:23:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
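Note: the entries above follow one fixed cycle per container: systemd starts a transient unit wrapping /usr/bin/podman healthcheck run <container-id>, podman emits a health_status event (here always health_status=healthy) and then an exec_died event for the check process, and systemd reports the unit deactivated. A minimal Python sketch for pulling per-container status out of a journal in exactly this shape; the regex mirrors the "(image=..., name=..., health_status=...)" label order seen in every health_status event above, and anything beyond that ordering is an assumption:

import re
import sys

# One podman 'health_status' event per journal line, in the shape seen above:
# "... container health_status <64-hex-id> (image=..., name=<name>, health_status=<status>, ...)"
EVENT_RE = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) "
    r"\(image=[^,]+, name=(?P<name>[^,)]+), health_status=(?P<status>[^,)]+)"
)

def health_events(lines):
    # Yield (short id, container name, reported status) per event.
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            yield m["cid"][:12], m["name"], m["status"]

if __name__ == "__main__":
    for cid, name, status in health_events(sys.stdin):
        print(f"{cid}  {name:24}  {status}")

Fed this section of the log, it prints one "healthy" row per health_status event above.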
Dec 5 03:23:41 localhost podman[72692]: 2025-12-05 08:23:41.216101809 +0000 UTC m=+0.092413554 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, tcib_managed=true) Dec 5 03:23:41 localhost podman[72690]: 2025-12-05 08:23:41.189897878 +0000 UTC m=+0.078064310 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 5 03:23:41 localhost podman[72698]: 2025-12-05 08:23:41.258583925 +0000 UTC m=+0.136878172 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, container_name=ovn_controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 
17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Dec 5 03:23:41 localhost podman[72690]: 2025-12-05 08:23:41.268651597 +0000 UTC m=+0.156818039 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 5 03:23:41 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:23:41 localhost podman[72691]: 2025-12-05 08:23:41.311738271 +0000 UTC m=+0.193809475 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, distribution-scope=public, release=1761123044) Dec 5 03:23:41 localhost podman[72691]: 2025-12-05 08:23:41.32460648 +0000 UTC m=+0.206677634 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-iscsid-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z) Dec 5 03:23:41 localhost podman[72692]: 2025-12-05 08:23:41.332228367 +0000 UTC m=+0.208540112 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, release=1761123044, config_id=tripleo_step3, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-type=git) Dec 5 03:23:41 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:23:41 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:23:41 localhost podman[72698]: 2025-12-05 08:23:41.380787591 +0000 UTC m=+0.259081818 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044) Dec 5 03:23:41 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:23:42 localhost systemd[1]: tmp-crun.wjny1N.mount: Deactivated successfully. Dec 5 03:23:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:23:48 localhost podman[72776]: 2025-12-05 08:23:48.200851375 +0000 UTC m=+0.085305274 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, container_name=metrics_qdr, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com) Dec 5 03:23:48 localhost podman[72776]: 2025-12-05 08:23:48.427030561 +0000 UTC m=+0.311484390 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, version=17.1.12, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, distribution-scope=public, tcib_managed=true, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 5 03:23:48 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
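Note: each podman timestamp also carries a monotonic offset ("m=+..."), and both events of one check are emitted by the same podman PID, so the offset gap approximates how long the check command ran. For metrics_qdr above, pid 72776 logged health_status at m=+0.085 and exec_died at m=+0.311, so the /openstack/healthcheck run took roughly 0.23 s. A sketch of that pairing, assuming entries shaped like the ones in this journal:

import re
import sys
from collections import defaultdict

# "podman[<pid>]: <date> <time> <tz> UTC m=+<offset> container <event> ..."
OFFSET_RE = re.compile(
    r"podman\[(?P<pid>\d+)\]: \S+ \S+ \S+ \S+ m=\+(?P<off>[0-9.]+) "
    r"container (?P<event>health_status|exec_died)"
)

def durations(lines):
    events = defaultdict(dict)  # pid -> {event: monotonic offset in seconds}
    for line in lines:
        for m in OFFSET_RE.finditer(line):
            events[m["pid"]][m["event"]] = float(m["off"])
    for pid, ev in sorted(events.items(), key=lambda kv: int(kv[0])):
        if "health_status" in ev and "exec_died" in ev:
            print(f"podman[{pid}]: check ran ~{ev['exec_died'] - ev['health_status']:.3f}s")

if __name__ == "__main__":
    durations(sys.stdin)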
Dec 5 03:24:06 localhost podman[72806]: 2025-12-05 08:24:06.22913703 +0000 UTC m=+0.088321817 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team) Dec 5 03:24:06 localhost podman[72806]: 2025-12-05 08:24:06.280903453 +0000 UTC m=+0.140088290 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044) Dec 5 03:24:06 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:24:06 localhost systemd[1]: tmp-crun.wE6dIC.mount: Deactivated successfully. 
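Note: the config_data=... payload embedded in every event is a Python literal (single quotes, True/False), not JSON, so ast.literal_eval can parse it once the balanced {...} span is cut out. A sketch, assuming (as holds for every entry in this log) that the quoted strings contain no brace characters:

import ast

def extract_config_data(entry):
    # The config_data payload uses Python-literal syntax, so json.loads
    # would fail; ast.literal_eval handles the nested dicts, lists,
    # strings, ints, and True/False safely.
    start = entry.index("config_data=") + len("config_data=")
    depth = 0
    for i in range(start, len(entry)):
        if entry[i] == "{":
            depth += 1
        elif entry[i] == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(entry[start:i + 1])
    raise ValueError("unbalanced config_data payload")

For the ceilometer_agent_compute entry above, this yields cfg['healthcheck']['test'] == '/openstack/healthcheck' and cfg['depends_on'] == ['tripleo_nova_libvirt.target'].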
Dec 5 03:24:06 localhost podman[72807]: 2025-12-05 08:24:06.283126032 +0000 UTC m=+0.139046737 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, version=17.1.12, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true) Dec 5 03:24:06 localhost podman[72807]: 2025-12-05 08:24:06.370755966 +0000 UTC m=+0.226676611 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, architecture=x86_64, version=17.1.12, container_name=logrotate_crond, release=1761123044, vcs-type=git, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:24:06 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:24:06 localhost podman[72808]: 2025-12-05 08:24:06.338094045 +0000 UTC m=+0.191070489 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com) Dec 5 03:24:06 localhost podman[72808]: 2025-12-05 08:24:06.4173395 +0000 UTC m=+0.270315904 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:24:06 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:24:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
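Note: the cycles repeat on a fixed cadence: logrotate_crond and the ceilometer agents are checked at 03:23:35 and again at 03:24:06, and the collectd/iscsid/ovn group at 03:23:41 and 03:24:12, i.e. about 31 s apart, consistent with a 30-second healthcheck interval plus scheduling jitter (the 30 s figure is an inference; the log itself only shows the spacing). A sketch that measures the spacing per container, assuming one entry per line and syslog stamps without a year:

import re
from collections import defaultdict
from datetime import datetime

STAMP_RE = re.compile(
    r"(?P<ts>[A-Z][a-z]{2}\s+\d+ \d{2}:\d{2}:\d{2}) \S+ podman\[\d+\]: "
    r".*?container health_status (?P<cid>[0-9a-f]{64})"
)

def check_intervals(lines, year=2025):  # syslog stamps carry no year
    last, gaps = {}, defaultdict(list)
    for line in lines:
        m = STAMP_RE.search(line)
        if not m:
            continue
        ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S")
        cid = m["cid"]
        if cid in last:
            gaps[cid].append((ts - last[cid]).total_seconds())
        last[cid] = ts
    return dict(gaps)  # container id -> list of seconds between checks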
Dec 5 03:24:07 localhost podman[72940]: 2025-12-05 08:24:07.173217514 +0000 UTC m=+0.063017744 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z) Dec 5 03:24:07 localhost podman[72940]: 2025-12-05 08:24:07.523438542 +0000 UTC m=+0.413238822 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, vcs-type=git, container_name=nova_migration_target, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:24:07 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:24:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:24:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:24:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:24:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:24:12 localhost podman[72980]: 2025-12-05 08:24:12.197832174 +0000 UTC m=+0.079198745 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd) Dec 5 03:24:12 localhost podman[72980]: 2025-12-05 08:24:12.235859941 +0000 UTC m=+0.117226432 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, container_name=collectd, name=rhosp17/openstack-collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:24:12 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
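Each run leaves a matched pair of podman events, health_status (carrying the result) and exec_died (when the exec session exits), after which systemd reports the transient unit as deactivated, as in the collectd sequence just above. The result fields can be recovered from a saved copy of this log with the standard library alone; the regex below mirrors the "container health_status <id> (image=..., name=..., health_status=...)" layout of these lines, and messages.log is a hypothetical path:

import re

# Mirrors the event lines above, e.g.
# "... container health_status 94e41b96... (image=..., name=nova_migration_target, health_status=healthy, ...)"
EVENT = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
    r"health_status=(?P<status>[^,)]+)"
)

with open("messages.log") as fh:   # hypothetical saved copy of this log
    for line in fh:
        m = EVENT.search(line)
        if m:
            print(m["name"], m["status"], m["cid"][:12])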
Dec 5 03:24:12 localhost podman[72978]: 2025-12-05 08:24:12.244319564 +0000 UTC m=+0.129700839 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:24:12 localhost podman[72979]: 2025-12-05 08:24:12.301635199 +0000 UTC m=+0.182954498 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Dec 5 03:24:12 localhost podman[72979]: 2025-12-05 08:24:12.308955596 +0000 UTC m=+0.190274875 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, 
architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 5 03:24:12 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:24:12 localhost podman[72981]: 2025-12-05 08:24:12.364468815 +0000 UTC m=+0.239754058 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, tcib_managed=true, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 03:24:12 localhost podman[72978]: 2025-12-05 08:24:12.378797799 +0000 UTC m=+0.264179044 
container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public) Dec 5 03:24:12 localhost podman[72981]: 2025-12-05 08:24:12.387396755 +0000 UTC m=+0.262681938 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_id=tripleo_step4, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:24:12 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:24:12 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:24:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:24:19 localhost podman[73064]: 2025-12-05 08:24:19.191279499 +0000 UTC m=+0.072141235 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1) Dec 5 03:24:19 localhost podman[73064]: 2025-12-05 08:24:19.434076981 +0000 UTC m=+0.314938707 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, tcib_managed=true, config_id=tripleo_step1, version=17.1.12, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:24:19 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:24:37 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:24:37 localhost recover_tripleo_nova_virtqemud[73112]: 61294 Dec 5 03:24:37 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:24:37 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:24:37 localhost systemd[1]: tmp-crun.Yl2WqH.mount: Deactivated successfully. 
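The config_data payload attached to every event is the TripleO/Kolla container definition serialized as a Python literal (single-quoted strings, True/False), not JSON, so ast.literal_eval is the appropriate parser. A sketch that extracts the balanced braces after config_data= and inspects the result, again using the hypothetical messages.log path:

import ast

def extract_config_data(line: str) -> dict:
    """Pull the config_data={...} literal out of one podman event line."""
    start = line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # Serialized as a Python literal (single quotes, True), not JSON.
                return ast.literal_eval(line[start:i + 1])
    raise ValueError("unbalanced config_data literal")

# Take the first event line that carries a config_data payload.
with open("messages.log") as fh:   # hypothetical path, as above
    event_line = next(l for l in fh if "config_data=" in l)

cfg = extract_config_data(event_line)
print(cfg["healthcheck"]["test"], cfg.get("net"), len(cfg["volumes"]))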
Dec 5 03:24:37 localhost podman[73092]: 2025-12-05 08:24:37.215511319 +0000 UTC m=+0.100095522 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 03:24:37 localhost podman[73092]: 2025-12-05 08:24:37.235687754 +0000 UTC m=+0.120271967 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, 
version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 5 03:24:37 localhost podman[73094]: 2025-12-05 08:24:37.191828676 +0000 UTC m=+0.070693712 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, config_id=tripleo_step4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, 
container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git) Dec 5 03:24:37 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:24:37 localhost podman[73094]: 2025-12-05 08:24:37.27171221 +0000 UTC m=+0.150577246 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, version=17.1.12, container_name=ceilometer_agent_ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:24:37 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:24:37 localhost podman[73093]: 2025-12-05 08:24:37.31368047 +0000 UTC m=+0.195305392 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:24:37 localhost podman[73093]: 2025-12-05 08:24:37.321372368 +0000 UTC m=+0.202997240 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:24:37 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:24:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:24:38 localhost systemd[1]: tmp-crun.o2zK8A.mount: Deactivated successfully. 
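The two m=+<seconds> offsets in each podman pair are monotonic readings taken since that podman process started, so their difference approximates how long the check ran: the nova_migration_target run logged at 03:24:07 went from m=+0.063 at health_status to m=+0.413 at exec_died, roughly 0.35 s. A sketch that pairs the events by podman PID (the number in podman[73163]) and prints the gap, assuming one event per line in the saved log:

import re
from collections import defaultdict

# 'm=+<seconds>' is the monotonic offset since that podman process started.
MONO = re.compile(
    r"podman\[(?P<pid>\d+)\].* m=\+(?P<off>[0-9.]+) container "
    r"(?P<event>health_status|exec_died)"
)

runs = defaultdict(dict)                 # pid -> {event: offset}
with open("messages.log") as fh:         # hypothetical path, as above
    for line in fh:
        m = MONO.search(line)
        if m:
            runs[m["pid"]][m["event"]] = float(m["off"])

for pid, ev in sorted(runs.items()):
    if {"health_status", "exec_died"} <= ev.keys():
        print(f"podman[{pid}]: check took "
              f"~{ev['exec_died'] - ev['health_status']:.3f}s")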
Dec 5 03:24:38 localhost podman[73163]: 2025-12-05 08:24:38.210331664 +0000 UTC m=+0.096047366 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:24:38 localhost podman[73163]: 2025-12-05 08:24:38.561030907 +0000 UTC m=+0.446746569 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Dec 5 03:24:38 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:24:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:24:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:24:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:24:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:24:43 localhost systemd[1]: tmp-crun.6Qez0R.mount: Deactivated successfully. Dec 5 03:24:43 localhost systemd[1]: tmp-crun.M16o6J.mount: Deactivated successfully. 
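Because the transient units are named after the full 64-hex container ID (94e41b96….service above), the healthcheck units can be enumerated straight from systemd. A sketch, assuming systemctl access on the host; the hex-name filter is the only thing distinguishing them from ordinary services:

import re
import subprocess

# Transient podman healthcheck units are named <64-hex-container-id>.service/.timer.
HEXUNIT = re.compile(r"^[0-9a-f]{64}\.(service|timer)$")

out = subprocess.run(
    ["systemctl", "list-units", "--all", "--no-legend", "--plain"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    fields = line.split()
    if fields and HEXUNIT.match(fields[0]):
        # Map the ID back to a container name with
        # 'podman ps --no-trunc --format "{{.ID}} {{.Names}}"'.
        print(fields[0])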
Dec 5 03:24:43 localhost podman[73190]: 2025-12-05 08:24:43.259782704 +0000 UTC m=+0.140659969 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, vcs-type=git, container_name=ovn_controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044) Dec 5 03:24:43 localhost podman[73190]: 2025-12-05 08:24:43.305718617 +0000 UTC m=+0.186595872 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_controller, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 
6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.openshift.expose-services=) Dec 5 03:24:43 localhost podman[73187]: 2025-12-05 08:24:43.313583201 +0000 UTC m=+0.202875117 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, 
container_name=ovn_metadata_agent, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:24:43 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:24:43 localhost podman[73188]: 2025-12-05 08:24:43.239076072 +0000 UTC m=+0.126889681 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, tcib_managed=true, name=rhosp17/openstack-iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Dec 5 03:24:43 localhost podman[73189]: 2025-12-05 08:24:43.353040313 +0000 UTC m=+0.237926102 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step3, batch=17.1_20251118.1, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T22:51:28Z, tcib_managed=true) Dec 5 03:24:43 localhost podman[73189]: 2025-12-05 08:24:43.368520332 +0000 UTC m=+0.253406071 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, vcs-type=git, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, 
architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public) Dec 5 03:24:43 localhost podman[73188]: 2025-12-05 08:24:43.368830291 +0000 UTC m=+0.256643860 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 5 03:24:43 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:24:43 localhost podman[73187]: 2025-12-05 08:24:43.407066116 +0000 UTC m=+0.296358032 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:24:43 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:24:43 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:24:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:24:50 localhost podman[73270]: 2025-12-05 08:24:50.189661691 +0000 UTC m=+0.081691001 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:24:50 localhost podman[73270]: 2025-12-05 08:24:50.378718908 +0000 UTC m=+0.270748158 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 5 03:24:50 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:25:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:25:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:25:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:25:07 localhost podman[73348]: 2025-12-05 08:25:07.521258141 +0000 UTC m=+0.085866236 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-18T22:49:32Z, vcs-type=git, name=rhosp17/openstack-cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com) Dec 5 03:25:07 localhost podman[73348]: 2025-12-05 08:25:07.52737649 +0000 UTC m=+0.091984565 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond) Dec 5 03:25:07 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:25:07 localhost podman[73346]: 2025-12-05 08:25:07.582825778 +0000 UTC m=+0.146832124 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:25:07 localhost podman[73346]: 2025-12-05 08:25:07.603276178 +0000 UTC m=+0.167282514 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:25:07 localhost 
systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:25:07 localhost python3[73347]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:25:07 localhost podman[73349]: 2025-12-05 08:25:07.675664138 +0000 UTC m=+0.239669654 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:25:07 localhost podman[73349]: 2025-12-05 08:25:07.732643344 +0000 UTC m=+0.296648800 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:25:07 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:25:07 localhost python3[73461]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764923107.2267482-113387-35793455268340/source _original_basename=tmprnjue_83 follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:25:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:25:08 localhost podman[73550]: 2025-12-05 08:25:08.756903947 +0000 UTC m=+0.135134544 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 5 03:25:08 localhost podman[73614]: 2025-12-05 08:25:08.910885721 +0000 UTC m=+0.098563847 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, GIT_BRANCH=main, io.buildah.version=1.41.4, name=rhceph, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, architecture=x86_64) Dec 5 03:25:08 localhost python3[73604]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:25:09 localhost podman[73614]: 2025-12-05 08:25:09.016950218 +0000 UTC m=+0.204628384 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, GIT_BRANCH=main, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True) Dec 5 03:25:09 localhost podman[73550]: 2025-12-05 08:25:09.124925045 +0000 UTC m=+0.503155632 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 5 03:25:09 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:25:10 localhost ansible-async_wrapper.py[73914]: Invoked with 82211110734 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764923110.037758-113816-29674247053976/AnsiballZ_command.py _ Dec 5 03:25:10 localhost ansible-async_wrapper.py[73925]: Starting module and watcher Dec 5 03:25:10 localhost ansible-async_wrapper.py[73925]: Start watching 73926 (3600) Dec 5 03:25:10 localhost ansible-async_wrapper.py[73926]: Start module (73926) Dec 5 03:25:10 localhost ansible-async_wrapper.py[73914]: Return async_wrapper task started. Dec 5 03:25:10 localhost python3[73953]: ansible-ansible.legacy.async_status Invoked with jid=82211110734.73914 mode=status _async_dir=/tmp/.ansible_async Dec 5 03:25:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:25:13 localhost podman[74012]: 2025-12-05 08:25:13.458020732 +0000 UTC m=+0.091388076 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:25:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:25:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:25:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. 
Dec 5 03:25:13 localhost podman[74012]: 2025-12-05 08:25:13.482159926 +0000 UTC m=+0.115527290 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vcs-type=git, release=1761123044, container_name=ovn_controller) Dec 5 03:25:13 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
Dec 5 03:25:13 localhost podman[74088]: 2025-12-05 08:25:13.549347916 +0000 UTC m=+0.065176279 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:25:13 localhost podman[74092]: 2025-12-05 08:25:13.604684911 +0000 UTC m=+0.119906496 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, container_name=iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step3, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 5 03:25:13 localhost podman[74088]: 2025-12-05 08:25:13.61246981 +0000 UTC m=+0.128298153 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 5 03:25:13 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:25:13 localhost podman[74092]: 2025-12-05 08:25:13.637573113 +0000 UTC m=+0.152794598 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, container_name=iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:25:13 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:25:13 localhost podman[74093]: 2025-12-05 08:25:13.675404849 +0000 UTC m=+0.184961529 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, 
config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Dec 5 03:25:13 localhost podman[74093]: 2025-12-05 08:25:13.688547504 +0000 UTC m=+0.198104154 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:25:13 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:25:14 localhost puppet-user[73944]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Dec 5 03:25:14 localhost puppet-user[73944]: (file: /etc/puppet/hiera.yaml) Dec 5 03:25:14 localhost puppet-user[73944]: Warning: Undefined variable '::deploy_config_name'; Dec 5 03:25:14 localhost puppet-user[73944]: (file & line not available) Dec 5 03:25:14 localhost puppet-user[73944]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 5 03:25:14 localhost puppet-user[73944]: (file & line not available) Dec 5 03:25:14 localhost puppet-user[73944]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Dec 5 03:25:14 localhost puppet-user[73944]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 5 03:25:14 localhost puppet-user[73944]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 5 03:25:14 localhost puppet-user[73944]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 5 03:25:14 localhost puppet-user[73944]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 5 03:25:14 localhost puppet-user[73944]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 5 03:25:14 localhost puppet-user[73944]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 5 03:25:14 localhost puppet-user[73944]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 5 03:25:14 localhost puppet-user[73944]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 5 03:25:14 localhost puppet-user[73944]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 5 03:25:14 localhost puppet-user[73944]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 5 03:25:14 localhost puppet-user[73944]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 5 03:25:14 localhost puppet-user[73944]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 5 03:25:14 localhost puppet-user[73944]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 5 03:25:14 localhost puppet-user[73944]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 5 03:25:14 localhost puppet-user[73944]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 5 03:25:14 localhost puppet-user[73944]: with Pattern[]. 
There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 5 03:25:14 localhost puppet-user[73944]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 5 03:25:14 localhost puppet-user[73944]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Dec 5 03:25:14 localhost puppet-user[73944]: Notice: Compiled catalog for np0005546419.localdomain in environment production in 0.22 seconds Dec 5 03:25:15 localhost puppet-user[73944]: Notice: Applied catalog in 0.30 seconds Dec 5 03:25:15 localhost puppet-user[73944]: Application: Dec 5 03:25:15 localhost puppet-user[73944]: Initial environment: production Dec 5 03:25:15 localhost puppet-user[73944]: Converged environment: production Dec 5 03:25:15 localhost puppet-user[73944]: Run mode: user Dec 5 03:25:15 localhost puppet-user[73944]: Changes: Dec 5 03:25:15 localhost puppet-user[73944]: Events: Dec 5 03:25:15 localhost puppet-user[73944]: Resources: Dec 5 03:25:15 localhost puppet-user[73944]: Total: 19 Dec 5 03:25:15 localhost puppet-user[73944]: Time: Dec 5 03:25:15 localhost puppet-user[73944]: Schedule: 0.00 Dec 5 03:25:15 localhost puppet-user[73944]: Package: 0.00 Dec 5 03:25:15 localhost puppet-user[73944]: Augeas: 0.01 Dec 5 03:25:15 localhost puppet-user[73944]: Exec: 0.01 Dec 5 03:25:15 localhost puppet-user[73944]: File: 0.02 Dec 5 03:25:15 localhost puppet-user[73944]: Service: 0.07 Dec 5 03:25:15 localhost puppet-user[73944]: Config retrieval: 0.28 Dec 5 03:25:15 localhost puppet-user[73944]: Transaction evaluation: 0.29 Dec 5 03:25:15 localhost puppet-user[73944]: Catalog application: 0.30 Dec 5 03:25:15 localhost puppet-user[73944]: Last run: 1764923115 Dec 5 03:25:15 localhost puppet-user[73944]: Filebucket: 0.00 Dec 5 03:25:15 localhost puppet-user[73944]: Total: 0.31 Dec 5 03:25:15 localhost puppet-user[73944]: Version: Dec 5 03:25:15 localhost puppet-user[73944]: Config: 1764923114 Dec 5 03:25:15 localhost puppet-user[73944]: Puppet: 7.10.0 Dec 5 03:25:15 localhost ansible-async_wrapper.py[73926]: Module complete (73926) Dec 5 03:25:15 localhost ansible-async_wrapper.py[73925]: Done in kid B. Dec 5 03:25:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:25:21 localhost podman[74180]: 2025-12-05 08:25:21.212228342 +0000 UTC m=+0.090529639 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64) Dec 5 03:25:21 localhost python3[74186]: ansible-ansible.legacy.async_status Invoked with jid=82211110734.73914 mode=status _async_dir=/tmp/.ansible_async Dec 5 03:25:21 localhost podman[74180]: 2025-12-05 08:25:21.411693407 +0000 UTC m=+0.289994704 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, release=1761123044, 
config_id=tripleo_step1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:25:21 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
Dec 5 03:25:22 localhost python3[74225]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 5 03:25:22 localhost python3[74241]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:25:22 localhost python3[74291]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:25:23 localhost python3[74309]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpbwju2aw5 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 5 03:25:23 localhost python3[74339]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:25:24 localhost python3[74444]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Dec 5 03:25:25 localhost python3[74463]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:25:26 localhost python3[74495]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:25:26 localhost python3[74545]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:25:27 localhost python3[74563]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown 
_original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:25:27 localhost python3[74625]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:25:27 localhost python3[74643]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:25:28 localhost python3[74705]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:25:28 localhost python3[74723]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:25:29 localhost python3[74785]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:25:29 localhost python3[74803]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:25:29 localhost python3[74833]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:25:29 localhost systemd[1]: Reloading. Dec 5 03:25:30 localhost systemd-rc-local-generator[74856]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:25:30 localhost systemd-sysv-generator[74859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 5 03:25:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:25:30 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:25:30 localhost recover_tripleo_nova_virtqemud[74872]: 61294 Dec 5 03:25:30 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:25:30 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:25:30 localhost python3[74921]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:25:31 localhost python3[74939]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:25:31 localhost python3[75001]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 5 03:25:31 localhost python3[75019]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:25:32 localhost python3[75049]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:25:32 localhost systemd[1]: Reloading. Dec 5 03:25:32 localhost systemd-rc-local-generator[75073]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:25:32 localhost systemd-sysv-generator[75079]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:25:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:25:32 localhost systemd[1]: Starting Create netns directory... Dec 5 03:25:32 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 5 03:25:32 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 5 03:25:32 localhost systemd[1]: Finished Create netns directory. 
Dec 5 03:25:33 localhost python3[75107]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 5 03:25:35 localhost python3[75165]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 5 03:25:36 localhost podman[75204]: 2025-12-05 08:25:36.044937605 +0000 UTC m=+0.092188931 container create 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step5, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc.) Dec 5 03:25:36 localhost systemd[1]: Started libpod-conmon-34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.scope. Dec 5 03:25:36 localhost podman[75204]: 2025-12-05 08:25:35.996003048 +0000 UTC m=+0.043254404 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 5 03:25:36 localhost systemd[1]: Started libcrun container. Dec 5 03:25:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb409f27dad615b0d7144028021b8876255f5931d4ff2baaeaea67c9909ae2e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 03:25:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb409f27dad615b0d7144028021b8876255f5931d4ff2baaeaea67c9909ae2e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 03:25:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb409f27dad615b0d7144028021b8876255f5931d4ff2baaeaea67c9909ae2e/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Dec 5 03:25:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb409f27dad615b0d7144028021b8876255f5931d4ff2baaeaea67c9909ae2e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 5 03:25:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcb409f27dad615b0d7144028021b8876255f5931d4ff2baaeaea67c9909ae2e/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 5 03:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:25:36 localhost podman[75204]: 2025-12-05 08:25:36.14182465 +0000 UTC m=+0.189075976 container init 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, container_name=nova_compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:25:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:25:36 localhost podman[75204]: 2025-12-05 08:25:36.176748565 +0000 UTC m=+0.223999931 container start 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=nova_compute, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public) Dec 5 03:25:36 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. 
Dec 5 03:25:36 localhost python3[75165]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7 --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume 
/var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 5 03:25:36 localhost systemd[1]: Created slice User Slice of UID 0. Dec 5 03:25:36 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Dec 5 03:25:36 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 5 03:25:36 localhost systemd[1]: Starting User Manager for UID 0... Dec 5 03:25:36 localhost podman[75225]: 2025-12-05 08:25:36.34828753 +0000 UTC m=+0.161703662 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:25:36 localhost systemd[75241]: Queued start job for default target Main User Target. Dec 5 03:25:36 localhost systemd[75241]: Created slice User Application Slice. Dec 5 03:25:36 localhost systemd[75241]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 5 03:25:36 localhost systemd[75241]: Started Daily Cleanup of User's Temporary Directories. Dec 5 03:25:36 localhost systemd[75241]: Reached target Paths. Dec 5 03:25:36 localhost systemd[75241]: Reached target Timers. Dec 5 03:25:36 localhost systemd[75241]: Starting D-Bus User Message Bus Socket... Dec 5 03:25:36 localhost systemd[75241]: Starting Create User's Volatile Files and Directories... Dec 5 03:25:36 localhost systemd[75241]: Finished Create User's Volatile Files and Directories. Dec 5 03:25:36 localhost systemd[75241]: Listening on D-Bus User Message Bus Socket. Dec 5 03:25:36 localhost systemd[75241]: Reached target Sockets. Dec 5 03:25:36 localhost systemd[75241]: Reached target Basic System. Dec 5 03:25:36 localhost systemd[75241]: Reached target Main User Target. Dec 5 03:25:36 localhost systemd[75241]: Startup finished in 154ms. Dec 5 03:25:36 localhost systemd[1]: Started User Manager for UID 0. Dec 5 03:25:36 localhost systemd[1]: Started Session c10 of User root. Dec 5 03:25:36 localhost podman[75225]: 2025-12-05 08:25:36.429629446 +0000 UTC m=+0.243045578 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:25:36 localhost podman[75225]: unhealthy Dec 5 03:25:36 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:25:36 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'. Dec 5 03:25:36 localhost systemd[1]: session-c10.scope: Deactivated successfully. 
Dec 5 03:25:36 localhost podman[75322]: 2025-12-05 08:25:36.666278166 +0000 UTC m=+0.063813077 container create 5bfc372d967f524466f983dc2a1ff38839499b0ab5804a266b65381cac9d30e4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, distribution-scope=public, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044) Dec 5 03:25:36 localhost systemd[1]: Started libpod-conmon-5bfc372d967f524466f983dc2a1ff38839499b0ab5804a266b65381cac9d30e4.scope. Dec 5 03:25:36 localhost systemd[1]: Started libcrun container. 
Dec 5 03:25:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3938ea501b837fe2e2a8057ca94c3f96e13f9f2be2e34549b5d3eefb9b400ac/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Dec 5 03:25:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3938ea501b837fe2e2a8057ca94c3f96e13f9f2be2e34549b5d3eefb9b400ac/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Dec 5 03:25:36 localhost podman[75322]: 2025-12-05 08:25:36.728663318 +0000 UTC m=+0.126198259 container init 5bfc372d967f524466f983dc2a1ff38839499b0ab5804a266b65381cac9d30e4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_wait_for_compute_service, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1) Dec 5 03:25:36 localhost podman[75322]: 2025-12-05 08:25:36.738194991 +0000 UTC m=+0.135729902 container start 5bfc372d967f524466f983dc2a1ff38839499b0ab5804a266b65381cac9d30e4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, tcib_managed=true, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_wait_for_compute_service, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step5, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:25:36 localhost podman[75322]: 2025-12-05 08:25:36.738345336 +0000 UTC m=+0.135880327 container attach 5bfc372d967f524466f983dc2a1ff38839499b0ab5804a266b65381cac9d30e4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible)
Dec 5 03:25:36 localhost podman[75322]: 2025-12-05 08:25:36.640498032 +0000 UTC m=+0.038032973 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 5 03:25:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.
Dec 5 03:25:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 03:25:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
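Every one of these entries carries the container's full TripleO definition in its config_data label, printed as a Python dict literal (note 'detach': False and the nested 'environment' dict), so it can be decoded mechanically rather than read by eye. A sketch, assuming only the brace-balanced literal form shown above; extract_config_data is an illustrative helper, not a podman or TripleO interface:

import ast

def extract_config_data(entry):
    """Decode the config_data={...} label from one podman journal entry.

    The value is a Python literal (single quotes, True/False), so
    ast.literal_eval can parse it once the braces are balanced.
    """
    start = entry.find("config_data=")
    if start < 0:
        return None
    i = entry.index("{", start)
    depth = 0
    for j in range(i, len(entry)):
        if entry[j] == "{":
            depth += 1
        elif entry[j] == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(entry[i:j + 1])
    return None  # unbalanced braces, e.g. a truncated entry

# Abridged example from the nova_wait_for_compute_service entry above:
entry = ("... config_data={'detach': False, 'environment': "
         "{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': "
         "'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', "
         "'net': 'host', 'start_order': 4, 'user': 'nova'} ...")
print(extract_config_data(entry)["net"])  # -> host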
Dec 5 03:25:38 localhost podman[75346]: 2025-12-05 08:25:38.207112784 +0000 UTC m=+0.089413386 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, container_name=logrotate_crond, managed_by=tripleo_ansible) Dec 5 03:25:38 localhost podman[75346]: 2025-12-05 08:25:38.219682471 +0000 UTC m=+0.101983073 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 5 03:25:38 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:25:38 localhost podman[75345]: 2025-12-05 08:25:38.311802029 +0000 UTC m=+0.193059599 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute) Dec 5 03:25:38 localhost podman[75345]: 2025-12-05 08:25:38.342647579 +0000 UTC m=+0.223905189 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:25:38 
localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:25:38 localhost podman[75347]: 2025-12-05 08:25:38.365951478 +0000 UTC m=+0.241733089 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:25:38 localhost podman[75347]: 2025-12-05 08:25:38.417110863 +0000 UTC m=+0.292892434 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 5 03:25:38 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:25:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:25:40 localhost systemd[1]: tmp-crun.MX0miF.mount: Deactivated successfully. 
Dec 5 03:25:40 localhost podman[75415]: 2025-12-05 08:25:40.181778227 +0000 UTC m=+0.073343831 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Dec 5 03:25:40 localhost podman[75415]: 2025-12-05 08:25:40.55359386 +0000 UTC m=+0.445159434 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 03:25:40 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully.
Dec 5 03:25:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.
Dec 5 03:25:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.
Dec 5 03:25:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.
Dec 5 03:25:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.
Dec 5 03:25:44 localhost systemd[1]: tmp-crun.zAWgqw.mount: Deactivated successfully.
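The bursts above (three checks started at 03:25:38, four more at 03:25:44) are the per-container healthcheck timers firing. To recover the effective check interval per container from a dump, the "Started /usr/bin/podman healthcheck run <id>" lines are enough. A sketch; syslog timestamps carry no year, so one must be assumed, and healthcheck_intervals is an illustrative name:

import re
from collections import defaultdict
from datetime import datetime

STARTED = re.compile(
    r"(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}) \S+ systemd\[1\]: "
    r"Started /usr/bin/podman healthcheck run (?P<cid>[0-9a-f]{64})"
)

def healthcheck_intervals(lines, year=2025):
    """Seconds between successive healthcheck runs, per container id."""
    runs = defaultdict(list)
    for line in lines:
        for m in STARTED.finditer(line):
            ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S")
            runs[m["cid"][:12]].append(ts)  # short id is enough to group
    return {cid: [int((b - a).total_seconds()) for a, b in zip(t, t[1:])]
            for cid, t in runs.items()}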
Dec 5 03:25:44 localhost podman[75438]: 2025-12-05 08:25:44.209631021 +0000 UTC m=+0.097205596 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, container_name=iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, distribution-scope=public, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team) Dec 5 03:25:44 localhost podman[75437]: 2025-12-05 08:25:44.249601822 +0000 UTC m=+0.138817338 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 5 03:25:44 localhost podman[75438]: 2025-12-05 08:25:44.296627271 +0000 UTC m=+0.184201846 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z) Dec 5 03:25:44 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:25:44 localhost podman[75439]: 2025-12-05 08:25:44.343389111 +0000 UTC m=+0.228005665 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:25:44 localhost podman[75440]: 2025-12-05 08:25:44.299194859 +0000 UTC m=+0.181396159 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:25:44 localhost podman[75439]: 2025-12-05 08:25:44.355716181 +0000 UTC m=+0.240332755 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step3, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 
'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:25:44 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
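When reading config_data it is worth remembering that it is the input tripleo_ansible consumes, not a literal podman command line: keys such as net, pid, privileged, memory, cap_add and ulimit correspond to podman run flags, while start_order and depends_on are pure orchestration hints. A deliberately rough translation sketch (to_podman_args is made up for illustration; the real deployment path differs):

def to_podman_args(cfg):
    """Approximate `podman run` argv for a config_data dict (illustrative)."""
    args = ["podman", "run"]
    if cfg.get("detach", True):
        args.append("--detach")
    for key in ("net", "pid", "ipc", "cgroupns", "user", "memory", "restart"):
        if key in cfg:
            args += [f"--{key}", cfg[key]]
    if cfg.get("privileged"):
        args.append("--privileged")
    for cap in cfg.get("cap_add", []):
        args += ["--cap-add", cap]
    for ulimit in cfg.get("ulimit", []):
        args += ["--ulimit", ulimit]
    for k, v in cfg.get("environment", {}).items():
        args += ["--env", f"{k}={v}"]
    if "healthcheck" in cfg:
        args += ["--health-cmd", cfg["healthcheck"]["test"]]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    args.append(cfg["image"])  # start_order/depends_on have no CLI equivalent
    return args

Applied to the collectd entry above this yields, among other flags, --memory 512m, --cap-add IPC_LOCK and --pid host, matching how the container is described in its config_data.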
Dec 5 03:25:44 localhost podman[75437]: 2025-12-05 08:25:44.370010451 +0000 UTC m=+0.259225997 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:25:44 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:25:44 localhost podman[75440]: 2025-12-05 08:25:44.382811755 +0000 UTC m=+0.265013055 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller)
Dec 5 03:25:44 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully.
Dec 5 03:25:46 localhost systemd[1]: Stopping User Manager for UID 0...
Dec 5 03:25:46 localhost systemd[75241]: Activating special unit Exit the Session...
Dec 5 03:25:46 localhost systemd[75241]: Stopped target Main User Target.
Dec 5 03:25:46 localhost systemd[75241]: Stopped target Basic System.
Dec 5 03:25:46 localhost systemd[75241]: Stopped target Paths.
Dec 5 03:25:46 localhost systemd[75241]: Stopped target Sockets.
Dec 5 03:25:46 localhost systemd[75241]: Stopped target Timers.
Dec 5 03:25:46 localhost systemd[75241]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 5 03:25:46 localhost systemd[75241]: Closed D-Bus User Message Bus Socket.
Dec 5 03:25:46 localhost systemd[75241]: Stopped Create User's Volatile Files and Directories.
Dec 5 03:25:46 localhost systemd[75241]: Removed slice User Application Slice.
Dec 5 03:25:46 localhost systemd[75241]: Reached target Shutdown.
Dec 5 03:25:46 localhost systemd[75241]: Finished Exit the Session.
Dec 5 03:25:46 localhost systemd[75241]: Reached target Exit the Session.
Dec 5 03:25:46 localhost systemd[1]: user@0.service: Deactivated successfully.
Dec 5 03:25:46 localhost systemd[1]: Stopped User Manager for UID 0.
Dec 5 03:25:46 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 5 03:25:46 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 5 03:25:46 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 5 03:25:46 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 5 03:25:46 localhost systemd[1]: Removed slice User Slice of UID 0.
Dec 5 03:25:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:25:52 localhost podman[75522]: 2025-12-05 08:25:52.175505152 +0000 UTC m=+0.070512723 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4)
Dec 5 03:25:52 localhost podman[75522]: 2025-12-05 08:25:52.388793392 +0000 UTC m=+0.283800973 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git,
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4) Dec 5 03:25:52 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:26:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
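
Every check in this log follows the same lifecycle: systemd starts a transient unit named after the container ID, podman records a health_status event followed by exec_died for the probe process, and the unit reports 'Deactivated successfully' when the probe exits 0. A minimal tally of outcomes per container, assuming a saved copy of this log ('messages.log' is a hypothetical filename, the container_name= and health_status= fields match the entries above, and one event per physical line is assumed):

    import re
    from collections import Counter

    NAME = re.compile(r"container_name=([\w-]+)")
    STATUS = re.compile(r"health_status=(\w+)")

    counts = Counter()
    with open("messages.log") as fh:
        for line in fh:
            name, status = NAME.search(line), STATUS.search(line)
            if name and status:  # only health_status events carry both fields
                counts[(name.group(1), status.group(1))] += 1

    for (name, status), n in sorted(counts.items()):
        print(f"{name:25s} {status:10s} {n:3d}")
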
Dec 5 03:26:07 localhost podman[75551]: 2025-12-05 08:26:07.190871494 +0000 UTC m=+0.078166379 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Dec 5 03:26:07 localhost podman[75551]: 2025-12-05 08:26:07.233645552 +0000 UTC m=+0.120940467 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, config_id=tripleo_step5, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:26:07 localhost podman[75551]: unhealthy Dec 5 03:26:07 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:26:07 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'. Dec 5 03:26:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. 
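
The unit that just failed belongs to nova_compute, whose healthcheck test above is '/openstack/healthcheck 5672', a probe of port 5672 (conventionally the AMQP messaging bus). The real script shipped in the kolla image does more than a socket test, but the reachability part can be mimicked from the host in a few lines (the loopback host is an assumption for illustration; the port comes from the config_data above). Exiting 1 is what produces the status=1/FAILURE that systemd records:

    import socket
    import sys

    def port_open(host, port, timeout=5.0):
        """True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        ok = port_open("127.0.0.1", 5672)  # port taken from the healthcheck test
        print("healthy" if ok else "unhealthy")
        sys.exit(0 if ok else 1)
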
Dec 5 03:26:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:26:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:26:09 localhost systemd[1]: tmp-crun.v0KMsW.mount: Deactivated successfully. Dec 5 03:26:09 localhost podman[75575]: 2025-12-05 08:26:09.201530637 +0000 UTC m=+0.082530845 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:26:09 localhost podman[75575]: 2025-12-05 08:26:09.222063629 +0000 UTC m=+0.103063857 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, 
build-date=2025-11-19T00:12:45Z, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044) Dec 5 03:26:09 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
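
Because the transient units are named after bare 64-hex container IDs (bdd10ad2....service just above), it helps to map an ID back to its container name when reading these entries. A sketch assuming the podman CLI is available on the host; the Go template passed to --format selects the Name field from podman inspect output:

    import subprocess

    def container_name(cid):
        """Resolve a container ID from a transient unit name to its name."""
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.Name}}", cid],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    # container_name("bdd10ad2ed63...")  ->  'ceilometer_agent_ipmi'
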
Dec 5 03:26:09 localhost podman[75574]: 2025-12-05 08:26:09.297125741 +0000 UTC m=+0.177922742 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, tcib_managed=true, container_name=logrotate_crond, name=rhosp17/openstack-cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:26:09 localhost podman[75574]: 2025-12-05 08:26:09.331566582 +0000 UTC m=+0.212363513 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, container_name=logrotate_crond, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:26:09 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:26:09 localhost podman[75573]: 2025-12-05 08:26:09.349393582 +0000 UTC m=+0.234107884 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 5 03:26:09 localhost podman[75573]: 2025-12-05 08:26:09.401673062 +0000 UTC m=+0.286387314 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute) Dec 5 
03:26:09 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:26:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:26:11 localhost podman[75658]: 2025-12-05 08:26:11.011561367 +0000 UTC m=+0.085113624 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:26:11 localhost podman[75658]: 2025-12-05 08:26:11.35187415 +0000 UTC m=+0.425426457 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:26:11 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:26:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:26:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:26:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:26:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:26:15 localhost systemd[1]: tmp-crun.VaQNLd.mount: Deactivated successfully. Dec 5 03:26:15 localhost systemd[1]: tmp-crun.GXPoZW.mount: Deactivated successfully. 
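
The four checks launched together at 03:26:15 are a coincidence of independent timers: each container's healthcheck recurs on its own roughly 30-second schedule (metrics_qdr at 03:25:52 and 03:26:23, nova_compute at 03:26:07 and 03:26:38). The spacing can be measured from podman's UTC timestamps; 'messages.log' is again a hypothetical saved copy, and the capture keeps only six fractional digits because strptime's %f accepts no more:

    import re
    from datetime import datetime

    EVENT = re.compile(
        r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6})\d*"
        r"[^()]*? container health_status \w+ \(image=[^,]+, name=([\w-]+)"
    )

    last = {}
    with open("messages.log") as fh:
        for line in fh:
            for stamp, name in EVENT.findall(line):
                ts = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S.%f")
                if name in last:
                    gap = (ts - last[name]).total_seconds()
                    print(f"{name}: {gap:.0f}s since previous check")
                last[name] = ts
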
Dec 5 03:26:15 localhost podman[75743]: 2025-12-05 08:26:15.177299019 +0000 UTC m=+0.067460989 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step3, container_name=collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:26:15 localhost podman[75741]: 2025-12-05 08:26:15.240377382 +0000 UTC m=+0.129361766 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Dec 5 03:26:15 localhost podman[75743]: 2025-12-05 08:26:15.256989884 +0000 UTC m=+0.147151904 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, 
name=rhosp17/openstack-collectd, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3) Dec 5 03:26:15 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:26:15 localhost podman[75744]: 2025-12-05 08:26:15.30719325 +0000 UTC m=+0.191386876 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, architecture=x86_64) Dec 5 03:26:15 localhost podman[75742]: 2025-12-05 08:26:15.209543572 +0000 UTC m=+0.098696031 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, url=https://www.redhat.com, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:26:15 localhost podman[75741]: 2025-12-05 08:26:15.315393413 +0000 UTC m=+0.204377767 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, tcib_managed=true, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:26:15 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
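
The volumes lists in these config_data blocks encode bind mounts as 'source:destination[:options]', where ro means read-only, z asks podman to relabel the content for SELinux, and shared sets mount propagation. A small helper for splitting the specs once config_data has been parsed (see the extract_config_data sketch earlier):

    def parse_volume(spec):
        """Split a podman bind-mount spec into (source, destination, options).

        parse_volume('/var/lib/openvswitch/ovn:/run/ovn:shared,z')
        -> ('/var/lib/openvswitch/ovn', '/run/ovn', ['shared', 'z'])
        """
        src, dst, *rest = spec.split(":")
        opts = rest[0].split(",") if rest else []
        return src, dst, opts
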
Dec 5 03:26:15 localhost podman[75744]: 2025-12-05 08:26:15.335622956 +0000 UTC m=+0.219816572 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container) Dec 5 03:26:15 localhost podman[75742]: 2025-12-05 08:26:15.34063425 +0000 UTC m=+0.229786689 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 5 03:26:15 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:26:15 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:26:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:26:23 localhost podman[75827]: 2025-12-05 08:26:23.180029525 +0000 UTC m=+0.067547273 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step1, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T22:49:46Z) Dec 5 03:26:23 localhost podman[75827]: 2025-12-05 08:26:23.380751858 +0000 UTC m=+0.268269666 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:26:23 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:26:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
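
The m=+N.NNN field in each podman line is a monotonic offset from the start of that podman process, so pairing a health_status event with the exec_died event written by the same process gives the duration of one probe: for the metrics_qdr check just above, 0.268 - 0.068 is roughly 0.2 s. A sketch under that same-process pairing assumption:

    import re

    M_EVENT = re.compile(
        r"m=\+(\d+\.\d+) container (health_status|exec_died) ([0-9a-f]{64})"
    )

    def probe_durations(text):
        """Yield (short container id, seconds) for each completed probe."""
        started = {}
        for off, kind, cid in M_EVENT.findall(text):
            if kind == "health_status":
                started[cid] = float(off)
            elif cid in started:
                yield cid[:12], float(off) - started.pop(cid)
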
Dec 5 03:26:38 localhost systemd[1]: tmp-crun.J8ryVQ.mount: Deactivated successfully. Dec 5 03:26:38 localhost podman[75857]: 2025-12-05 08:26:38.204107184 +0000 UTC m=+0.090442058 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:26:38 localhost podman[75857]: 2025-12-05 08:26:38.265832895 +0000 UTC m=+0.152167809 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4) Dec 5 03:26:38 localhost podman[75857]: unhealthy Dec 5 03:26:38 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:26:38 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'. Dec 5 03:26:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. 
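The "unhealthy" line and the unit's status=1/FAILURE mean the probe defined in this container's config_data, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, exited non-zero. A sketch that reruns the same probe by hand and surfaces its exit code, assuming the nova_compute container is still running (5672 is the port argument recorded in the log, presumably the AMQP port the script checks):

    import subprocess

    # Re-run the probe recorded in config_data's healthcheck 'test' entry.
    probe = ["podman", "exec", "nova_compute", "/openstack/healthcheck", "5672"]
    result = subprocess.run(probe, capture_output=True, text=True)
    print("exit code:", result.returncode)   # non-zero reproduces the failure above
    print(result.stdout or result.stderr)
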
Dec 5 03:26:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:26:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:26:40 localhost systemd[1]: tmp-crun.KvBoac.mount: Deactivated successfully. Dec 5 03:26:40 localhost podman[75880]: 2025-12-05 08:26:40.195522183 +0000 UTC m=+0.085985851 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 5 03:26:40 localhost podman[75880]: 2025-12-05 08:26:40.233616175 +0000 UTC m=+0.124079853 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, config_id=tripleo_step4, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-19T00:11:48Z) Dec 5 03:26:40 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:26:40 localhost podman[75882]: 2025-12-05 08:26:40.246861144 +0000 UTC m=+0.133894086 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:26:40 localhost podman[75881]: 2025-12-05 08:26:40.291832129 +0000 UTC m=+0.177232021 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, architecture=x86_64, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=logrotate_crond, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:26:40 localhost podman[75882]: 2025-12-05 08:26:40.298601797 +0000 UTC m=+0.185634749 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:26:40 localhost podman[75881]: 2025-12-05 08:26:40.304785889 +0000 UTC m=+0.190185801 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T22:49:32Z, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Dec 5 03:26:40 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:26:40 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:26:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
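ceilometer_agent_compute, ceilometer_agent_ipmi and logrotate_crond all carry config_id=tripleo_step4 in their labels, so an entire deployment step can be inspected at once with a label filter. A small sketch (container names and labels taken from the log; exact output formatting can vary across podman versions):

    import subprocess

    # List every container deployed in TripleO step 4 via its config_id label.
    out = subprocess.run(
        ["podman", "ps", "--filter", "label=config_id=tripleo_step4",
         "--format", "{{.Names}}"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.splitlines())  # e.g. ['ceilometer_agent_compute', 'ceilometer_agent_ipmi', ...]
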
Dec 5 03:26:42 localhost systemd[1]: tmp-crun.PtF4Bh.mount: Deactivated successfully. Dec 5 03:26:42 localhost podman[75953]: 2025-12-05 08:26:42.201868891 +0000 UTC m=+0.086972170 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:26:42 localhost podman[75953]: 2025-12-05 08:26:42.573890132 +0000 UTC m=+0.458993361 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-type=git, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z) Dec 5 03:26:42 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:26:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:26:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:26:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:26:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
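Each container above bind-mounts a service-specific JSON file over /var/lib/kolla/config_files/config.json and sets KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, which tells the Kolla entrypoint to re-copy configuration from the src mount on every start. A rough sketch of the shape of such a file, written as a Python dict; the field names follow the usual Kolla layout, but the real nova-migration-target.json is not shown in this log, so every value here is illustrative only:

    # Illustrative shape of a Kolla config.json (not the file from this host).
    kolla_config = {
        "command": "/usr/sbin/sshd -D",          # hypothetical service command
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/src/*",
                "dest": "/",
                "merge": True,                    # overlay onto existing files
                "preserve_properties": True,
            },
        ],
        "permissions": [
            {"path": "/var/lib/nova", "owner": "nova:nova", "recurse": True},
        ],
    }
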
Dec 5 03:26:46 localhost podman[75978]: 2025-12-05 08:26:46.187959829 +0000 UTC m=+0.073105663 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, config_id=tripleo_step3, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, vcs-type=git, distribution-scope=public, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vendor=Red Hat, Inc.) 
Dec 5 03:26:46 localhost podman[75978]: 2025-12-05 08:26:46.223659718 +0000 UTC m=+0.108805532 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, release=1761123044) Dec 5 03:26:46 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
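The transient healthcheck units are named after the full container ID (here 178ed7af5d41…), so working out which service a failed unit belongs to means mapping that ID back to a container name. A sketch, assuming podman on PATH:

    import subprocess

    def name_for(container_id: str) -> str:
        # Resolve a container ID (as used in the transient unit name) to its name.
        out = subprocess.run(
            ["podman", "ps", "-a", "--filter", f"id={container_id}",
             "--format", "{{.Names}}"],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        return out or "unknown"

    print(name_for("178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f"))  # -> iscsid
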
Dec 5 03:26:46 localhost podman[75977]: 2025-12-05 08:26:46.238846116 +0000 UTC m=+0.126878110 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Dec 5 03:26:46 localhost systemd[1]: tmp-crun.ZPIIs8.mount: Deactivated successfully. 
Dec 5 03:26:46 localhost podman[75985]: 2025-12-05 08:26:46.296234814 +0000 UTC m=+0.171012900 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:26:46 localhost podman[75977]: 2025-12-05 08:26:46.301093173 +0000 UTC m=+0.189125187 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Dec 5 03:26:46 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:26:46 localhost podman[75985]: 2025-12-05 08:26:46.315502587 +0000 UTC m=+0.190280673 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=) Dec 5 03:26:46 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
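ovn_controller's probe is '/openstack/healthcheck 6642', i.e. it also verifies that the OVN southbound DB port is reachable. The port half of that check can be approximated from the host with a plain TCP connect; the address below is a stand-in, since the real southbound DB host comes from the OVN configuration and is not shown in this log:

    import socket

    # Rough equivalent of the port part of '/openstack/healthcheck 6642':
    # attempt a TCP connect to the OVN southbound DB port.
    addr = ("127.0.0.1", 6642)   # placeholder address; use the real SB DB host
    try:
        with socket.create_connection(addr, timeout=3):
            print("port reachable")
    except OSError as exc:
        print("connect failed:", exc)
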
Dec 5 03:26:46 localhost podman[75979]: 2025-12-05 08:26:46.367441788 +0000 UTC m=+0.247671542 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com) Dec 5 03:26:46 localhost podman[75979]: 2025-12-05 08:26:46.399113643 +0000 UTC m=+0.279343327 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, 
batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, version=17.1.12) Dec 5 03:26:46 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:26:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:26:54 localhost podman[76061]: 2025-12-05 08:26:54.204107268 +0000 UTC m=+0.090745186 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, architecture=x86_64) Dec 5 03:26:54 localhost podman[76061]: 2025-12-05 08:26:54.409091203 +0000 UTC m=+0.295729071 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=metrics_qdr, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:26:54 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:27:03 localhost systemd[1]: session-27.scope: Deactivated successfully. Dec 5 03:27:03 localhost systemd[1]: session-27.scope: Consumed 2.904s CPU time. Dec 5 03:27:03 localhost systemd-logind[760]: Session 27 logged out. Waiting for processes to exit. Dec 5 03:27:03 localhost systemd-logind[760]: Removed session 27. Dec 5 03:27:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:27:09 localhost podman[76090]: 2025-12-05 08:27:09.175851135 +0000 UTC m=+0.064902400 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:27:09 localhost podman[76090]: 2025-12-05 08:27:09.260784682 +0000 UTC m=+0.149835917 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4) Dec 5 03:27:09 localhost podman[76090]: unhealthy Dec 5 03:27:09 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:27:09 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'. Dec 5 03:27:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. 
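The records above show the nova_compute healthcheck returning unhealthy and its transient systemd unit (named after the 64-character container ID) failing with status=1/FAILURE. A minimal shell sketch for reproducing and inspecting this by hand, using only the names, IDs, and test command that appear in the records; note the .State.Healthcheck JSON path varies by podman version and is an assumption here:

    # Re-run the container's healthcheck the same way the transient unit does
    podman healthcheck run nova_compute; echo "exit=$?"

    # Run the test command from config_data directly inside the container
    podman exec nova_compute /openstack/healthcheck 5672

    # Inspect the recorded health state (field may be .State.Health on newer podman)
    podman inspect nova_compute --format '{{ .State.Healthcheck.Status }}'

    # The failing transient unit, named after the full container ID from the log
    systemctl status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service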
Dec 5 03:27:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:27:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:27:11 localhost podman[76115]: 2025-12-05 08:27:11.188792527 +0000 UTC m=+0.078086427 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:27:11 localhost podman[76115]: 2025-12-05 08:27:11.240623304 +0000 UTC m=+0.129917204 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
url=https://www.redhat.com, vcs-type=git, tcib_managed=true, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc.) 
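Each health_status record carries the container's config_data, whose 'healthcheck' entry defines the test podman runs. As a rough sketch, a hand-run equivalent for ceilometer_agent_ipmi would look like the following; the interval value is an assumption (it is not recorded in this log), and TripleO generates the actual invocation itself:

    podman run -d --name ceilometer_agent_ipmi \
      --net host --privileged \
      --health-cmd /openstack/healthcheck \
      --health-interval 60s \
      registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
    # assumption: 60s interval; the volumes/environment from config_data are omitted for brevity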
Dec 5 03:27:11 localhost podman[76114]: 2025-12-05 08:27:11.2408159 +0000 UTC m=+0.129483070 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, container_name=logrotate_crond, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:27:11 localhost podman[76114]: 2025-12-05 08:27:11.252531771 +0000 UTC m=+0.141198931 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T22:49:32Z, vcs-type=git, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4) Dec 5 03:27:11 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:27:11 localhost podman[76113]: 2025-12-05 08:27:11.298025202 +0000 UTC m=+0.189806338 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, 
name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:27:11 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:27:11 localhost podman[76113]: 2025-12-05 08:27:11.349807378 +0000 UTC m=+0.241588534 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1) Dec 5 03:27:11 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:27:12 localhost podman[76215]: 2025-12-05 08:27:12.70413459 +0000 UTC m=+0.064797917 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.openshift.expose-services=, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute) Dec 5 03:27:13 localhost podman[76215]: 2025-12-05 08:27:13.026683126 +0000 UTC m=+0.387346473 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, vcs-type=git, 
description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64) Dec 5 03:27:13 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:27:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:27:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:27:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:27:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:27:17 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:27:17 localhost recover_tripleo_nova_virtqemud[76309]: 61294 Dec 5 03:27:17 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:27:17 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
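The tripleo_nova_virtqemud_recover.service run above logs a PID (61294) and deactivates cleanly. To review what drives these recovery checks and what they did, the unit named in the log can be queried directly; the existence of a matching .timer unit is an assumption:

    journalctl -u tripleo_nova_virtqemud_recover.service --since today
    systemctl list-timers --all | grep -i virtqemud   # assumes a timer triggers the check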
Dec 5 03:27:17 localhost podman[76287]: 2025-12-05 08:27:17.235774235 +0000 UTC m=+0.115283704 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3) Dec 5 03:27:17 localhost podman[76287]: 2025-12-05 08:27:17.277742638 +0000 UTC m=+0.157252117 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, container_name=iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 5 03:27:17 localhost systemd[1]: tmp-crun.slehQ6.mount: Deactivated successfully. Dec 5 03:27:17 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:27:17 localhost podman[76289]: 2025-12-05 08:27:17.295128893 +0000 UTC m=+0.132271906 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z) Dec 5 03:27:17 localhost podman[76286]: 2025-12-05 08:27:17.338101167 +0000 UTC m=+0.218911115 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 5 03:27:17 localhost podman[76289]: 2025-12-05 08:27:17.342654347 +0000 UTC m=+0.179797390 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-type=git, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:27:17 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:27:17 localhost podman[76286]: 2025-12-05 08:27:17.378672047 +0000 UTC m=+0.259481995 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, release=1761123044, container_name=ovn_metadata_agent, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:27:17 localhost systemd[1]: 
0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:27:17 localhost podman[76288]: 2025-12-05 08:27:17.417056409 +0000 UTC m=+0.254662987 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, architecture=x86_64, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 5 03:27:17 localhost podman[76288]: 2025-12-05 08:27:17.430529984 +0000 UTC m=+0.268136592 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container) Dec 5 03:27:17 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:27:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
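systemd identifies each healthcheck unit only by the full 64-character container ID, which makes records like the one above hard to correlate. A quick sketch for mapping the IDs back to container names (assuming podman ps accepts --no-trunc together with -q, as on recent versions):

    # Full container IDs alongside their names
    podman ps --all --no-trunc --format '{{.ID}} {{.Names}}'

    # e.g. resolve the transient unit for metrics_qdr
    systemctl status "$(podman ps -q --no-trunc --filter name=metrics_qdr).service"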
Dec 5 03:27:25 localhost podman[76375]: 2025-12-05 08:27:25.201565651 +0000 UTC m=+0.088201577 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:27:25 localhost podman[76375]: 2025-12-05 08:27:25.405763562 +0000 UTC m=+0.292399438 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, url=https://www.redhat.com, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, maintainer=OpenStack TripleO Team) Dec 5 03:27:25 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:27:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:27:40 localhost podman[76404]: 2025-12-05 08:27:40.19386682 +0000 UTC m=+0.081946015 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, container_name=nova_compute) Dec 5 03:27:40 localhost podman[76404]: 2025-12-05 08:27:40.279713684 +0000 UTC m=+0.167792829 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:27:40 localhost podman[76404]: unhealthy Dec 5 03:27:40 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:27:40 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'. Dec 5 03:27:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. 
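Unlike the healthy probes around it, nova_compute reports health_status=unhealthy: its check is '/openstack/healthcheck 5672' (port 5672, conventionally AMQP/RabbitMQ), podman prints "unhealthy", and the transient unit fails with status=1/FAILURE. To pull these verdicts out of journal text mechanically, a sketch that assumes the field order visible in the events here (image, then name, then health_status):

    import re

    EVENT_RE = re.compile(
        r"container health_status [0-9a-f]{64} "
        r"\(image=[^,]+, name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)"
    )

    def health_events(lines):
        # Yields (container_name, status) for each podman health_status event.
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield m.group("name"), m.group("status")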
Dec 5 03:27:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:27:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:27:42 localhost podman[76427]: 2025-12-05 08:27:42.184885906 +0000 UTC m=+0.066448028 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, name=rhosp17/openstack-cron, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, container_name=logrotate_crond, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:27:42 localhost podman[76426]: 2025-12-05 08:27:42.250331462 +0000 UTC m=+0.132683688 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.) 
Dec 5 03:27:42 localhost podman[76426]: 2025-12-05 08:27:42.269500943 +0000 UTC m=+0.151853209 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, distribution-scope=public, io.openshift.expose-services=, container_name=ceilometer_agent_compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:27:42 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:27:42 localhost podman[76427]: 2025-12-05 08:27:42.324031982 +0000 UTC m=+0.205594144 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, url=https://www.redhat.com, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, architecture=x86_64, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Dec 5 03:27:42 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 03:27:42 localhost podman[76428]: 2025-12-05 08:27:42.40899285 +0000 UTC m=+0.283436253 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4)
Dec 5 03:27:42 localhost podman[76428]: 2025-12-05 08:27:42.460065773 +0000 UTC m=+0.334509206 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, release=1761123044, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64)
Dec 5 03:27:42 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully.
Dec 5 03:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.
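The config_data blob podman attaches to each event is rendered as a Python literal, so it can be parsed rather than read by eye. A sketch using ast.literal_eval (safe for literals, unlike eval); the sample is abridged from the ceilometer_agent_ipmi entry above and assumes the blob is extracted from the line untruncated:

    import ast

    blob = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
            "'healthcheck': {'test': '/openstack/healthcheck'}, "
            "'net': 'host', 'privileged': True, 'restart': 'always'}")
    config_data = ast.literal_eval(blob)
    print(config_data["healthcheck"]["test"])  # /openstack/healthcheck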
Dec 5 03:27:43 localhost podman[76499]: 2025-12-05 08:27:43.17316313 +0000 UTC m=+0.068356926 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_migration_target) Dec 5 03:27:43 localhost podman[76499]: 2025-12-05 08:27:43.567629073 +0000 UTC m=+0.462822789 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:27:43 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:27:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:27:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:27:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:27:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
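The four units started at 03:27:48 are named after 64-character container IDs rather than container names, which makes failures like nova_compute's above tedious to attribute. A sketch that resolves a unit name back to its container using standard podman flags; the helper name is hypothetical:

    import subprocess

    def unit_to_container(unit: str) -> str:
        # Transient healthcheck units are named "<container-id>.service".
        cid = unit.removesuffix(".service")
        out = subprocess.run(
            ["podman", "ps", "--all", "--filter", f"id={cid}",
             "--format", "{{.Names}}"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    # e.g. unit_to_container("34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service")
    # would return "nova_compute" on this host.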
Dec 5 03:27:48 localhost podman[76530]: 2025-12-05 08:27:48.211600497 +0000 UTC m=+0.085866847 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-ovn-controller, release=1761123044, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, build-date=2025-11-18T23:34:05Z) Dec 5 03:27:48 localhost podman[76522]: 2025-12-05 08:27:48.195415239 +0000 UTC m=+0.083301998 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 5 03:27:48 localhost podman[76523]: 2025-12-05 08:27:48.245572944 +0000 UTC m=+0.126381935 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 5 03:27:48 localhost podman[76524]: 2025-12-05 08:27:48.306502281 +0000 UTC m=+0.186305321 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_id=tripleo_step3, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, container_name=collectd, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-collectd)
Dec 5 03:27:48 localhost podman[76524]: 2025-12-05 08:27:48.316821728 +0000 UTC m=+0.196624808 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, architecture=x86_64, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, url=https://www.redhat.com)
Dec 5 03:27:48 localhost podman[76522]: 2025-12-05 08:27:48.327997682 +0000 UTC m=+0.215884461 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, version=17.1.12, tcib_managed=true, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 5 03:27:48 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully.
Dec 5 03:27:48 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully.
Dec 5 03:27:48 localhost podman[76530]: 2025-12-05 08:27:48.384894106 +0000 UTC m=+0.259160456 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:27:48 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
Dec 5 03:27:48 localhost podman[76523]: 2025-12-05 08:27:48.436588427 +0000 UTC m=+0.317397468 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Dec 5 03:27:48 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:27:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:27:56 localhost podman[76605]: 2025-12-05 08:27:56.180088297 +0000 UTC m=+0.071099671 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:27:56 localhost podman[76605]: 2025-12-05 08:27:56.383632647 +0000 UTC m=+0.274644021 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, managed_by=tripleo_ansible, release=1761123044, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, version=17.1.12) Dec 5 03:27:56 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:28:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
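The repeats give the probe cadence directly: metrics_qdr is checked at 03:27:25 and again at 03:27:56, and nova_compute at 03:27:40 and (below) 03:28:11, consistent with podman's default 30-second healthcheck interval plus about a second of scheduling jitter. Checking the arithmetic from the syslog timestamps (the year is assumed, since syslog omits it):

    from datetime import datetime

    fmt = "%b %d %H:%M:%S"
    t1 = datetime.strptime("Dec 5 03:27:25", fmt)
    t2 = datetime.strptime("Dec 5 03:27:56", fmt)
    print((t2 - t1).total_seconds())  # 31.0 -> ~30s between runs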
Dec 5 03:28:11 localhost podman[76633]: 2025-12-05 08:28:11.19126427 +0000 UTC m=+0.078890931 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git) Dec 5 03:28:11 localhost podman[76633]: 2025-12-05 08:28:11.228041533 +0000 UTC m=+0.115668144 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack 
Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:28:11 localhost podman[76633]: unhealthy Dec 5 03:28:11 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:28:11 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'. Dec 5 03:28:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. 
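The exchange above is the healthcheck contract in miniature: systemd starts a transient unit named after the container id that wraps /usr/bin/podman healthcheck run, podman executes the test configured for the container (for nova_compute, '/openstack/healthcheck 5672', which checks the agent's AMQP connectivity), prints "unhealthy" and exits non-zero on failure, and systemd records status=1/FAILURE. A minimal sketch of that exit-code contract; the real /openstack/healthcheck script ships with openstack-tripleo-common, and the plain TCP probe below is only a loose stand-in for it:

```python
#!/usr/bin/env python3
"""Sketch of the healthcheck exit-code contract seen above.

The real test is openstack-tripleo-common's /openstack/healthcheck script;
this stand-in only mirrors the observable behaviour: exit 0 means healthy,
any non-zero exit makes podman log "unhealthy" and the transient systemd
unit fail with status=1/FAILURE.
"""
import socket
import sys


def healthcheck(port: int, host: str = "127.0.0.1", timeout: float = 5.0) -> int:
    # nova_compute's configured test is '/openstack/healthcheck 5672'; a
    # plain TCP probe of that port is a loose approximation (assumption).
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 0  # healthy
    except OSError as exc:
        print(f"unhealthy: {exc}", file=sys.stderr)
        return 1      # podman prints "unhealthy"; systemd sees exit-code


if __name__ == "__main__":
    sys.exit(healthcheck(int(sys.argv[1]) if len(sys.argv) > 1 else 5672))
```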
Dec 5 03:28:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:28:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:28:13 localhost systemd[1]: tmp-crun.DwRmSJ.mount: Deactivated successfully. Dec 5 03:28:13 localhost podman[76658]: 2025-12-05 08:28:13.217747909 +0000 UTC m=+0.091164879 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=) Dec 5 03:28:13 localhost systemd[1]: tmp-crun.Ntev7a.mount: Deactivated successfully. 
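Every podman record in this stream has the same shape: wall-clock timestamp, monotonic offset (m=+...), event type (health_status or exec_died), the 64-hex container id, then a parenthesized label dump. A small parser makes the stream tractable; this is a sketch that assumes one event per line and that, as in the records above, the container name is the label immediately following image=:

```python
import re

EVENT_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
    r"m=\+\S+ container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) "
    r"\((?P<labels>.*)\)"
)


def parse_event(line: str) -> dict:
    """Extract the interesting fields from one podman container event."""
    m = EVENT_RE.search(line)
    if m is None:
        raise ValueError("not a podman container event")
    labels = m.group("labels")
    # In these records the container name is the ", name=..." label right
    # after image=, before the image's own name=rhosp17/... label appears.
    name = re.search(r"[(,] ?name=([^,)]+)", labels)
    health = re.search(r"health_status=([^,)]+)", labels)
    return {
        "timestamp": m.group("ts"),
        "event": m.group("event"),            # health_status / exec_died
        "container_id": m.group("cid"),
        "name": name.group(1) if name else None,
        "health_status": health.group(1) if health else None,
    }
```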
Dec 5 03:28:13 localhost podman[76656]: 2025-12-05 08:28:13.271082892 +0000 UTC m=+0.147286238 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible) Dec 5 03:28:13 localhost podman[76658]: 2025-12-05 08:28:13.277677866 +0000 UTC m=+0.151094806 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z) Dec 5 03:28:13 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
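The timestamps also show the probe cadence: nova_compute was checked at 08:28:11 and again at 08:28:42, about 31 s apart, consistent with podman's default 30 s healthcheck interval plus scheduling slack. Measuring it takes one wrinkle, since podman prints nanosecond fractions that datetime cannot parse directly:

```python
from datetime import datetime


def check_interval(ts_a: str, ts_b: str) -> float:
    """Seconds between two podman event timestamps as logged above."""
    def trim(ts: str) -> str:
        # podman prints up to nanosecond precision; %f takes at most 6 digits
        return ts[: ts.index(".") + 7]

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    delta = datetime.strptime(trim(ts_b), fmt) - datetime.strptime(trim(ts_a), fmt)
    return delta.total_seconds()


# nova_compute's two probes in this excerpt:
print(check_interval("2025-12-05 08:28:11.19126427",
                     "2025-12-05 08:28:42.206063971"))  # ~31.0 s
```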
Dec 5 03:28:13 localhost podman[76657]: 2025-12-05 08:28:13.325406526 +0000 UTC m=+0.198538138 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team) Dec 5 03:28:13 localhost podman[76657]: 2025-12-05 08:28:13.339771648 +0000 UTC m=+0.212903310 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T22:49:32Z, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:28:13 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:28:13 localhost podman[76656]: 2025-12-05 08:28:13.353521202 +0000 UTC m=+0.229724608 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container) Dec 5 03:28:13 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:28:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:28:14 localhost podman[76742]: 2025-12-05 08:28:14.200229266 +0000 UTC m=+0.086230918 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container) Dec 5 03:28:14 localhost podman[76742]: 2025-12-05 08:28:14.599145045 +0000 UTC m=+0.485146747 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:28:14 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:28:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:28:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:28:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:28:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
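Folded into current state, this window reads: nova_compute unhealthy, everything else (the ceilometer agents, logrotate_crond, nova_migration_target, and the step-3/step-4 services below) healthy. A reduction over parsed events, assuming the parse_event() sketch shown earlier:

```python
def latest_health(events) -> dict:
    """Fold parse_event() dicts (in log order) into name -> latest status."""
    state: dict = {}
    for ev in events:
        if ev["event"] == "health_status" and ev["name"]:
            state[ev["name"]] = ev["health_status"]
    return state


# Over this window the fold would yield, e.g.:
# {'nova_compute': 'unhealthy', 'ceilometer_agent_ipmi': 'healthy',
#  'ceilometer_agent_compute': 'healthy', 'logrotate_crond': 'healthy',
#  'nova_migration_target': 'healthy', 'collectd': 'healthy', ...}
```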
Dec 5 03:28:19 localhost podman[76830]: 2025-12-05 08:28:19.207355078 +0000 UTC m=+0.084658629 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:28:19 localhost podman[76830]: 2025-12-05 08:28:19.243680997 +0000 UTC m=+0.120984508 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': 
'512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step3, container_name=collectd, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:28:19 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
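The config_data blob inside each label dump is the TripleO container definition itself: image, namespaces, privileges, limits (collectd carries 'memory': '512m' and cap_add: ['IPC_LOCK']), volumes, and the healthcheck test. As a toy illustration of what a subset of those keys corresponds to on a podman command line; the real lifecycle is driven by tripleo_ansible, not by anything like this function:

```python
def podman_args(name: str, cfg: dict) -> list:
    """Toy translation of a TripleO config_data dict into podman run flags.

    Handles only a subset of the keys visible in the labels above.
    """
    args = ["podman", "run", "--detach", "--name", name]
    for cap in cfg.get("cap_add", []):
        args += ["--cap-add", cap]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    if "net" in cfg:
        args += ["--network", cfg["net"]]
    if "pid" in cfg:
        args += ["--pid", cfg["pid"]]
    if cfg.get("privileged"):
        args += ["--privileged"]
    if "memory" in cfg:
        args += ["--memory", cfg["memory"]]
    if "user" in cfg:
        args += ["--user", cfg["user"]]
    for ulimit in cfg.get("ulimit", []):
        args += ["--ulimit", ulimit]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    if "healthcheck" in cfg:
        args += ["--health-cmd", cfg["healthcheck"]["test"]]
    return args + [cfg["image"]]
```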
Dec 5 03:28:19 localhost podman[76829]: 2025-12-05 08:28:19.257591075 +0000 UTC m=+0.138538498 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:28:19 localhost podman[76829]: 2025-12-05 08:28:19.266605333 +0000 UTC m=+0.147552766 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, container_name=iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc.) Dec 5 03:28:19 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:28:19 localhost podman[76828]: 2025-12-05 08:28:19.32266826 +0000 UTC m=+0.201598512 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64) Dec 5 03:28:19 localhost podman[76828]: 2025-12-05 08:28:19.372032151 +0000 UTC m=+0.250962393 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1) Dec 5 03:28:19 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:28:19 localhost podman[76831]: 2025-12-05 08:28:19.379415579 +0000 UTC m=+0.250874210 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com) Dec 5 03:28:19 localhost podman[76831]: 2025-12-05 08:28:19.45965257 +0000 UTC m=+0.331111241 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, tcib_managed=true) Dec 5 03:28:19 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:28:20 localhost systemd[1]: tmp-crun.j4m0kz.mount: Deactivated successfully. Dec 5 03:28:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
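Because every probe runs in a transient unit named after the container id, failed checks are easy to pick out of the journal; in this window the only "Failed with result" line belongs to nova_compute's unit at 03:28:11. A scanner for that pattern:

```python
import re

FAILED_RE = re.compile(
    r"(?P<cid>[0-9a-f]{64})\.service: Failed with result '(?P<result>[^']+)'"
)


def failed_healthchecks(journal_lines):
    """Yield (container_id, result) for each failed transient unit, e.g.
    nova_compute's 'exit-code' failure at 03:28:11 above."""
    for line in journal_lines:
        m = FAILED_RE.search(line)
        if m:
            yield m.group("cid"), m.group("result")
```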
Dec 5 03:28:27 localhost podman[76917]: 2025-12-05 08:28:27.203904126 +0000 UTC m=+0.089680583 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, url=https://www.redhat.com, container_name=metrics_qdr, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:28:27 localhost podman[76917]: 2025-12-05 08:28:27.400576665 +0000 UTC m=+0.286353132 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, version=17.1.12, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:28:27 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:28:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
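The systemd lines identify containers only by the 64-hex id, so a lookup table built from the podman events makes them readable (here 34a5cf22... resolves to nova_compute). Assuming the parsed-event dicts from the earlier sketch:

```python
def id_to_name(events) -> dict:
    """Container id -> name, from parse_event() dicts, so bare systemd
    lines can be attributed (34a5cf22... -> nova_compute here)."""
    return {ev["container_id"]: ev["name"] for ev in events if ev["name"]}
```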
Dec 5 03:28:42 localhost podman[77034]: 2025-12-05 08:28:42.206063971 +0000 UTC m=+0.091835810 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:28:42 localhost podman[77034]: 2025-12-05 08:28:42.244467534 +0000 UTC m=+0.130239363 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=nova_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step5) Dec 5 03:28:42 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:28:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:28:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:28:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
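Note the health_status=unhealthy on the nova_compute check above, while the surrounding containers report healthy. The transient units systemd starts here simply invoke `/usr/bin/podman healthcheck run <container-id>`, and podman's exit code carries the verdict, so the same check is easy to reproduce by hand. A sketch, assuming podman is on PATH and the container name from the log:

    import subprocess

    def is_healthy(container: str) -> bool:
        # Re-run the container's own healthcheck, exactly as the
        # transient ".service" units in this log do; podman exits 0
        # on a passing check and nonzero on a failing one.
        proc = subprocess.run(["podman", "healthcheck", "run", container])
        return proc.returncode == 0

    # is_healthy("nova_compute") would have returned False at 03:28:42,
    # matching the health_status=unhealthy field in the record above.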
Dec 5 03:28:44 localhost systemd[1]: tmp-crun.OtdrNx.mount: Deactivated successfully. Dec 5 03:28:44 localhost podman[77063]: 2025-12-05 08:28:44.207849929 +0000 UTC m=+0.085998600 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12) Dec 5 03:28:44 localhost podman[77061]: 2025-12-05 08:28:44.269102455 +0000 UTC m=+0.148700791 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git) Dec 5 03:28:44 localhost podman[77062]: 2025-12-05 08:28:44.311620306 +0000 UTC m=+0.190973195 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true) Dec 5 03:28:44 localhost podman[77062]: 2025-12-05 08:28:44.323798381 +0000 UTC m=+0.203151370 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, container_name=logrotate_crond, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4) Dec 5 03:28:44 localhost podman[77061]: 2025-12-05 08:28:44.330835587 +0000 UTC m=+0.210433883 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 03:28:44 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
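Every check in this stretch produces the same three-step pattern: a health_status record, an exec_died record (podman logging the exit of the healthcheck exec session), and a "Deactivated successfully" for the transient unit. Pairing the podman events per container is a one-regex job; a sketch that matches the record shape used throughout this log:

    import re
    from collections import defaultdict

    # Each podman record here follows the same shape:
    #   container <event> <64-hex id> (image=..., name=..., ...)
    EVENT = re.compile(
        r"container (?P<event>health_status|exec_died|died|cleanup) "
        r"(?P<cid>[0-9a-f]{64}) \(image=[^,]+, name=(?P<name>[^,]+),"
    )

    def events_by_container(lines):
        # Group podman container events by container name.
        out = defaultdict(list)
        for line in lines:
            for m in EVENT.finditer(line):
                out[m["name"]].append(m["event"])
        return out

    # Over this section it yields e.g.
    # {'ceilometer_agent_compute': ['health_status', 'exec_died'], ...}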
Dec 5 03:28:44 localhost podman[77063]: 2025-12-05 08:28:44.343025753 +0000 UTC m=+0.221174444 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi) Dec 5 03:28:44 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:28:44 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:28:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
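Two clocks appear in every record: the syslog prefix (local time, no year, e.g. "Dec 5 03:28:44") and podman's own UTC stamp ("2025-12-05 08:28:44 ... UTC"). Correlating the two recovers both the missing year and the host's UTC offset. A worked check against the ceilometer record above:

    from datetime import datetime, timedelta, timezone

    # Syslog prefix (year borrowed from podman's stamp, since syslog
    # omits it) ...
    syslog_local = datetime(2025, 12, 5, 3, 28, 44)
    # ... and podman's UTC stamp from the same record.
    podman_utc = datetime(2025, 12, 5, 8, 28, 44, tzinfo=timezone.utc)

    offset = syslog_local - podman_utc.replace(tzinfo=None)
    assert offset == timedelta(hours=-5)   # this host logs syslog in UTC-5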
Dec 5 03:28:45 localhost podman[77134]: 2025-12-05 08:28:45.195640179 +0000 UTC m=+0.080578353 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.buildah.version=1.41.4) Dec 5 03:28:45 localhost podman[77134]: 2025-12-05 08:28:45.603681 +0000 UTC m=+0.488619164 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1761123044, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=) Dec 5 03:28:45 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:28:48 localhost systemd[1]: libpod-5bfc372d967f524466f983dc2a1ff38839499b0ab5804a266b65381cac9d30e4.scope: Deactivated successfully. 
Dec 5 03:28:49 localhost podman[77158]: 2025-12-05 08:28:49.038290288 +0000 UTC m=+0.058540665 container died 5bfc372d967f524466f983dc2a1ff38839499b0ab5804a266b65381cac9d30e4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_wait_for_compute_service, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:28:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5bfc372d967f524466f983dc2a1ff38839499b0ab5804a266b65381cac9d30e4-userdata-shm.mount: Deactivated successfully. Dec 5 03:28:49 localhost systemd[1]: var-lib-containers-storage-overlay-e3938ea501b837fe2e2a8057ca94c3f96e13f9f2be2e34549b5d3eefb9b400ac-merged.mount: Deactivated successfully. 
Dec 5 03:28:49 localhost podman[77158]: 2025-12-05 08:28:49.075327619 +0000 UTC m=+0.095577956 container cleanup 5bfc372d967f524466f983dc2a1ff38839499b0ab5804a266b65381cac9d30e4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vendor=Red Hat, Inc., release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, config_id=tripleo_step5, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_wait_for_compute_service, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Dec 5 03:28:49 localhost systemd[1]: libpod-conmon-5bfc372d967f524466f983dc2a1ff38839499b0ab5804a266b65381cac9d30e4.scope: Deactivated successfully. 
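The died/cleanup pair above closes out nova_wait_for_compute_service, a blocking one-shot container ('detach': False). Its own output is not in this journal; the debug line that follows shows it went to the k8s-file log driver at /var/log/containers/stdouts/nova_wait_for_compute_service.log. A reader sketch, assuming the usual k8s-file layout of one event per line (RFC3339 timestamp, stream name, P/F partial-line flag, message) and well-formed lines:

    def read_k8s_file(path):
        # Yield (timestamp, stream, message) from a podman k8s-file log.
        with open(path) as fh:
            for line in fh:
                ts, stream, _flag, msg = line.rstrip("\n").split(" ", 3)
                yield ts, stream, msg

    # for ts, stream, msg in read_k8s_file(
    #         "/var/log/containers/stdouts/nova_wait_for_compute_service.log"):
    #     print(ts, stream, msg)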
Dec 5 03:28:49 localhost python3[75165]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=f3fe7c52055154c7f97b988e301af0d7 --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 5 03:28:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:28:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:28:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:28:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
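The PODMAN-CONTAINER-DEBUG line above is tripleo_container_manage printing the full `podman run` invocation it derived from config_data, so the mapping from dict keys to CLI flags is visible directly. A simplified reconstruction of that translation (the real module also emits --conmon-pidfile, --label, and --log-driver/--log-opt flags, as the logged command shows):

    def podman_argv(name: str, cfg: dict) -> list:
        # Approximate the config_data -> `podman run` translation that
        # tripleo_container_manage logged above; simplified sketch only.
        argv = ["podman", "run", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", "%s=%s" % (key, val)]
        if cfg.get("detach") is False:
            argv += ["--detach=False"]  # block until the container exits
        if "net" in cfg:
            argv += ["--network", cfg["net"]]
        if "user" in cfg:
            argv += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(cfg["image"])
        return argv

    # Feeding it the nova_wait_for_compute_service config_data reproduces
    # the flags in the debug line, modulo ordering and the extras noted above.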
Dec 5 03:28:49 localhost podman[77214]: 2025-12-05 08:28:49.635039462 +0000 UTC m=+0.089489548 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, architecture=x86_64) Dec 5 03:28:49 localhost podman[77215]: 2025-12-05 08:28:49.661067063 +0000 UTC m=+0.112046573 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, 
config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 5 03:28:49 localhost python3[77213]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:28:49 localhost podman[77216]: 2025-12-05 08:28:49.705824892 +0000 UTC m=+0.152810058 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat 
OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, architecture=x86_64, vcs-type=git, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3) Dec 5 03:28:49 localhost podman[77214]: 2025-12-05 08:28:49.719597646 +0000 UTC m=+0.174047702 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:28:49 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:28:49 localhost podman[77217]: 2025-12-05 08:28:49.767241414 +0000 UTC m=+0.208320968 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, 
distribution-scope=public, vendor=Red Hat, Inc.) Dec 5 03:28:49 localhost podman[77216]: 2025-12-05 08:28:49.79500944 +0000 UTC m=+0.241994636 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, tcib_managed=true, build-date=2025-11-18T22:51:28Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:28:49 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:28:49 localhost podman[77217]: 2025-12-05 08:28:49.822202318 +0000 UTC m=+0.263281892 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, architecture=x86_64, url=https://www.redhat.com, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:28:49 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
Dec 5 03:28:49 localhost podman[77215]: 2025-12-05 08:28:49.845785284 +0000 UTC m=+0.296764794 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 5 03:28:49 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:28:49 localhost python3[77312]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 5 03:28:50 localhost python3[77376]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764923330.0552087-118474-61513213086885/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 03:28:51 localhost python3[77392]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 03:28:51 localhost systemd[1]: Reloading. Dec 5 03:28:51 localhost systemd-rc-local-generator[77415]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:28:51 localhost systemd-sysv-generator[77418]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:28:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:28:51 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:28:51 localhost recover_tripleo_nova_virtqemud[77430]: 61294 Dec 5 03:28:51 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:28:51 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:28:52 localhost python3[77446]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 03:28:53 localhost systemd[1]: Reloading. Dec 5 03:28:53 localhost systemd-rc-local-generator[77476]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 03:28:53 localhost systemd-sysv-generator[77479]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:28:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 03:28:54 localhost systemd[1]: Starting nova_compute container... Dec 5 03:28:54 localhost tripleo-start-podman-container[77486]: Creating additional drop-in dependency for "nova_compute" (34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c) Dec 5 03:28:54 localhost systemd[1]: Reloading. Dec 5 03:28:54 localhost systemd-sysv-generator[77547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 03:28:54 localhost systemd-rc-local-generator[77544]: /etc/rc.d/rc.local is not marked executable, skipping. 
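The insights-client-boot.service deprecation notice recurs on every "Reloading." pass in this stretch (and once more below): MemoryLimit= is the old cgroup-v1 name for MemoryMax=. A drop-in override can move the effective setting to MemoryMax= without editing the packaged unit, though the parse-time warning itself persists until the package ships a fixed unit file. A hypothetical remediation sketch (run as root; the 1G value is illustrative, since the unit's real limit is not shown in this log):

    from pathlib import Path

    # Drop-in override for the deprecated setting warned about above.
    dropin = Path("/etc/systemd/system/insights-client-boot.service.d")
    dropin.mkdir(parents=True, exist_ok=True)
    (dropin / "10-memorymax.conf").write_text(
        "[Service]\n"
        "MemoryLimit=\n"   # empty assignment resets the deprecated setting
        "MemoryMax=1G\n"   # cgroup-v2 replacement (value illustrative)
    )
    # Follow with `systemctl daemon-reload` to apply the override.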
Dec 5 03:28:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 03:28:54 localhost systemd[1]: Started nova_compute container.
Dec 5 03:28:55 localhost python3[77583]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:28:56 localhost python3[77704]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005546419 step=5 update_config_hash_only=False
Dec 5 03:28:57 localhost python3[77720]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 03:28:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:28:57 localhost podman[77737]: 2025-12-05 08:28:57.600380246 +0000 UTC m=+0.085862465 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 5 03:28:57 localhost python3[77736]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 5 03:28:57 localhost podman[77737]: 2025-12-05 08:28:57.834789228 +0000 UTC m=+0.320271457 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible)
Dec 5 03:28:57 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:29:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.
Dec 5 03:29:13 localhost systemd[1]: tmp-crun.foFDTz.mount: Deactivated successfully.
Dec 5 03:29:13 localhost podman[77766]: 2025-12-05 08:29:13.209006565 +0000 UTC m=+0.092009236 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=)
Dec 5 03:29:13 localhost podman[77766]: 2025-12-05 08:29:13.267559209 +0000 UTC m=+0.150561950 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute)
Dec 5 03:29:13 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully.
Dec 5 03:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.
Dec 5 03:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 03:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
Dec 5 03:29:15 localhost podman[77792]: 2025-12-05 08:29:15.204411475 +0000 UTC m=+0.093982346 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:11:48Z, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 5 03:29:15 localhost podman[77792]: 2025-12-05 08:29:15.237772303 +0000 UTC m=+0.127343154 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 5 03:29:15 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully.
Dec 5 03:29:15 localhost podman[77793]: 2025-12-05 08:29:15.304597522 +0000 UTC m=+0.188851229 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Dec 5 03:29:15 localhost podman[77793]: 2025-12-05 08:29:15.319752918 +0000 UTC m=+0.204006615 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond)
Dec 5 03:29:15 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully.
Dec 5 03:29:15 localhost podman[77794]: 2025-12-05 08:29:15.406241793 +0000 UTC m=+0.287436026 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 5 03:29:15 localhost podman[77794]: 2025-12-05 08:29:15.43861397 +0000 UTC m=+0.319808133 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 03:29:15 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully.
Dec 5 03:29:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.
Dec 5 03:29:15 localhost podman[77877]: 2025-12-05 08:29:15.864414968 +0000 UTC m=+0.082488123 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) 
Dec 5 03:29:16 localhost podman[77877]: 2025-12-05 08:29:16.251796771 +0000 UTC m=+0.469869866 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, tcib_managed=true)
Dec 5 03:29:16 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully.
Dec 5 03:29:17 localhost sshd[77964]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 03:29:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.
Dec 5 03:29:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.
Dec 5 03:29:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.
Dec 5 03:29:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.
Dec 5 03:29:20 localhost podman[77966]: 2025-12-05 08:29:20.22717056 +0000 UTC m=+0.106186403 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 5 03:29:20 localhost podman[77966]: 2025-12-05 08:29:20.272826496 +0000 UTC m=+0.151842329 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 5 03:29:20 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully.
Dec 5 03:29:20 localhost systemd[1]: tmp-crun.Vwnlxq.mount: Deactivated successfully.
Dec 5 03:29:20 localhost podman[77967]: 2025-12-05 08:29:20.36096012 +0000 UTC m=+0.238940351 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, version=17.1.12)
Dec 5 03:29:20 localhost podman[77967]: 2025-12-05 08:29:20.37215883 +0000 UTC m=+0.250139081 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, container_name=iscsid, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc.)
Dec 5 03:29:20 localhost podman[77969]: 2025-12-05 08:29:20.330791574 +0000 UTC m=+0.202415673 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4)
Dec 5 03:29:20 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully.
Dec 5 03:29:20 localhost podman[77968]: 2025-12-05 08:29:20.418535327 +0000 UTC m=+0.296303342 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=collectd, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Dec 5 03:29:20 localhost podman[77968]: 2025-12-05 08:29:20.432739938 +0000 UTC m=+0.310507953 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 03:29:20 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully.
Dec 5 03:29:20 localhost podman[77969]: 2025-12-05 08:29:20.514705585 +0000 UTC m=+0.386329714 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, vcs-type=git, architecture=x86_64, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 5 03:29:20 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully.
Dec 5 03:29:22 localhost sshd[78051]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 03:29:22 localhost systemd-logind[760]: New session 33 of user zuul.
Dec 5 03:29:22 localhost systemd[1]: Started Session 33 of User zuul.
Dec 5 03:29:23 localhost python3[78160]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 03:29:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:29:28 localhost systemd[1]: tmp-crun.06lFsc.mount: Deactivated successfully.
Dec 5 03:29:28 localhost podman[78347]: 2025-12-05 08:29:28.255238952 +0000 UTC m=+0.136437091 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, vcs-type=git, container_name=metrics_qdr, config_id=tripleo_step1, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:29:28 localhost podman[78347]: 2025-12-05 08:29:28.46210138 +0000 UTC m=+0.343299439 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., 
konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 5 03:29:28 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
Dec 5 03:29:31 localhost python3[78450]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None Dec 5 03:29:38 localhost python3[78543]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None Dec 5 03:29:38 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Dec 5 03:29:38 localhost systemd-journald[47252]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation. Dec 5 03:29:38 localhost systemd-journald[47252]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 5 03:29:38 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 03:29:38 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 03:29:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:29:44 localhost systemd[1]: tmp-crun.GD57zK.mount: Deactivated successfully. 
Dec 5 03:29:44 localhost podman[78612]: 2025-12-05 08:29:44.218915753 +0000 UTC m=+0.103433409 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, release=1761123044, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:29:44 localhost podman[78612]: 2025-12-05 08:29:44.284949217 +0000 UTC m=+0.169466893 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, container_name=nova_compute, vcs-type=git, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1761123044, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12) Dec 5 03:29:44 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:29:46 localhost podman[78639]: 2025-12-05 08:29:46.207535225 +0000 UTC m=+0.094210019 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4) Dec 5 03:29:46 localhost podman[78640]: 2025-12-05 08:29:46.258497461 +0000 UTC m=+0.141281597 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron) Dec 5 03:29:46 localhost podman[78639]: 2025-12-05 08:29:46.265657709 +0000 UTC m=+0.152332553 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, container_name=ceilometer_agent_compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Dec 5 03:29:46 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:29:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:29:46 localhost podman[78640]: 2025-12-05 08:29:46.295693451 +0000 UTC m=+0.178477547 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, 
Inc., distribution-scope=public, version=17.1.12, architecture=x86_64, tcib_managed=true) Dec 5 03:29:46 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:29:46 localhost podman[78641]: 2025-12-05 08:29:46.362196929 +0000 UTC m=+0.243426918 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, release=1761123044, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4) Dec 5 03:29:46 localhost podman[78641]: 2025-12-05 08:29:46.417794395 +0000 UTC m=+0.299024384 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
build-date=2025-11-19T00:12:45Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 03:29:46 localhost podman[78693]: 2025-12-05 08:29:46.431618665 +0000 UTC m=+0.133219203 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container) Dec 5 03:29:46 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:29:46 localhost podman[78693]: 2025-12-05 08:29:46.825710953 +0000 UTC m=+0.527311491 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:29:46 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:29:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:29:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:29:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:29:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:29:51 localhost systemd[1]: tmp-crun.QbXsB0.mount: Deactivated successfully. Dec 5 03:29:51 localhost podman[78737]: 2025-12-05 08:29:51.272519536 +0000 UTC m=+0.146407524 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, release=1761123044, version=17.1.12, vcs-type=git, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd) Dec 5 03:29:51 localhost podman[78737]: 2025-12-05 08:29:51.280960752 +0000 UTC m=+0.154848780 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:29:51 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:29:51 localhost podman[78731]: 2025-12-05 08:29:51.250503307 +0000 UTC m=+0.130643264 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:29:51 localhost podman[78738]: 2025-12-05 08:29:51.224029195 +0000 UTC m=+0.093995554 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 
17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:29:51 localhost podman[78731]: 2025-12-05 08:29:51.333555968 +0000 UTC m=+0.213695895 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat 
OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., container_name=iscsid, tcib_managed=true) Dec 5 03:29:51 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:29:51 localhost podman[78730]: 2025-12-05 08:29:51.202343836 +0000 UTC m=+0.089606450 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:29:51 localhost podman[78730]: 2025-12-05 08:29:51.391752734 +0000 UTC m=+0.279015398 container exec_died 
0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64) Dec 5 03:29:51 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:29:51 localhost podman[78738]: 2025-12-05 08:29:51.410512704 +0000 UTC m=+0.280479063 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container) Dec 5 03:29:51 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:29:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:29:59 localhost systemd[1]: tmp-crun.Yb5Bgz.mount: Deactivated successfully. 
Dec 5 03:29:59 localhost podman[78816]: 2025-12-05 08:29:59.208510594 +0000 UTC m=+0.096028335 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12) Dec 5 03:29:59 localhost podman[78816]: 2025-12-05 08:29:59.441879455 +0000 UTC m=+0.329397196 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:29:59 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:30:09 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:30:09 localhost recover_tripleo_nova_virtqemud[78847]: 61294 Dec 5 03:30:09 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:30:09 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:30:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:30:15 localhost podman[78848]: 2025-12-05 08:30:15.189522732 +0000 UTC m=+0.076363458 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step5, container_name=nova_compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:30:15 localhost podman[78848]: 2025-12-05 08:30:15.244903883 +0000 UTC m=+0.131744559 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, container_name=nova_compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 5 03:30:15 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:30:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:30:17 localhost podman[78877]: 2025-12-05 08:30:17.207759433 +0000 UTC m=+0.083344189 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 03:30:17 localhost systemd[1]: tmp-crun.4YLQ2x.mount: Deactivated successfully. 
Dec 5 03:30:17 localhost podman[78875]: 2025-12-05 08:30:17.265354211 +0000 UTC m=+0.147545488 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 5 03:30:17 localhost podman[78875]: 2025-12-05 08:30:17.276594512 +0000 UTC m=+0.158785729 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=logrotate_crond, name=rhosp17/openstack-cron) Dec 5 03:30:17 localhost podman[78877]: 2025-12-05 08:30:17.287649818 +0000 UTC m=+0.163234634 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true) Dec 5 03:30:17 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:30:17 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:30:17 localhost podman[78874]: 2025-12-05 08:30:17.380447293 +0000 UTC m=+0.263780335 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, architecture=x86_64) Dec 5 03:30:17 localhost podman[78874]: 2025-12-05 08:30:17.410760263 +0000 UTC m=+0.294093355 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, tcib_managed=true) Dec 5 03:30:17 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:30:17 localhost podman[78876]: 2025-12-05 08:30:17.415708264 +0000 UTC m=+0.293011633 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:30:17 localhost podman[78876]: 2025-12-05 08:30:17.794734364 +0000 UTC m=+0.672037843 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, 
distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 5 03:30:17 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:30:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:30:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:30:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:30:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:30:22 localhost podman[79049]: 2025-12-05 08:30:22.209512125 +0000 UTC m=+0.080701300 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller) Dec 5 03:30:22 localhost systemd[1]: tmp-crun.ivfHpS.mount: Deactivated successfully. 
Dec 5 03:30:22 localhost podman[79048]: 2025-12-05 08:30:22.262580305 +0000 UTC m=+0.136274896 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4) Dec 5 03:30:22 localhost podman[79048]: 2025-12-05 08:30:22.274639931 +0000 UTC m=+0.148334552 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc.) 
Dec 5 03:30:22 localhost podman[79049]: 2025-12-05 08:30:22.285370377 +0000 UTC m=+0.156559612 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller) Dec 5 03:30:22 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:30:22 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
Dec 5 03:30:22 localhost podman[79046]: 2025-12-05 08:30:22.363658712 +0000 UTC m=+0.240606052 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 5 03:30:22 localhost podman[79047]: 2025-12-05 08:30:22.41271073 +0000 UTC m=+0.287474553 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.12, container_name=iscsid, description=Red Hat OpenStack Platform 
17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com) Dec 5 03:30:22 localhost podman[79046]: 2025-12-05 08:30:22.418592809 +0000 UTC m=+0.295540159 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, build-date=2025-11-19T00:14:25Z, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:30:22 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:30:22 localhost podman[79047]: 2025-12-05 08:30:22.447822436 +0000 UTC m=+0.322586299 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, container_name=iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 5 03:30:22 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:30:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:30:30 localhost podman[79126]: 2025-12-05 08:30:30.191378246 +0000 UTC m=+0.083692530 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 qdrouterd, container_name=metrics_qdr) Dec 5 03:30:30 localhost podman[79126]: 2025-12-05 08:30:30.416847237 +0000 UTC m=+0.309161511 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:30:30 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:30:37 localhost systemd[1]: session-33.scope: Deactivated successfully. Dec 5 03:30:37 localhost systemd[1]: session-33.scope: Consumed 6.315s CPU time. Dec 5 03:30:37 localhost systemd-logind[760]: Session 33 logged out. Waiting for processes to exit. Dec 5 03:30:37 localhost systemd-logind[760]: Removed session 33. Dec 5 03:30:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
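[Annotation] The records above show the cycle that repeats throughout this log: systemd starts a transient unit wrapping /usr/bin/podman healthcheck run <container-id>, podman logs a "container health_status" event carrying the container's full label set and config_data, then an "exec_died" event for the healthcheck process, and finally systemd reports the unit "Deactivated successfully". A minimal sketch for tallying those events per container, assuming only the "container health_status <64-hex-id> (image=..., name=..., health_status=...)" shape seen in these records; the regex and function names are illustrative, not part of the deployment:

    #!/usr/bin/env python3
    # Tally podman health_status events from a syslog excerpt such as this one.
    # Illustrative sketch: the regex assumes the record shape quoted above.
    import re
    import sys
    from collections import Counter

    EVENT_RE = re.compile(
        r"container health_status (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
        r"health_status=(?P<status>[^,)]+)"
    )

    def tally(lines):
        counts = Counter()
        for line in lines:
            # finditer, because this dump flattens several records per line.
            for m in EVENT_RE.finditer(line):
                counts[(m.group("name"), m.group("status"))] += 1
        return counts

    if __name__ == "__main__":
        for (name, status), n in sorted(tally(sys.stdin).items()):
            print(f"{name}: {status} x{n}")

Run over this excerpt it would report, for example, metrics_qdr: healthy x2, since that container is checked at 08:30:30 and again at 08:31:01 UTC below.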
Dec 5 03:30:46 localhost podman[79200]: 2025-12-05 08:30:46.179834107 +0000 UTC m=+0.068043246 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_id=tripleo_step5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, release=1761123044, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 5 03:30:46 localhost podman[79200]: 2025-12-05 08:30:46.271856869 +0000 UTC m=+0.160065958 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Dec 5 03:30:46 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:30:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:30:48 localhost podman[79226]: 2025-12-05 08:30:48.204828753 +0000 UTC m=+0.087049272 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:30:48 localhost podman[79226]: 2025-12-05 08:30:48.254882072 +0000 UTC m=+0.137102571 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:11:48Z, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public) Dec 5 03:30:48 localhost systemd[1]: tmp-crun.30gd2J.mount: Deactivated successfully. Dec 5 03:30:48 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:30:48 localhost podman[79234]: 2025-12-05 08:30:48.262214455 +0000 UTC m=+0.133400279 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4) Dec 5 03:30:48 localhost podman[79227]: 2025-12-05 08:30:48.312702027 +0000 UTC m=+0.191677938 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, 
version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vendor=Red Hat, Inc., container_name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:30:48 localhost podman[79227]: 2025-12-05 08:30:48.397432598 +0000 UTC m=+0.276408599 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, 
tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64) Dec 5 03:30:48 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:30:48 localhost podman[79228]: 2025-12-05 08:30:48.368642494 +0000 UTC m=+0.240741726 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 5 03:30:48 localhost podman[79234]: 2025-12-05 08:30:48.449724265 +0000 UTC m=+0.320910139 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team) Dec 5 03:30:48 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 03:30:48 localhost podman[79228]: 2025-12-05 08:30:48.759618507 +0000 UTC m=+0.631717729 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com) Dec 5 03:30:48 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:30:49 localhost systemd[1]: tmp-crun.sudT1I.mount: Deactivated successfully. Dec 5 03:30:49 localhost sshd[79321]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:30:49 localhost systemd-logind[760]: New session 34 of user zuul. Dec 5 03:30:50 localhost systemd[1]: Started Session 34 of User zuul. 
Dec 5 03:30:50 localhost python3[79340]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 5 03:30:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:30:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:30:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:30:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:30:53 localhost podman[79343]: 2025-12-05 08:30:53.203600886 +0000 UTC m=+0.082994220 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 5 03:30:53 localhost podman[79343]: 2025-12-05 08:30:53.216402194 +0000 UTC m=+0.095795548 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, tcib_managed=true, url=https://www.redhat.com) Dec 5 03:30:53 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
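[Annotation] The ansible-ansible.legacy.dnf record above (installing systemd-container) logs the module's resolved parameters as flat key=value pairs after "Invoked with". A sketch for turning such a record back into a dictionary; it assumes, as holds for this record, that every value is either a space-free Python literal or a bare string, and parse_invoked is a hypothetical helper, not an Ansible API:

    import ast
    import shlex

    def parse_invoked(record: str) -> dict:
        # Split "... Invoked with k=v k=v ..." into {k: v}, decoding Python
        # literals where possible (lists, booleans, ints, None).
        _, _, args = record.partition("Invoked with ")
        params = {}
        for token in shlex.split(args):
            key, _, value = token.partition("=")
            try:
                params[key] = ast.literal_eval(value)
            except (ValueError, SyntaxError):
                params[key] = value  # bare strings such as present or /
        return params

    # For the record above: params["name"] == ['systemd-container'],
    # params["state"] == 'present', params["lock_timeout"] == 30.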
Dec 5 03:30:53 localhost podman[79345]: 2025-12-05 08:30:53.259428439 +0000 UTC m=+0.135318037 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Dec 5 03:30:53 localhost podman[79345]: 2025-12-05 08:30:53.281754527 +0000 UTC m=+0.157644175 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.buildah.version=1.41.4, container_name=ovn_controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Dec 5 03:30:53 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:30:53 localhost podman[79344]: 2025-12-05 08:30:53.296183985 +0000 UTC m=+0.175489406 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, release=1761123044, com.redhat.component=openstack-collectd-container, 
description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, distribution-scope=public) Dec 5 03:30:53 localhost podman[79342]: 2025-12-05 08:30:53.35139359 +0000 UTC m=+0.229556547 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 5 03:30:53 localhost podman[79344]: 2025-12-05 08:30:53.382620827 +0000 UTC m=+0.261926248 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, release=1761123044, 
com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:30:53 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
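[Annotation] Each health_status and exec_died record embeds the container's TripleO definition as a config_data={...} Python literal, as in the collectd record above. Because that dict is nested, a balanced-brace scan is needed before ast.literal_eval; the sketch below assumes, true for these records, that no string inside config_data contains a brace, and extract_config_data is an illustrative name:

    import ast

    def extract_config_data(record: str) -> dict:
        # Pull the config_data={...} literal out of a podman label dump by
        # scanning for the matching closing brace, then eval it safely.
        start = record.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(record[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(record[start : i + 1])
        raise ValueError("unbalanced config_data literal")

    # Applied to the collectd record above:
    # cfg = extract_config_data(line)
    # ro = [v for v in cfg["volumes"] if v.endswith(":ro")]
    # leaves writable mounts such as
    # '/var/log/containers/collectd:/var/log/collectd:rw,z' outside ro.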
Dec 5 03:30:53 localhost podman[79342]: 2025-12-05 08:30:53.420709453 +0000 UTC m=+0.298872430 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 5 03:30:53 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:31:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:31:01 localhost sshd[79433]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:31:01 localhost systemd[1]: tmp-crun.9Xj3HH.mount: Deactivated successfully. 
Dec 5 03:31:01 localhost podman[79425]: 2025-12-05 08:31:01.210411853 +0000 UTC m=+0.102312816 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 5 03:31:01 localhost podman[79425]: 2025-12-05 08:31:01.417609861 +0000 UTC m=+0.309510854 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, architecture=x86_64, name=rhosp17/openstack-qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git) Dec 5 03:31:01 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:31:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:31:16 localhost systemd[1]: tmp-crun.d44MKx.mount: Deactivated successfully. 
Dec 5 03:31:16 localhost podman[79471]: 2025-12-05 08:31:16.382154835 +0000 UTC m=+0.061000341 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12) Dec 5 03:31:16 localhost podman[79471]: 2025-12-05 08:31:16.470881957 +0000 UTC m=+0.149727423 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, 
release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:31:16 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. 
Dec 5 03:31:16 localhost python3[79472]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 5 03:31:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:31:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:31:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:31:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:31:18 localhost systemd[1]: tmp-crun.OEb1hw.mount: Deactivated successfully. Dec 5 03:31:18 localhost podman[79512]: 2025-12-05 08:31:18.969618879 +0000 UTC m=+0.124380695 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12) Dec 5 03:31:18 localhost podman[79516]: 2025-12-05 08:31:18.939358351 +0000 UTC m=+0.092726535 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4) Dec 5 03:31:19 localhost podman[79514]: 2025-12-05 08:31:19.02304456 +0000 UTC m=+0.178877069 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, name=rhosp17/openstack-cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z) Dec 5 03:31:19 localhost podman[79512]: 2025-12-05 08:31:19.042889942 +0000 UTC m=+0.197651748 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 5 03:31:19 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:31:19 localhost podman[79514]: 2025-12-05 08:31:19.057374722 +0000 UTC m=+0.213207261 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, vcs-type=git, release=1761123044, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:31:19 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:31:19 localhost podman[79516]: 2025-12-05 08:31:19.073923544 +0000 UTC m=+0.227291768 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:31:19 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 03:31:19 localhost podman[79515]: 2025-12-05 08:31:19.137535814 +0000 UTC m=+0.290182656 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_migration_target) Dec 5 03:31:19 localhost podman[79515]: 2025-12-05 08:31:19.540819482 +0000 UTC m=+0.693466264 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Dec 5 03:31:19 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:31:20 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 5 03:31:20 localhost systemd[1]: Starting man-db-cache-update.service... Dec 5 03:31:20 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 5 03:31:20 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 5 03:31:20 localhost systemd[1]: Finished man-db-cache-update.service. Dec 5 03:31:20 localhost systemd[1]: run-rb81f745743e445d9b6101a9d6d1ed5b2.service: Deactivated successfully. Dec 5 03:31:20 localhost systemd[1]: run-r8e0624b32bf24d3a8e620fb738f22f3d.service: Deactivated successfully. Dec 5 03:31:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:31:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:31:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:31:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:31:24 localhost podman[79822]: 2025-12-05 08:31:24.197642938 +0000 UTC m=+0.071875202 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 5 03:31:24 localhost podman[79822]: 2025-12-05 08:31:24.206716244 +0000 UTC m=+0.080948588 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-type=git) Dec 5 03:31:24 localhost systemd[1]: tmp-crun.E7uDR1.mount: Deactivated successfully. Dec 5 03:31:24 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:31:24 localhost podman[79824]: 2025-12-05 08:31:24.270531399 +0000 UTC m=+0.135435480 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, tcib_managed=true, distribution-scope=public) Dec 5 03:31:24 localhost podman[79823]: 2025-12-05 08:31:24.223768261 +0000 UTC m=+0.090377134 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Dec 5 03:31:24 localhost podman[79821]: 2025-12-05 08:31:24.247294805 +0000 UTC m=+0.122919111 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:31:24 localhost podman[79823]: 2025-12-05 08:31:24.307635496 +0000 UTC m=+0.174244369 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.12, config_id=tripleo_step3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:31:24 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:31:24 localhost podman[79821]: 2025-12-05 08:31:24.331747767 +0000 UTC m=+0.207372083 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:31:24 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:31:24 localhost podman[79824]: 2025-12-05 08:31:24.361951914 +0000 UTC m=+0.226856035 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 03:31:24 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:31:31 localhost sshd[79905]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:31:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:31:32 localhost podman[79907]: 2025-12-05 08:31:32.205328263 +0000 UTC m=+0.082005989 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 5 03:31:32 localhost podman[79907]: 2025-12-05 08:31:32.407637011 +0000 UTC m=+0.284314817 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:31:32 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:31:33 localhost sshd[79936]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:31:35 localhost sshd[79938]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:31:37 localhost sshd[79940]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:31:40 localhost sshd[79962]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:31:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:31:47 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:31:47 localhost recover_tripleo_nova_virtqemud[79996]: 61294 Dec 5 03:31:47 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:31:47 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 5 03:31:47 localhost podman[79989]: 2025-12-05 08:31:47.204842878 +0000 UTC m=+0.091149846 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, version=17.1.12, distribution-scope=public) Dec 5 03:31:47 localhost podman[79989]: 2025-12-05 08:31:47.26058216 +0000 UTC m=+0.146889038 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, release=1761123044, container_name=nova_compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:31:47 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:31:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:31:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:31:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:31:49 localhost systemd[1]: tmp-crun.dw0O0l.mount: Deactivated successfully. Dec 5 03:31:49 localhost podman[80018]: 2025-12-05 08:31:49.211288641 +0000 UTC m=+0.090907099 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=) Dec 5 03:31:49 localhost podman[80018]: 2025-12-05 08:31:49.241757296 +0000 UTC m=+0.121375654 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, architecture=x86_64, container_name=ceilometer_agent_compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, version=17.1.12) Dec 5 03:31:49 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:31:49 localhost podman[80019]: 2025-12-05 08:31:49.248885693 +0000 UTC m=+0.128908233 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T22:49:32Z) Dec 5 03:31:49 localhost podman[80019]: 2025-12-05 08:31:49.338555834 +0000 UTC m=+0.218578384 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, 
url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:31:49 localhost podman[80020]: 2025-12-05 08:31:49.346312219 +0000 UTC m=+0.217597624 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:31:49 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:31:49 localhost podman[80020]: 2025-12-05 08:31:49.368469071 +0000 UTC m=+0.239754466 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:31:49 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:31:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:31:50 localhost podman[80090]: 2025-12-05 08:31:50.19660699 +0000 UTC m=+0.085022481 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container) Dec 5 03:31:50 localhost systemd[1]: tmp-crun.polWZL.mount: Deactivated successfully. 
Dec 5 03:31:50 localhost podman[80090]: 2025-12-05 08:31:50.609820039 +0000 UTC m=+0.498235560 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Dec 5 03:31:50 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:31:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:31:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:31:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:31:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:31:55 localhost systemd[1]: tmp-crun.ke5G4G.mount: Deactivated successfully. 
Dec 5 03:31:55 localhost podman[80114]: 2025-12-05 08:31:55.200505618 +0000 UTC m=+0.087762724 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1) Dec 5 03:31:55 localhost podman[80115]: 2025-12-05 08:31:55.252621239 +0000 UTC m=+0.132494732 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step3, 
distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=) Dec 5 03:31:55 localhost podman[80122]: 2025-12-05 08:31:55.227529878 +0000 UTC m=+0.099131379 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=) Dec 5 03:31:55 localhost podman[80114]: 2025-12-05 08:31:55.285825397 +0000 UTC m=+0.173082493 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:31:55 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:31:55 localhost podman[80122]: 2025-12-05 08:31:55.357065899 +0000 UTC m=+0.228667400 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vendor=Red Hat, Inc., container_name=ovn_controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:31:55 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
Dec 5 03:31:55 localhost podman[80115]: 2025-12-05 08:31:55.385972075 +0000 UTC m=+0.265845617 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044, vcs-type=git, name=rhosp17/openstack-iscsid) Dec 5 03:31:55 localhost podman[80116]: 2025-12-05 08:31:55.360502973 +0000 UTC m=+0.234596359 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:31:55 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:31:55 localhost podman[80116]: 2025-12-05 08:31:55.439963074 +0000 UTC m=+0.314056430 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, name=rhosp17/openstack-collectd, distribution-scope=public, vcs-type=git) Dec 5 03:31:55 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:31:58 localhost python3[80211]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 03:32:02 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 03:32:02 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 5 03:32:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:32:03 localhost podman[80339]: 2025-12-05 08:32:03.198870599 +0000 UTC m=+0.083029850 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, managed_by=tripleo_ansible, config_id=tripleo_step1, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team)
Dec 5 03:32:03 localhost podman[80339]: 2025-12-05 08:32:03.384762431 +0000 UTC m=+0.268921642 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 03:32:03 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:32:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 03:32:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4466 writes, 20K keys, 4466 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4466 writes, 463 syncs, 9.65 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 03:32:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 03:32:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 5166 writes, 22K keys, 5166 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5166 writes, 594 syncs, 8.70 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 03:32:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.
Dec 5 03:32:18 localhost podman[80430]: 2025-12-05 08:32:18.202479788 +0000 UTC m=+0.080085720 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 5 03:32:18 localhost podman[80430]: 2025-12-05 08:32:18.239717528 +0000 UTC m=+0.117323470 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, release=1761123044, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 03:32:18 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully.
Dec 5 03:32:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.
Dec 5 03:32:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 03:32:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
Dec 5 03:32:20 localhost podman[80456]: 2025-12-05 08:32:20.182361696 +0000 UTC m=+0.076836782 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, tcib_managed=true, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container)
Dec 5 03:32:20 localhost podman[80457]: 2025-12-05 08:32:20.197435543 +0000 UTC m=+0.082851425 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, release=1761123044)
Dec 5 03:32:20 localhost podman[80457]: 2025-12-05 08:32:20.208441087 +0000 UTC m=+0.093856939 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, container_name=logrotate_crond)
Dec 5 03:32:20 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully.
Dec 5 03:32:20 localhost podman[80456]: 2025-12-05 08:32:20.240745088 +0000 UTC m=+0.135220194 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute)
Dec 5 03:32:20 localhost podman[80458]: 2025-12-05 08:32:20.255297539 +0000 UTC m=+0.139192175 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, architecture=x86_64, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4)
Dec 5 03:32:20 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully.
Dec 5 03:32:20 localhost podman[80458]: 2025-12-05 08:32:20.299743618 +0000 UTC m=+0.183638254 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 03:32:20 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully.
Dec 5 03:32:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.
Dec 5 03:32:21 localhost systemd[1]: tmp-crun.1Z0Ok1.mount: Deactivated successfully.
Dec 5 03:32:21 localhost podman[80525]: 2025-12-05 08:32:21.180110221 +0000 UTC m=+0.072177011 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 03:32:21 localhost podman[80525]: 2025-12-05 08:32:21.519756467 +0000 UTC m=+0.411823227 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 5 03:32:21 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully.
Dec 5 03:32:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.
Dec 5 03:32:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.
Dec 5 03:32:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.
Dec 5 03:32:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.
Dec 5 03:32:26 localhost podman[80629]: 2025-12-05 08:32:26.19820526 +0000 UTC m=+0.081747332 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, version=17.1.12, build-date=2025-11-18T22:51:28Z, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, distribution-scope=public, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64)
Dec 5 03:32:26 localhost podman[80629]: 2025-12-05 08:32:26.208345998 +0000 UTC m=+0.091888120 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=collectd, batch=17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 5 03:32:26 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully.
Dec 5 03:32:26 localhost podman[80628]: 2025-12-05 08:32:26.255196509 +0000 UTC m=+0.138217726 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=iscsid, name=rhosp17/openstack-iscsid, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc.) 
Dec 5 03:32:26 localhost podman[80627]: 2025-12-05 08:32:26.312190059 +0000 UTC m=+0.198182695 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, release=1761123044, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 5 03:32:26 localhost podman[80628]: 2025-12-05 08:32:26.338444215 +0000 UTC m=+0.221465441 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, container_name=iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 5 03:32:26 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully.
Dec 5 03:32:26 localhost podman[80627]: 2025-12-05 08:32:26.352648577 +0000 UTC m=+0.238641253 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.12)
Dec 5 03:32:26 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully.
Dec 5 03:32:26 localhost podman[80630]: 2025-12-05 08:32:26.28947926 +0000 UTC m=+0.169192965 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc.) 
Dec 5 03:32:26 localhost podman[80630]: 2025-12-05 08:32:26.420706372 +0000 UTC m=+0.300420077 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4)
Dec 5 03:32:26 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully.
Dec 5 03:32:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:32:34 localhost podman[80710]: 2025-12-05 08:32:34.201297005 +0000 UTC m=+0.087147295 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Dec 5 03:32:34 localhost podman[80710]: 2025-12-05 08:32:34.388080583 +0000 UTC m=+0.273930753 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 5 03:32:34 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:32:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.
Dec 5 03:32:49 localhost podman[80783]: 2025-12-05 08:32:49.20845707 +0000 UTC m=+0.091883609 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container) Dec 5 03:32:49 localhost podman[80783]: 2025-12-05 08:32:49.242507463 +0000 UTC m=+0.125933982 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, architecture=x86_64, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container) Dec 5 03:32:49 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:32:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:32:51 localhost podman[80807]: 2025-12-05 08:32:51.206949903 +0000 UTC m=+0.093217800 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, vcs-type=git, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public) Dec 5 03:32:51 localhost podman[80807]: 2025-12-05 08:32:51.237954394 +0000 UTC m=+0.124222331 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:32:51 localhost systemd[1]: tmp-crun.CNmwoN.mount: Deactivated successfully. Dec 5 03:32:51 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:32:51 localhost podman[80808]: 2025-12-05 08:32:51.263753966 +0000 UTC m=+0.148792306 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:32:51 localhost podman[80808]: 2025-12-05 08:32:51.272486001 +0000 UTC m=+0.157524361 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:32:51 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:32:51 localhost podman[80809]: 2025-12-05 08:32:51.367923537 +0000 UTC m=+0.247031337 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, 
vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 5 03:32:51 localhost podman[80809]: 2025-12-05 08:32:51.422136152 +0000 UTC m=+0.301243922 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi) Dec 5 03:32:51 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 03:32:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:32:52 localhost podman[80875]: 2025-12-05 08:32:52.17938575 +0000 UTC m=+0.064066785 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:32:52 localhost podman[80875]: 2025-12-05 08:32:52.548799079 +0000 UTC m=+0.433480124 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Dec 5 03:32:52 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:32:55 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:32:55 localhost recover_tripleo_nova_virtqemud[80915]: 61294 Dec 5 03:32:55 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:32:55 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:32:55 localhost python3[80913]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 03:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:32:57 localhost podman[80920]: 2025-12-05 08:32:57.185811115 +0000 UTC m=+0.073798840 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3) Dec 5 03:32:57 localhost podman[80920]: 2025-12-05 08:32:57.192124526 +0000 UTC m=+0.080112271 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:32:57 localhost systemd[1]: tmp-crun.eHzYQI.mount: Deactivated successfully. Dec 5 03:32:57 localhost podman[80922]: 2025-12-05 08:32:57.205357698 +0000 UTC m=+0.085953920 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, config_id=tripleo_step4) Dec 5 03:32:57 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:32:57 localhost podman[80921]: 2025-12-05 08:32:57.237921715 +0000 UTC m=+0.119589249 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=collectd, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:32:57 localhost podman[80919]: 2025-12-05 08:32:57.292374598 +0000 UTC m=+0.180414365 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64) Dec 5 03:32:57 localhost podman[80921]: 2025-12-05 08:32:57.320548313 +0000 UTC m=+0.202215807 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:32:57 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:32:57 localhost podman[80922]: 2025-12-05 08:32:57.337729024 +0000 UTC m=+0.218325236 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1) Dec 5 03:32:57 localhost podman[80919]: 2025-12-05 08:32:57.348317596 +0000 UTC m=+0.236357453 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, version=17.1.12, vendor=Red Hat, Inc., 
build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:32:57 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:32:57 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:32:58 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 5 03:32:58 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 5 03:33:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:33:05 localhost podman[81191]: 2025-12-05 08:33:05.198206552 +0000 UTC m=+0.084018771 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:33:05 localhost podman[81191]: 2025-12-05 08:33:05.395001243 +0000 UTC m=+0.280813512 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:33:05 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:33:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:33:20 localhost podman[81218]: 2025-12-05 08:33:20.194978504 +0000 UTC m=+0.082350729 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, distribution-scope=public, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Dec 5 03:33:20 localhost podman[81218]: 2025-12-05 08:33:20.225236523 +0000 UTC m=+0.112608708 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, 
io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, vcs-type=git, architecture=x86_64, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z) Dec 5 03:33:20 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:33:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:33:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:33:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:33:22 localhost systemd[1]: tmp-crun.iqEYco.mount: Deactivated successfully. Dec 5 03:33:22 localhost podman[81244]: 2025-12-05 08:33:22.207478282 +0000 UTC m=+0.093923431 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1761123044) Dec 5 03:33:22 localhost podman[81244]: 2025-12-05 08:33:22.236133342 +0000 UTC m=+0.122578531 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 03:33:22 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:33:22 localhost podman[81245]: 2025-12-05 08:33:22.243720922 +0000 UTC m=+0.127570382 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4) Dec 5 03:33:22 localhost podman[81246]: 2025-12-05 08:33:22.298941897 +0000 UTC m=+0.178753104 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, release=1761123044, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Dec 5 03:33:22 localhost podman[81246]: 2025-12-05 08:33:22.3267272 +0000 UTC m=+0.206538457 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 5 03:33:22 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:33:22 localhost podman[81245]: 2025-12-05 08:33:22.377067978 +0000 UTC m=+0.260917438 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, container_name=logrotate_crond, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4) Dec 5 03:33:22 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 03:33:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:33:23 localhost podman[81313]: 2025-12-05 08:33:23.153405225 +0000 UTC m=+0.049824022 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.buildah.version=1.41.4) Dec 5 03:33:23 localhost podman[81313]: 2025-12-05 08:33:23.498814666 +0000 UTC m=+0.395233503 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12) Dec 5 03:33:23 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:33:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:33:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:33:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:33:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:33:28 localhost systemd[1]: tmp-crun.1Sm2NS.mount: Deactivated successfully. 
Dec 5 03:33:28 localhost podman[81465]: 2025-12-05 08:33:28.196493551 +0000 UTC m=+0.075694977 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:33:28 localhost podman[81463]: 2025-12-05 08:33:28.226375878 +0000 UTC m=+0.109404391 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:33:28 localhost podman[81465]: 2025-12-05 08:33:28.281926753 +0000 UTC m=+0.161128189 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:33:28 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:33:28 localhost podman[81464]: 2025-12-05 08:33:28.249588263 +0000 UTC m=+0.132254545 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, container_name=iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:33:28 localhost podman[81463]: 2025-12-05 08:33:28.311566093 +0000 UTC m=+0.194594556 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:33:28 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:33:28 localhost podman[81464]: 2025-12-05 08:33:28.331526079 +0000 UTC m=+0.214192331 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, release=1761123044, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Dec 5 03:33:28 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:33:28 localhost podman[81466]: 2025-12-05 08:33:28.403562505 +0000 UTC m=+0.277395748 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, container_name=ovn_controller, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:33:28 localhost podman[81466]: 2025-12-05 08:33:28.42349397 +0000 UTC m=+0.297327213 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, tcib_managed=true, 
io.openshift.expose-services=, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1) Dec 5 03:33:28 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:33:31 localhost python3[81560]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Dec 5 03:33:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:33:36 localhost podman[81561]: 2025-12-05 08:33:36.197587855 +0000 UTC m=+0.085678137 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container) Dec 5 03:33:36 localhost podman[81561]: 2025-12-05 08:33:36.425948485 +0000 UTC m=+0.314038777 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, tcib_managed=true, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 5 03:33:36 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:33:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:33:51 localhost podman[81635]: 2025-12-05 08:33:51.203588701 +0000 UTC m=+0.090935578 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, distribution-scope=public, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:33:51 localhost podman[81635]: 2025-12-05 08:33:51.236975075 +0000 UTC m=+0.124321942 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, 
maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vcs-type=git, tcib_managed=true, build-date=2025-11-19T00:36:58Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5) Dec 5 03:33:51 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:33:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
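Every podman event embeds config_data=, the Python-literal dict recorded when tripleo_ansible created the container; the nova_compute entries above show host networking and IPC, privileged mode, raised nofile/memlock ulimits, a dedicated user, and the bind-mount list. Because the blob is a valid Python literal, it can be lifted out of a line and rendered as an approximate podman run command. This is a hedged reconstruction for reading the logs, not the exact invocation tripleo_ansible issues:

    import ast
    import shlex

    def config_data(line: str) -> dict:
        # Scan the balanced {...} after 'config_data=' and parse it as a
        # Python literal (the volume strings contain no braces, so a plain
        # depth counter suffices).
        j = line.index("{", line.index("config_data="))
        depth = 0
        for k in range(j, len(line)):
            depth += {"{": 1, "}": -1}.get(line[k], 0)
            if depth == 0:
                return ast.literal_eval(line[j:k + 1])
        raise ValueError("unbalanced config_data blob")

    def podman_cmdline(cfg: dict) -> str:
        # Rough equivalent of the flags implied by config_data.
        args = ["podman", "run", "--detach"]
        if cfg.get("privileged"):
            args.append("--privileged")
        for ns in ("net", "ipc", "pid", "cgroupns"):
            if ns in cfg:
                args.append(f"--{ns}={cfg[ns]}")
        if "user" in cfg:
            args.append(f"--user={cfg['user']}")
        if "restart" in cfg:
            args.append(f"--restart={cfg['restart']}")
        for u in cfg.get("ulimit", []):
            args.append(f"--ulimit={u}")
        if "healthcheck" in cfg:
            args.append(f"--health-cmd={cfg['healthcheck']['test']}")
        for v in cfg.get("volumes", []):
            args.append(f"--volume={v}")
        args.append(cfg["image"])
        return " ".join(shlex.quote(a) for a in args)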
Dec 5 03:33:53 localhost podman[81663]: 2025-12-05 08:33:53.21433769 +0000 UTC m=+0.090644530 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible) Dec 5 03:33:53 localhost podman[81662]: 2025-12-05 08:33:53.268425867 +0000 UTC m=+0.148036498 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, container_name=logrotate_crond, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, distribution-scope=public, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:33:53 localhost podman[81663]: 2025-12-05 08:33:53.272658388 +0000 UTC m=+0.148965218 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 03:33:53 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:33:53 localhost podman[81661]: 2025-12-05 08:33:53.32263359 +0000 UTC m=+0.204984695 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:33:53 localhost podman[81662]: 2025-12-05 
08:33:53.327177038 +0000 UTC m=+0.206787679 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, tcib_managed=true, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git) Dec 5 03:33:53 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
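The config_id label records the TripleO deployment step that created each container, and the namespace settings in config_data differ per role: logrotate_crond runs with 'net': 'none' and 'pid': 'host' (it needs no network, but presumably must see host PIDs to signal the daemons whose logs it rotates), while ceilometer_agent_ipmi is privileged for host IPMI access. A quick name-to-step inventory, again assuming one event per journal line:

    import re

    NAME = re.compile(r"container_name=([\w-]+)")
    STEP = re.compile(r"config_id=(tripleo_step\d+)")

    def deploy_steps(lines):
        steps = {}
        for line in lines:
            n, s = NAME.search(line), STEP.search(line)
            if n and s:
                steps.setdefault(n.group(1), s.group(1))
        return steps

    # From this log: metrics_qdr -> tripleo_step1; collectd, iscsid ->
    # tripleo_step3; ceilometer agents, logrotate_crond, ovn_* ->
    # tripleo_step4; nova_compute -> tripleo_step5.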
Dec 5 03:33:53 localhost podman[81661]: 2025-12-05 08:33:53.357711425 +0000 UTC m=+0.240062500 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:33:53 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:33:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
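nova_migration_target, whose check starts above, reuses the openstack-nova-compute:17.1 image but under a different role: config_id=tripleo_step4 rather than tripleo_step5, user root, and a read-only mount of the host's /etc/ssh at /host-ssh, consistent with it acting as the SSH endpoint for instance migration. Diffing the two config_data dicts makes the split visible; a small helper under the same parsing assumptions as the sketch above:

    def volume_diff(a: dict, b: dict):
        """Mounts unique to each of two config_data dicts."""
        va, vb = set(a.get("volumes", [])), set(b.get("volumes", []))
        return sorted(va - vb), sorted(vb - va)

    # volume_diff(nova_compute cfg, nova_migration_target cfg) shows /dev,
    # /boot and the iSCSI/Ceph paths only on the compute side, and
    # /etc/ssh:/host-ssh:ro only on the migration target.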
Dec 5 03:33:54 localhost podman[81734]: 2025-12-05 08:33:54.182649963 +0000 UTC m=+0.073207214 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, release=1761123044, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 5 03:33:54 localhost podman[81734]: 2025-12-05 08:33:54.509721419 +0000 UTC m=+0.400278610 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:33:54 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:33:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:33:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:33:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:33:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
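The two podman[81734] entries above bracket a single check: m=+0.073207214 at health_status and m=+0.400278610 at exec_died are monotonic offsets in seconds since that podman process started, so the nova_migration_target healthcheck ran for roughly 0.33 s. A one-function sketch for pulling the duration out of such a pair:

    def monotonic_offset(event: str) -> float:
        # podman appends 'm=+<seconds>' (monotonic offset since the podman
        # process started) to each event timestamp.
        i = event.index("m=+")
        return float(event[i + 3:].split()[0])

    # For the nova_migration_target pair above:
    #   0.400278610 - 0.073207214 = ~0.327 s of healthcheck runtime.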
Dec 5 03:33:59 localhost podman[81759]: 2025-12-05 08:33:59.25690604 +0000 UTC m=+0.139271600 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 5 03:33:59 localhost podman[81767]: 2025-12-05 08:33:59.218483912 +0000 UTC m=+0.090902827 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team) Dec 5 03:33:59 localhost podman[81761]: 2025-12-05 08:33:59.277694208 +0000 UTC m=+0.153017522 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, tcib_managed=true, vcs-type=git) Dec 5 03:33:59 localhost podman[81760]: 2025-12-05 08:33:59.186517573 +0000 UTC m=+0.067910823 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, vcs-type=git, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64) Dec 5 03:33:59 
localhost podman[81759]: 2025-12-05 08:33:59.29864457 +0000 UTC m=+0.181010080 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, architecture=x86_64, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:33:59 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
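The ovn_metadata_agent volume list above mixes three kinds of mount options: plain :ro, SELinux relabels (:z and shared,z), and shared propagation for /run/openvswitch and /run/netns, which keeps network namespaces and sockets created on the host after container start visible inside it. Grouping a config_data volume list by option string, under the same literal-parsing assumption as earlier:

    def mounts_by_option(volumes):
        groups = {}
        for spec in volumes:
            src, _, rest = spec.partition(":")
            dst, _, opts = rest.partition(":")
            groups.setdefault(opts or "rw", []).append(f"{src} -> {dst}")
        return groups

    # For ovn_metadata_agent, groups['shared,z'] includes /run/openvswitch
    # and /var/lib/neutron, and groups['shared'] holds /run/netns.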
Dec 5 03:33:59 localhost podman[81760]: 2025-12-05 08:33:59.320669645 +0000 UTC m=+0.202062915 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container) Dec 5 03:33:59 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:33:59 localhost podman[81761]: 2025-12-05 08:33:59.339482552 +0000 UTC m=+0.214805896 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=collectd) Dec 5 03:33:59 localhost podman[81767]: 2025-12-05 08:33:59.352372376 +0000 UTC m=+0.224791291 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-ovn-controller, tcib_managed=true, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container) Dec 5 03:33:59 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:33:59 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:34:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:34:07 localhost systemd[1]: tmp-crun.Gu5OaL.mount: Deactivated successfully. 
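tmp-crun.Gu5OaL.mount is a transient scratch mount that the crun OCI runtime appears to create while setting up the healthcheck exec session; systemd tracks it as a short-lived mount unit and logs its cleanup separately. When tailing this log for health events, such one-off mount notices can be filtered out (one journal entry per line assumed):

    import re

    MOUNT_NOISE = re.compile(r"tmp-crun\.[^.]+\.mount: Deactivated successfully")

    def without_mount_noise(lines):
        return (l for l in lines if not MOUNT_NOISE.search(l))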
Dec 5 03:34:07 localhost podman[81843]: 2025-12-05 08:34:07.209158281 +0000 UTC m=+0.096208390 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc.) 
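Each event repeats the image's full OCI label set, so the same facts surface under several names: vcs-ref duplicates org.opencontainers.image.revision, and version, batch and konflux.additional-tags agree (17.1.12 / 17.1_20251118.1). A best-effort parser for the parenthesised dump; the split is keyed on ", key=" boundaries so that comma-bearing values such as vendor=Red Hat, Inc. survive intact:

    import re

    def labels(event: str) -> dict:
        body = event[event.index("(") + 1 : event.rindex(")")]
        parts = re.split(r", (?=[\w.-]+=)", body)
        return dict(p.split("=", 1) for p in parts)

    # For the metrics_qdr event above:
    #   labels(ev)["vcs-ref"] == labels(ev)["org.opencontainers.image.revision"]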
Dec 5 03:34:07 localhost podman[81843]: 2025-12-05 08:34:07.417780536 +0000 UTC m=+0.304830635 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, container_name=metrics_qdr, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z) Dec 5 03:34:07 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:34:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
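The timestamps also reveal the schedule: metrics_qdr reported healthy at 08:33:36 and again at 08:34:07 UTC, 31 s apart, a cadence consistent with podman's default --health-interval of 30s plus startup overhead, with each container on its own systemd timer. Extracting per-container check times, under the same one-event-per-line assumption:

    from datetime import datetime

    def check_times(lines, name):
        out = []
        for line in lines:
            if "container health_status" in line and f"name={name}," in line:
                stamp = line.split(" container health_status")[0].split("]: ")[-1]
                out.append(datetime.strptime(stamp.split(".")[0].strip(),
                                             "%Y-%m-%d %H:%M:%S"))
        return out

    # check_times(log, "metrics_qdr") -> [08:33:36, 08:34:07]; successive
    # differences estimate the effective healthcheck interval (~31 s here).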
Dec 5 03:34:22 localhost podman[81875]: 2025-12-05 08:34:22.187628915 +0000 UTC m=+0.078180048 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5)
Dec 5 03:34:22 localhost podman[81875]: 2025-12-05 08:34:22.243688293 +0000 UTC m=+0.134239396 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Dec 5 03:34:22 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully.
Dec 5 03:34:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.
Dec 5 03:34:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 03:34:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
Dec 5 03:34:24 localhost podman[81899]: 2025-12-05 08:34:24.207473592 +0000 UTC m=+0.097669045 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4)
Dec 5 03:34:24 localhost podman[81899]: 2025-12-05 08:34:24.262996723 +0000 UTC m=+0.153192126 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible)
Dec 5 03:34:24 localhost systemd[1]: tmp-crun.3JSQ8F.mount: Deactivated successfully.
Dec 5 03:34:24 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully.
Dec 5 03:34:24 localhost podman[81901]: 2025-12-05 08:34:24.30854026 +0000 UTC m=+0.189200411 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 5 03:34:24 localhost podman[81900]: 2025-12-05 08:34:24.277018633 +0000 UTC m=+0.162147411 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 5 03:34:24 localhost podman[81901]: 2025-12-05 08:34:24.33759936 +0000 UTC m=+0.218259531 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_id=tripleo_step4, vcs-type=git)
Dec 5 03:34:24 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully.
Dec 5 03:34:24 localhost podman[81900]: 2025-12-05 08:34:24.356688276 +0000 UTC m=+0.241817074 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z)
Dec 5 03:34:24 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully.
Dec 5 03:34:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.
Dec 5 03:34:25 localhost podman[81971]: 2025-12-05 08:34:25.1724089 +0000 UTC m=+0.063073313 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 5 03:34:25 localhost podman[81971]: 2025-12-05 08:34:25.511508285 +0000 UTC m=+0.402172718 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-nova-compute, release=1761123044, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z)
Dec 5 03:34:25 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully.
Dec 5 03:34:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.
Dec 5 03:34:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.
Dec 5 03:34:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.
Dec 5 03:34:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.
Dec 5 03:34:30 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 5 03:34:30 localhost recover_tripleo_nova_virtqemud[82098]: 61294
Dec 5 03:34:30 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 5 03:34:30 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 5 03:34:30 localhost systemd[1]: tmp-crun.dTUAcq.mount: Deactivated successfully.
Dec 5 03:34:30 localhost podman[82072]: 2025-12-05 08:34:30.235707662 +0000 UTC m=+0.123870738 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12)
Dec 5 03:34:30 localhost podman[82074]: 2025-12-05 08:34:30.185536195 +0000 UTC m=+0.073139104 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 5 03:34:30 localhost podman[82075]: 2025-12-05 08:34:30.213745159 +0000 UTC m=+0.095997854 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., tcib_managed=true)
Dec 5 03:34:30 localhost podman[82072]: 2025-12-05 08:34:30.267774445 +0000 UTC m=+0.155937541 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, tcib_managed=true, vcs-type=git)
Dec 5 03:34:30 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully.
Dec 5 03:34:30 localhost podman[82075]: 2025-12-05 08:34:30.34786333 +0000 UTC m=+0.230116035 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, container_name=ovn_controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team)
Dec 5 03:34:30 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully.
Dec 5 03:34:30 localhost podman[82074]: 2025-12-05 08:34:30.372600348 +0000 UTC m=+0.260203207 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, com.redhat.component=openstack-collectd-container, container_name=collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 03:34:30 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully.
Dec 5 03:34:30 localhost podman[82073]: 2025-12-05 08:34:30.347732826 +0000 UTC m=+0.235068016 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044)
Dec 5 03:34:30 localhost podman[82073]: 2025-12-05 08:34:30.429794472 +0000 UTC m=+0.317129602 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 5 03:34:30 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully.
Dec 5 03:34:31 localhost systemd[1]: session-34.scope: Deactivated successfully.
Dec 5 03:34:31 localhost systemd[1]: session-34.scope: Consumed 19.967s CPU time.
Dec 5 03:34:31 localhost systemd-logind[760]: Session 34 logged out. Waiting for processes to exit.
Dec 5 03:34:31 localhost systemd-logind[760]: Removed session 34.
Dec 5 03:34:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:34:38 localhost podman[82162]: 2025-12-05 08:34:38.179909655 +0000 UTC m=+0.069967135 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com)
Dec 5 03:34:38 localhost podman[82162]: 2025-12-05 08:34:38.404732427 +0000 UTC m=+0.294789907 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 5 03:34:38 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:34:44 localhost sshd[82237]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 03:34:44 localhost sshd[82238]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 03:34:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.
Dec 5 03:34:53 localhost podman[82239]: 2025-12-05 08:34:53.199615383 +0000 UTC m=+0.086580646 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute) Dec 5 03:34:53 localhost podman[82239]: 2025-12-05 08:34:53.234983066 +0000 UTC m=+0.121948329 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, release=1761123044, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 5 03:34:53 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:34:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:34:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:34:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
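[Annotation] Each four-event group here is one health-check cycle: systemd starts a transient unit named after the full container ID, the unit runs "podman healthcheck run <id>", podman executes the test command declared in the container's config_data (logged as the health_status event, then exec_died for the completed exec session), and the unit reports "Deactivated successfully". The same state can be interrogated on demand; a sketch, noting that the inspect field name varies across podman releases:

    # run the configured test once by hand; exit status 0 means healthy
    $ podman healthcheck run nova_compute
    # query the last recorded health state
    # (older podman spells the field .State.Healthcheck.Status instead)
    $ podman inspect --format '{{.State.Health.Status}}' nova_compute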
Dec 5 03:34:55 localhost podman[82265]: 2025-12-05 08:34:55.20123625 +0000 UTC m=+0.080089725 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc.) 
Dec 5 03:34:55 localhost podman[82264]: 2025-12-05 08:34:55.261933171 +0000 UTC m=+0.143983284 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, architecture=x86_64, version=17.1.12) Dec 5 03:34:55 localhost podman[82265]: 2025-12-05 08:34:55.289551838 +0000 UTC m=+0.168405343 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
build-date=2025-11-18T22:49:32Z, release=1761123044, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12) Dec 5 03:34:55 localhost podman[82266]: 2025-12-05 08:34:55.319135224 +0000 UTC m=+0.193131191 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:34:55 localhost podman[82264]: 2025-12-05 08:34:55.323746156 +0000 UTC m=+0.205796289 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-type=git) Dec 5 03:34:55 localhost systemd[1]: 
21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:34:55 localhost podman[82266]: 2025-12-05 08:34:55.354681564 +0000 UTC m=+0.228677571 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:34:55 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:34:55 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:34:55 localhost sshd[82338]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:34:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:34:56 localhost systemd[1]: tmp-crun.4aSSAS.mount: Deactivated successfully. 
Dec 5 03:34:56 localhost podman[82340]: 2025-12-05 08:34:56.222114855 +0000 UTC m=+0.103647268 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) 
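[Annotation] The test command differs per service and is visible in each entry's config_data: '/openstack/healthcheck 5672' for nova_compute (5672 is the conventional AMQP port, so this presumably verifies the message-bus connection) versus '/usr/share/openstack-tripleo-common/healthcheck/cron' for logrotate_crond. The scripts can be exercised directly, bypassing the timer; the paths below are taken verbatim from the config_data shown above:

    # execute a container's declared health script by hand and show its exit status
    $ podman exec nova_compute /openstack/healthcheck 5672; echo "exit=$?"
    $ podman exec logrotate_crond /usr/share/openstack-tripleo-common/healthcheck/cron; echo "exit=$?"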
Dec 5 03:34:56 localhost podman[82340]: 2025-12-05 08:34:56.606532818 +0000 UTC m=+0.488065221 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12) Dec 5 03:34:56 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:35:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:35:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:35:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:35:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:35:01 localhost systemd[1]: tmp-crun.Wq3I5l.mount: Deactivated successfully. 
Dec 5 03:35:01 localhost podman[82363]: 2025-12-05 08:35:01.17363914 +0000 UTC m=+0.064809378 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:35:01 localhost podman[82364]: 2025-12-05 08:35:01.230810903 +0000 UTC m=+0.117720810 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, url=https://www.redhat.com, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, io.buildah.version=1.41.4) Dec 5 03:35:01 localhost podman[82364]: 2025-12-05 08:35:01.239603472 +0000 UTC m=+0.126513329 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 5 03:35:01 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:35:01 localhost podman[82363]: 2025-12-05 08:35:01.25681558 +0000 UTC m=+0.147985818 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, release=1761123044, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:35:01 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:35:01 localhost podman[82370]: 2025-12-05 08:35:01.211989336 +0000 UTC m=+0.093339982 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_id=tripleo_step3, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, release=1761123044, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, summary=Red Hat 
OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public) Dec 5 03:35:01 localhost podman[82375]: 2025-12-05 08:35:01.319548542 +0000 UTC m=+0.194949226 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044) Dec 5 03:35:01 localhost podman[82370]: 2025-12-05 08:35:01.345635973 +0000 UTC m=+0.226986569 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=collectd, name=rhosp17/openstack-collectd, distribution-scope=public, 
build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1) Dec 5 03:35:01 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:35:01 localhost podman[82375]: 2025-12-05 08:35:01.372711962 +0000 UTC m=+0.248112696 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T23:34:05Z, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Dec 5 03:35:01 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:35:02 localhost systemd[1]: tmp-crun.lx3QBK.mount: Deactivated successfully. Dec 5 03:35:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:35:09 localhost systemd[1]: tmp-crun.bkVcKB.mount: Deactivated successfully. Dec 5 03:35:09 localhost podman[82444]: 2025-12-05 08:35:09.193217875 +0000 UTC m=+0.081448278 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc.) 
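[Annotation] Every health_status/exec_died event re-prints the container's entire label set (including the full config_data dict), which is what makes these journal entries so large. The same metadata can be pulled straight from podman instead of scraping the log; a sketch using the label names seen above, which should work with standard Go-template syntax:

    # recover a single label (here the TripleO-rendered config) without the journal noise
    $ podman inspect --format '{{ index .Config.Labels "config_data" }}' metrics_qdr
    # or list container name and deployment step for everything running
    $ podman ps --format '{{.Names}} {{.Labels.config_id}}'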
Dec 5 03:35:09 localhost podman[82444]: 2025-12-05 08:35:09.408540026 +0000 UTC m=+0.296770409 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, release=1761123044, container_name=metrics_qdr, vendor=Red Hat, Inc.) Dec 5 03:35:09 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:35:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:35:24 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:35:24 localhost recover_tripleo_nova_virtqemud[82479]: 61294 Dec 5 03:35:24 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:35:24 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
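[Annotation] The "Check and recover tripleo_nova_virtqemud" block is a separate watchdog from the container healthchecks: tripleo_nova_virtqemud_recover.service runs, logs the number 61294 (presumably the PID of the virtqemud process it verified), and exits cleanly. It is apparently fired on a schedule rather than by a container event; to see how it is wired, assuming standard systemd tooling on the host:

    # show the recovery unit's definition
    $ systemctl cat tripleo_nova_virtqemud_recover.service
    # and whatever timer triggers it, if one exists under this name
    $ systemctl list-timers 'tripleo_nova*'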
Dec 5 03:35:24 localhost podman[82473]: 2025-12-05 08:35:24.190851126 +0000 UTC m=+0.082171591 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z) Dec 5 03:35:24 localhost podman[82473]: 2025-12-05 08:35:24.246704317 +0000 UTC m=+0.138024732 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, release=1761123044) Dec 5 03:35:24 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:35:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:35:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:35:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:35:26 localhost systemd[1]: tmp-crun.xrQ1po.mount: Deactivated successfully. Dec 5 03:35:26 localhost podman[82501]: 2025-12-05 08:35:26.204166902 +0000 UTC m=+0.086169673 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 5 03:35:26 localhost podman[82505]: 2025-12-05 08:35:26.268358689 +0000 UTC m=+0.146234763 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=) Dec 5 03:35:26 localhost podman[82500]: 2025-12-05 08:35:26.178941078 +0000 UTC m=+0.068210491 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 5 03:35:26 localhost podman[82501]: 2025-12-05 08:35:26.289809807 +0000 UTC m=+0.171812618 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.12, vcs-type=git, release=1761123044, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4) Dec 5 03:35:26 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 03:35:26 localhost podman[82500]: 2025-12-05 08:35:26.312846793 +0000 UTC m=+0.202116216 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:35:26 localhost podman[82505]: 2025-12-05 08:35:26.321755486 +0000 UTC m=+0.199631550 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:35:26 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:35:26 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:35:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:35:27 localhost podman[82575]: 2025-12-05 08:35:27.175812307 +0000 UTC m=+0.067161010 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, container_name=nova_migration_target, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Dec 5 03:35:27 localhost podman[82575]: 2025-12-05 08:35:27.507861446 +0000 UTC m=+0.399210139 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:35:27 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:35:29 localhost systemd[1]: tmp-crun.1dwuJB.mount: Deactivated successfully. 
Dec 5 03:35:29 localhost podman[82701]: 2025-12-05 08:35:29.050690229 +0000 UTC m=+0.059872326 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, release=1763362218, distribution-scope=public, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, ceph=True) Dec 5 03:35:29 localhost podman[82701]: 2025-12-05 08:35:29.151852509 +0000 UTC m=+0.161034606 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, release=1763362218, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, name=rhceph, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, version=7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.expose-services=, description=Red Hat Ceph Storage 7) Dec 5 03:35:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:35:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:35:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:35:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:35:32 localhost podman[82967]: 2025-12-05 08:35:32.25592825 +0000 UTC m=+0.131121059 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team) Dec 5 03:35:32 localhost systemd[1]: tmp-crun.VQEtlx.mount: Deactivated successfully. 
Dec 5 03:35:32 localhost podman[82968]: 2025-12-05 08:35:32.278613737 +0000 UTC m=+0.151657291 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, vcs-type=git, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:35:32 localhost podman[82970]: 2025-12-05 08:35:32.233732611 +0000 UTC m=+0.099030897 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, version=17.1.12) Dec 5 03:35:32 localhost podman[82968]: 2025-12-05 08:35:32.289470279 +0000 UTC m=+0.162513853 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z) Dec 5 03:35:32 localhost podman[82970]: 2025-12-05 08:35:32.314016742 +0000 UTC m=+0.179315028 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, container_name=ovn_controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4) Dec 5 03:35:32 localhost podman[82969]: 2025-12-05 08:35:32.32045575 +0000 UTC m=+0.188703077 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=) Dec 5 03:35:32 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
Dec 5 03:35:32 localhost podman[82967]: 2025-12-05 08:35:32.32404953 +0000 UTC m=+0.199242419 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:35:32 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:35:32 localhost podman[82969]: 2025-12-05 08:35:32.335421028 +0000 UTC m=+0.203668305 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:35:32 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:35:32 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:35:33 localhost systemd[1]: tmp-crun.ti2ivV.mount: Deactivated successfully. Dec 5 03:35:36 localhost systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring. Dec 5 03:35:36 localhost systemd[1]: Created slice User Slice of UID 0. Dec 5 03:35:36 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Dec 5 03:35:36 localhost systemd[1]: Finished User Runtime Directory /run/user/0. 
Dec 5 03:35:36 localhost systemd[1]: Starting User Manager for UID 0... Dec 5 03:35:36 localhost systemd[83304]: Queued start job for default target Main User Target. Dec 5 03:35:36 localhost systemd[83304]: Created slice User Application Slice. Dec 5 03:35:36 localhost systemd[83304]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 5 03:35:36 localhost systemd[83304]: Started Daily Cleanup of User's Temporary Directories. Dec 5 03:35:36 localhost systemd[83304]: Reached target Paths. Dec 5 03:35:36 localhost systemd[83304]: Reached target Timers. Dec 5 03:35:36 localhost systemd[83304]: Starting D-Bus User Message Bus Socket... Dec 5 03:35:36 localhost systemd[83304]: Starting Create User's Volatile Files and Directories... Dec 5 03:35:36 localhost systemd[83304]: Finished Create User's Volatile Files and Directories. Dec 5 03:35:36 localhost systemd[83304]: Listening on D-Bus User Message Bus Socket. Dec 5 03:35:36 localhost systemd[83304]: Reached target Sockets. Dec 5 03:35:36 localhost systemd[83304]: Reached target Basic System. Dec 5 03:35:36 localhost systemd[83304]: Reached target Main User Target. Dec 5 03:35:36 localhost systemd[83304]: Startup finished in 159ms. Dec 5 03:35:36 localhost systemd[1]: Started User Manager for UID 0. Dec 5 03:35:37 localhost systemd[1]: Started Session c11 of User root. Dec 5 03:35:38 localhost kernel: tun: Universal TUN/TAP device driver, 1.6 Dec 5 03:35:38 localhost kernel: device tapc2f95d81-23 entered promiscuous mode Dec 5 03:35:38 localhost NetworkManager[5960]: [1764923738.1498] manager: (tapc2f95d81-23): new Tun device (/org/freedesktop/NetworkManager/Devices/13) Dec 5 03:35:38 localhost systemd-udevd[83339]: Network interface NamePolicy= disabled on kernel command line. Dec 5 03:35:38 localhost NetworkManager[5960]: [1764923738.1808] device (tapc2f95d81-23): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 5 03:35:38 localhost NetworkManager[5960]: [1764923738.1819] device (tapc2f95d81-23): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 5 03:35:38 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 5 03:35:38 localhost systemd[1]: Starting Virtual Machine and Container Registration Service... Dec 5 03:35:38 localhost systemd[1]: Started Virtual Machine and Container Registration Service. Dec 5 03:35:38 localhost systemd-machined[83348]: New machine qemu-1-instance-00000002. Dec 5 03:35:38 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000002. Dec 5 03:35:38 localhost NetworkManager[5960]: [1764923738.4769] manager: (tap86f5c13f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/14) Dec 5 03:35:38 localhost systemd-udevd[83337]: Network interface NamePolicy= disabled on kernel command line. 
Dec 5 03:35:38 localhost NetworkManager[5960]: [1764923738.5251] device (tap86f5c13f-30): carrier: link connected Dec 5 03:35:38 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap86f5c13f-31: link becomes ready Dec 5 03:35:38 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap86f5c13f-30: link becomes ready Dec 5 03:35:38 localhost kernel: device tap86f5c13f-30 entered promiscuous mode Dec 5 03:35:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:35:40 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Dec 5 03:35:40 localhost systemd[1]: tmp-crun.yB7jvn.mount: Deactivated successfully. Dec 5 03:35:40 localhost podman[83445]: 2025-12-05 08:35:40.225089669 +0000 UTC m=+0.105941079 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:35:40 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. 
Dec 5 03:35:40 localhost podman[83445]: 2025-12-05 08:35:40.44463092 +0000 UTC m=+0.325482330 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step1, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:35:40 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:35:40 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged. Dec 5 03:35:40 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service. 
Dec 5 03:35:40 localhost podman[83506]: 2025-12-05 08:35:40.644549838 +0000 UTC m=+0.071900176 container create 2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4) Dec 5 03:35:40 localhost systemd[1]: Started libpod-conmon-2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7.scope. Dec 5 03:35:40 localhost podman[83506]: 2025-12-05 08:35:40.603969723 +0000 UTC m=+0.031320131 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 5 03:35:40 localhost systemd[1]: Started libcrun container. 
Dec 5 03:35:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b4fea30ca10e6234bb45ca683bec863e17e57041e353e748a5745e23567836/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 03:35:40 localhost podman[83506]: 2025-12-05 08:35:40.722438365 +0000 UTC m=+0.149788733 container init 2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:35:40 localhost podman[83506]: 2025-12-05 08:35:40.73238491 +0000 UTC m=+0.159735248 container start 2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Dec 5 03:35:41 localhost setroubleshoot[83446]: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file max_map_count. 
For complete SELinux messages run: sealert -l 3c8dc1e6-9e80-435a-812f-bbc12a97dc29
Dec 5 03:35:41 localhost setroubleshoot[83446]: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file max_map_count.

***** Plugin qemu_file_image (98.8 confidence) suggests *******************

If max_map_count is a virtualization target
Then you need to change the label on max_map_count'
Do
# semanage fcontext -a -t virt_image_t 'max_map_count'
# restorecon -v 'max_map_count'

***** Plugin catchall (2.13 confidence) suggests **************************

If you believe that qemu-kvm should be allowed read access on the max_map_count file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'qemu-kvm' --raw | audit2allow -M my-qemukvm
# semodule -X 300 -i my-qemukvm.pp

Dec 5 03:35:49 localhost snmpd[66746]: empty variable list in _query
Dec 5 03:35:49 localhost snmpd[66746]: empty variable list in _query
Dec 5 03:35:49 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 5 03:35:50 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 5 03:35:51 localhost systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 5 03:35:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.
Dec 5 03:35:55 localhost systemd[1]: tmp-crun.5MQSKD.mount: Deactivated successfully.
Dec 5 03:35:55 localhost podman[83581]: 2025-12-05 08:35:55.228552359 +0000 UTC m=+0.106887618 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro',
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step5, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:35:55 localhost podman[83581]: 2025-12-05 08:35:55.254872145 +0000 UTC m=+0.133207414 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:35:55 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:35:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:35:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:35:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:35:57 localhost podman[83607]: 2025-12-05 08:35:57.203582282 +0000 UTC m=+0.087837753 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute) Dec 5 03:35:57 localhost podman[83608]: 2025-12-05 08:35:57.217532439 +0000 UTC m=+0.097110678 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond) Dec 5 03:35:57 localhost podman[83608]: 2025-12-05 08:35:57.226183044 +0000 UTC m=+0.105761273 container exec_died 
808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, version=17.1.12, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4) Dec 5 03:35:57 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 03:35:57 localhost podman[83607]: 2025-12-05 08:35:57.241243296 +0000 UTC m=+0.125498767 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 5 03:35:57 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:35:57 localhost podman[83609]: 2025-12-05 08:35:57.326010415 +0000 UTC m=+0.202953823 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:35:57 localhost podman[83609]: 2025-12-05 08:35:57.380641429 +0000 UTC m=+0.257584827 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public)
Dec 5 03:35:57 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully.
Dec 5 03:35:57 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44196 [05/Dec/2025:08:35:56.370] listener listener/metadata 0/0/0/1156/1156 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1"
Dec 5 03:35:57 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44200 [05/Dec/2025:08:35:57.627] listener listener/metadata 0/0/0/14/14 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-keys HTTP/1.1"
Dec 5 03:35:57 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44202 [05/Dec/2025:08:35:57.699] listener listener/metadata 0/0/0/14/14 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1"
Dec 5 03:35:57 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44212 [05/Dec/2025:08:35:57.763] listener listener/metadata 0/0/0/13/13 200 120 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1"
Dec 5 03:35:57 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44224 [05/Dec/2025:08:35:57.817] listener listener/metadata 0/0/0/12/12 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-type HTTP/1.1"
Dec 5 03:35:57 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44236 [05/Dec/2025:08:35:57.873] listener listener/metadata 0/0/0/13/13 200 133 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1"
Dec 5 03:35:57 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44250 [05/Dec/2025:08:35:57.960] listener listener/metadata 0/0/0/14/14 200 134 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1"
Dec 5 03:35:58 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44258 [05/Dec/2025:08:35:58.034] listener listener/metadata 0/0/0/14/14 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/hostname HTTP/1.1"
Dec 5 03:35:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.
Dec 5 03:35:58 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44268 [05/Dec/2025:08:35:58.106] listener listener/metadata 0/0/0/15/15 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-hostname HTTP/1.1"
Dec 5 03:35:58 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44270 [05/Dec/2025:08:35:58.212] listener listener/metadata 0/0/0/17/17 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/user-data HTTP/1.1"
Dec 5 03:35:58 localhost systemd[1]: tmp-crun.NWRViG.mount: Deactivated successfully.
Dec 5 03:35:58 localhost podman[83682]: 2025-12-05 08:35:58.235541525 +0000 UTC m=+0.125959152 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z,
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:35:58 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44282 [05/Dec/2025:08:35:58.275] listener listener/metadata 0/0/0/13/13 200 139 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1" Dec 5 03:35:58 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44298 [05/Dec/2025:08:35:58.318] listener listener/metadata 0/0/0/14/14 200 122 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" Dec 5 03:35:58 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44302 [05/Dec/2025:08:35:58.361] listener listener/metadata 0/0/0/13/13 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1" Dec 5 03:35:58 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44306 [05/Dec/2025:08:35:58.409] listener listener/metadata 0/0/0/13/13 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" Dec 5 03:35:58 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44312 [05/Dec/2025:08:35:58.467] listener listener/metadata 0/0/0/13/13 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-hostname HTTP/1.1" Dec 5 03:35:58 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[83528]: 192.168.0.214:44324 [05/Dec/2025:08:35:58.521] listener listener/metadata 0/0/0/13/13 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" Dec 5 03:35:58 localhost podman[83682]: 2025-12-05 08:35:58.604702972 +0000 UTC m=+0.495120609 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, container_name=nova_migration_target, distribution-scope=public, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:35:58 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:36:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:36:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:36:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:36:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:36:03 localhost systemd[1]: tmp-crun.NZiF0c.mount: Deactivated successfully. Dec 5 03:36:03 localhost podman[83706]: 2025-12-05 08:36:03.244895423 +0000 UTC m=+0.125961333 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, container_name=iscsid) Dec 5 03:36:03 localhost podman[83718]: 2025-12-05 08:36:03.298235789 +0000 UTC m=+0.167821566 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, release=1761123044, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, version=17.1.12, container_name=ovn_controller, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:36:03 localhost podman[83705]: 2025-12-05 08:36:03.249776402 +0000 UTC m=+0.132686518 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:36:03 localhost podman[83718]: 2025-12-05 08:36:03.319475309 +0000 UTC m=+0.189061126 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, version=17.1.12, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-type=git) Dec 5 03:36:03 localhost podman[83706]: 2025-12-05 08:36:03.326521006 +0000 UTC m=+0.207586916 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, container_name=iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, architecture=x86_64, 
io.openshift.expose-services=) Dec 5 03:36:03 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:36:03 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:36:03 localhost podman[83707]: 2025-12-05 08:36:03.280303368 +0000 UTC m=+0.153499586 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git) Dec 5 03:36:03 localhost podman[83707]: 2025-12-05 08:36:03.413739169 +0000 UTC m=+0.286935397 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, 
name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:36:03 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:36:03 localhost podman[83705]: 2025-12-05 08:36:03.4330204 +0000 UTC m=+0.315930496 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:36:03 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:36:04 localhost systemd[1]: tmp-crun.g3IjAJ.mount: Deactivated successfully. Dec 5 03:36:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:36:11 localhost systemd[1]: tmp-crun.41Q3zX.mount: Deactivated successfully. 
Dec 5 03:36:11 localhost podman[83783]: 2025-12-05 08:36:11.204467858 +0000 UTC m=+0.093084994 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, config_id=tripleo_step1, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4) Dec 5 03:36:11 localhost podman[83783]: 2025-12-05 08:36:11.405674256 +0000 UTC m=+0.294291362 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, 
konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:36:11 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:36:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:36:26 localhost podman[83812]: 2025-12-05 08:36:26.198346702 +0000 UTC m=+0.084523252 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:36:26 localhost podman[83812]: 2025-12-05 08:36:26.227645941 +0000 UTC m=+0.113822431 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Dec 5 03:36:26 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:36:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:36:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:36:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:36:28 localhost systemd[1]: tmp-crun.fO0jA2.mount: Deactivated successfully. Dec 5 03:36:28 localhost systemd[1]: tmp-crun.Nlr8tU.mount: Deactivated successfully. Dec 5 03:36:28 localhost podman[83840]: 2025-12-05 08:36:28.224081479 +0000 UTC m=+0.108394344 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron) Dec 5 03:36:28 localhost podman[83840]: 2025-12-05 08:36:28.254680737 +0000 UTC m=+0.138993582 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, url=https://www.redhat.com, container_name=logrotate_crond, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, release=1761123044, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:36:28 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 03:36:28 localhost podman[83842]: 2025-12-05 08:36:28.274371951 +0000 UTC m=+0.156839029 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z) Dec 5 03:36:28 localhost podman[83839]: 2025-12-05 08:36:28.192858022 +0000 UTC m=+0.087960637 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=) Dec 5 03:36:28 localhost podman[83839]: 2025-12-05 08:36:28.325230239 +0000 UTC m=+0.220332874 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64) Dec 5 03:36:28 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:36:28 localhost podman[83842]: 2025-12-05 08:36:28.375221082 +0000 UTC m=+0.257688160 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, 
build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 5 03:36:28 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:36:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:36:29 localhost systemd[1]: tmp-crun.MVF0Wi.mount: Deactivated successfully. Dec 5 03:36:29 localhost podman[83913]: 2025-12-05 08:36:29.202070928 +0000 UTC m=+0.083712547 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, tcib_managed=true, batch=17.1_20251118.1) Dec 5 03:36:29 localhost podman[83913]: 2025-12-05 08:36:29.571034769 +0000 UTC m=+0.452676468 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, container_name=nova_migration_target, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute) Dec 5 03:36:29 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:36:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:36:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:36:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:36:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:36:34 localhost systemd[1]: tmp-crun.6uEzLd.mount: Deactivated successfully. 
Dec 5 03:36:34 localhost podman[84013]: 2025-12-05 08:36:34.271323411 +0000 UTC m=+0.152317631 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4) Dec 5 03:36:34 localhost podman[84016]: 2025-12-05 08:36:34.235573485 +0000 UTC m=+0.110354324 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 
17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:36:34 localhost podman[84015]: 2025-12-05 08:36:34.306328433 +0000 UTC m=+0.187687434 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.expose-services=, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible) Dec 5 03:36:34 localhost podman[84013]: 2025-12-05 08:36:34.314400031 +0000 UTC m=+0.195394181 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Dec 5 03:36:34 localhost podman[84015]: 2025-12-05 08:36:34.321571641 +0000 UTC m=+0.202930642 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, release=1761123044, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd) Dec 5 03:36:34 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:36:34 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:36:34 localhost podman[84016]: 2025-12-05 08:36:34.372018208 +0000 UTC m=+0.246798987 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:36:34 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
Dec 5 03:36:34 localhost podman[84014]: 2025-12-05 08:36:34.453937479 +0000 UTC m=+0.334164435 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, architecture=x86_64, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 5 03:36:34 localhost podman[84014]: 2025-12-05 08:36:34.465531834 +0000 UTC m=+0.345758790 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:36:34 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:36:35 localhost systemd[1]: tmp-crun.u9fdVn.mount: Deactivated successfully. Dec 5 03:36:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:36:42 localhost podman[84097]: 2025-12-05 08:36:42.176812178 +0000 UTC m=+0.067556442 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Dec 5 03:36:42 localhost podman[84097]: 2025-12-05 08:36:42.401701842 +0000 UTC m=+0.292446136 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git) Dec 5 03:36:42 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:36:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:36:57 localhost systemd[1]: tmp-crun.ZYSTnt.mount: Deactivated successfully. 
Dec 5 03:36:57 localhost podman[84172]: 2025-12-05 08:36:57.203425359 +0000 UTC m=+0.086483862 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, container_name=nova_compute) Dec 5 03:36:57 localhost podman[84172]: 2025-12-05 08:36:57.257087514 +0000 UTC m=+0.140146047 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, 
config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:36:57 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:36:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:36:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:36:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:36:59 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:36:59 localhost recover_tripleo_nova_virtqemud[84218]: 61294 Dec 5 03:36:59 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:36:59 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:36:59 localhost systemd[1]: tmp-crun.o6pYhS.mount: Deactivated successfully. Dec 5 03:36:59 localhost podman[84199]: 2025-12-05 08:36:59.213019001 +0000 UTC m=+0.098169050 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:36:59 localhost podman[84199]: 2025-12-05 08:36:59.24660426 +0000 UTC m=+0.131754319 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, 
konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team) Dec 5 03:36:59 localhost systemd[1]: tmp-crun.rMqJL6.mount: Deactivated successfully. 
Dec 5 03:36:59 localhost podman[84201]: 2025-12-05 08:36:59.260668572 +0000 UTC m=+0.139255570 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vendor=Red Hat, Inc.) Dec 5 03:36:59 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:36:59 localhost podman[84200]: 2025-12-05 08:36:59.310302143 +0000 UTC m=+0.190035086 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:36:59 localhost podman[84201]: 2025-12-05 08:36:59.318839955 +0000 UTC m=+0.197426913 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12) Dec 5 03:36:59 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:36:59 localhost podman[84200]: 2025-12-05 08:36:59.372293543 +0000 UTC m=+0.252026547 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-11-18T22:49:32Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:36:59 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:37:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:37:00 localhost podman[84272]: 2025-12-05 08:37:00.195131968 +0000 UTC m=+0.084173982 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:36:58Z, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:37:00 localhost podman[84272]: 2025-12-05 08:37:00.592767677 +0000 UTC m=+0.481809721 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=nova_migration_target, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4) Dec 5 03:37:00 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:37:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:37:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:37:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:37:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:37:05 localhost systemd[1]: tmp-crun.D2Lgh2.mount: Deactivated successfully. 
Dec 5 03:37:05 localhost podman[84297]: 2025-12-05 08:37:05.246967517 +0000 UTC m=+0.134803093 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-collectd-container, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, container_name=collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 5 03:37:05 localhost podman[84297]: 2025-12-05 08:37:05.256659524 +0000 UTC m=+0.144495190 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, build-date=2025-11-18T22:51:28Z) Dec 5 03:37:05 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:37:05 localhost podman[84296]: 2025-12-05 08:37:05.292020688 +0000 UTC m=+0.181399232 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 5 03:37:05 localhost podman[84295]: 2025-12-05 08:37:05.303942953 +0000 UTC m=+0.193639327 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 5 03:37:05 localhost podman[84298]: 2025-12-05 08:37:05.221345582 +0000 UTC m=+0.105929579 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:37:05 localhost podman[84296]: 2025-12-05 08:37:05.356058531 +0000 UTC m=+0.245437035 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, distribution-scope=public, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-iscsid-container) Dec 5 03:37:05 localhost podman[84298]: 2025-12-05 08:37:05.355638228 +0000 UTC m=+0.240222185 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, 
name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller) Dec 5 03:37:05 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:37:05 localhost podman[84295]: 2025-12-05 08:37:05.372695141 +0000 UTC m=+0.262391495 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:37:05 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:37:05 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:37:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:37:13 localhost podman[84382]: 2025-12-05 08:37:13.208656667 +0000 UTC m=+0.088168094 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vcs-type=git, version=17.1.12, config_id=tripleo_step1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, architecture=x86_64, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container) Dec 5 03:37:13 localhost podman[84382]: 2025-12-05 08:37:13.450233922 +0000 UTC m=+0.329745309 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:37:13 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:37:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:37:28 localhost podman[84412]: 2025-12-05 08:37:28.196207999 +0000 UTC m=+0.082231031 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044) Dec 5 03:37:28 localhost podman[84412]: 2025-12-05 08:37:28.226365774 +0000 UTC m=+0.112388826 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=nova_compute, distribution-scope=public, vcs-type=git) Dec 5 03:37:28 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:37:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:37:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:37:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:37:30 localhost systemd[1]: tmp-crun.pDtsBb.mount: Deactivated successfully. Dec 5 03:37:30 localhost podman[84438]: 2025-12-05 08:37:30.215544091 +0000 UTC m=+0.099508671 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:37:30 localhost podman[84438]: 2025-12-05 08:37:30.241632991 +0000 UTC m=+0.125597591 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, tcib_managed=true) Dec 5 03:37:30 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:37:30 localhost podman[84440]: 2025-12-05 08:37:30.25791634 +0000 UTC m=+0.133619397 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:37:30 localhost podman[84439]: 2025-12-05 08:37:30.181310962 +0000 UTC m=+0.063325632 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond) Dec 5 03:37:30 localhost podman[84440]: 2025-12-05 08:37:30.302715703 +0000 UTC m=+0.178418740 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:37:30 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:37:30 localhost podman[84439]: 2025-12-05 08:37:30.315694312 +0000 UTC m=+0.197708962 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, tcib_managed=true, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, io.buildah.version=1.41.4, name=rhosp17/openstack-cron) Dec 5 03:37:30 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 03:37:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:37:31 localhost systemd[1]: tmp-crun.93Cx0Q.mount: Deactivated successfully. Dec 5 03:37:31 localhost podman[84513]: 2025-12-05 08:37:31.21568906 +0000 UTC m=+0.100531893 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_migration_target, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:37:31 localhost podman[84513]: 2025-12-05 08:37:31.607012396 +0000 UTC m=+0.491855229 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 
nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Dec 5 03:37:31 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:37:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:37:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:37:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:37:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:37:36 localhost podman[84616]: 2025-12-05 08:37:36.212596446 +0000 UTC m=+0.093166467 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:37:36 localhost podman[84616]: 2025-12-05 08:37:36.223851901 +0000 UTC m=+0.104421962 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, container_name=iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-type=git, distribution-scope=public) Dec 5 03:37:36 localhost systemd[1]: tmp-crun.nkNkYo.mount: Deactivated successfully. Dec 5 03:37:36 localhost podman[84617]: 2025-12-05 08:37:36.274434482 +0000 UTC m=+0.152543408 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:37:36 localhost podman[84617]: 2025-12-05 08:37:36.283538621 +0000 UTC m=+0.161647517 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1) Dec 5 03:37:36 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:37:36 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:37:36 localhost podman[84615]: 2025-12-05 08:37:36.365472012 +0000 UTC m=+0.247933451 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, vcs-type=git, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red 
Hat, Inc.) Dec 5 03:37:36 localhost podman[84618]: 2025-12-05 08:37:36.411768962 +0000 UTC m=+0.286275967 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Dec 5 03:37:36 localhost podman[84618]: 2025-12-05 08:37:36.435031194 +0000 UTC m=+0.309538259 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com) Dec 5 03:37:36 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:37:36 localhost podman[84615]: 2025-12-05 08:37:36.490483014 +0000 UTC m=+0.372944483 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, container_name=ovn_metadata_agent, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 5 03:37:36 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:37:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:37:44 localhost podman[84747]: 2025-12-05 08:37:44.198997262 +0000 UTC m=+0.085191503 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, config_id=tripleo_step1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd) Dec 5 03:37:44 localhost podman[84747]: 2025-12-05 08:37:44.405536273 +0000 UTC m=+0.291730454 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 
(image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd) Dec 5 03:37:44 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:37:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
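The cycle repeating above is podman's systemd-driven healthcheck: systemd starts a transient <container-id>.service that runs /usr/bin/podman healthcheck run <id>, podman logs a health_status event and then an exec_died event for the healthcheck exec session, and the transient unit ends with "Deactivated successfully." A minimal sketch for summarizing these events from a saved copy of this log follows; the file path and the label order inside the parentheses are assumptions, not guarantees of the format:

    import re
    from collections import Counter

    # Matches podman event records like the ones above:
    #   container health_status <64-hex-id> (image=..., name=metrics_qdr, health_status=healthy, ...)
    EVENT = re.compile(
        r"container health_status (?P<cid>[0-9a-f]{64}) "
        r"\(.*?name=(?P<name>[^,]+).*?health_status=(?P<status>[^,)]+)"
    )

    def summarize(path="node.log"):  # path is an assumption
        counts = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = EVENT.search(line)
                if m:
                    counts[(m.group("name"), m.group("status"))] += 1
        return counts

    for (name, status), n in sorted(summarize().items()):
        print(f"{name:28} {status:9} {n}")

Run against this excerpt it reports one "healthy" hit per check for metrics_qdr, nova_compute, ovn_controller, ovn_metadata_agent, and the rest.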
Dec 5 03:37:59 localhost podman[84777]: 2025-12-05 08:37:59.213009614 +0000 UTC m=+0.095626772 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=nova_compute, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Dec 5 03:37:59 localhost podman[84777]: 2025-12-05 08:37:59.246646225 +0000 UTC m=+0.129263453 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Dec 5 03:37:59 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:38:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:38:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:38:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:38:01 localhost systemd[1]: tmp-crun.kzAdjl.mount: Deactivated successfully. Dec 5 03:38:01 localhost podman[84803]: 2025-12-05 08:38:01.21717481 +0000 UTC m=+0.100569703 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z) Dec 5 03:38:01 localhost systemd[1]: tmp-crun.LtdEFy.mount: Deactivated successfully. 
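The config_data=... label carried by every event above is a Python-literal dict (single-quoted strings, True/False booleans), so it can be decoded with ast.literal_eval once the balanced {...} span is cut out of the record. A sketch, returning None when the label was truncated mid-dict:

    import ast

    def extract_config_data(line):
        start = line.find("config_data={")
        if start < 0:
            return None
        i = line.index("{", start)
        depth = 0
        for j in range(i, len(line)):
            if line[j] == "{":
                depth += 1
            elif line[j] == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[i:j + 1])
        return None  # record ended before the dict closed

    # Fragment of the ceilometer_agent_compute event above:
    cfg = extract_config_data("config_data={'net': 'host', 'privileged': False, "
                              "'healthcheck': {'test': '/openstack/healthcheck'}}")
    print(cfg["healthcheck"]["test"])  # -> /openstack/healthcheck

Depth counting is needed because the dicts nest (environment, healthcheck) and a naive split on the first "}" would cut the span short.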
Dec 5 03:38:01 localhost podman[84805]: 2025-12-05 08:38:01.271463255 +0000 UTC m=+0.148932586 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1) Dec 5 03:38:01 localhost podman[84805]: 2025-12-05 08:38:01.321683705 +0000 UTC m=+0.199152996 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc.) Dec 5 03:38:01 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:38:01 localhost podman[84804]: 2025-12-05 08:38:01.322415747 +0000 UTC m=+0.202242061 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, distribution-scope=public, 
name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1761123044, vcs-type=git, container_name=logrotate_crond, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4) Dec 5 03:38:01 localhost podman[84804]: 2025-12-05 08:38:01.401984145 +0000 UTC m=+0.281810479 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, name=rhosp17/openstack-cron, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Dec 5 03:38:01 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 03:38:01 localhost podman[84803]: 2025-12-05 08:38:01.425977201 +0000 UTC m=+0.309372144 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:38:01 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:38:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:38:02 localhost podman[84873]: 2025-12-05 08:38:02.194407516 +0000 UTC m=+0.080222209 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12) Dec 5 03:38:02 localhost podman[84873]: 2025-12-05 08:38:02.544137147 +0000 UTC m=+0.429951790 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4) Dec 5 03:38:02 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:38:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:38:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:38:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:38:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
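The same status these transient units record can be queried from podman directly. A minimal sketch, using container names taken from the events above; the Go template path is .State.Healthcheck.Status on older podman releases and .State.Health.Status on newer ones, so adjust to the installed version:

    import subprocess

    def health(name, field="{{.State.Healthcheck.Status}}"):
        out = subprocess.run(
            ["podman", "inspect", "--format", field, name],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    for name in ("ovn_controller", "ovn_metadata_agent", "iscsid", "collectd"):
        print(name, health(name))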
Dec 5 03:38:07 localhost podman[84898]: 2025-12-05 08:38:07.196744139 +0000 UTC m=+0.075291949 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, vcs-type=git, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public) Dec 5 03:38:07 localhost podman[84898]: 2025-12-05 08:38:07.25157935 +0000 UTC m=+0.130127200 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, 
tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:38:07 localhost systemd[1]: tmp-crun.HBjWrH.mount: Deactivated successfully. 
Dec 5 03:38:07 localhost podman[84896]: 2025-12-05 08:38:07.26756248 +0000 UTC m=+0.150391291 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, container_name=ovn_metadata_agent, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Dec 5 03:38:07 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
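The tripleo_step4 containers are checked at 03:37:36 and again at 03:38:07, roughly 31 s apart, consistent with podman's default 30 s healthcheck interval plus scheduling slack. A sketch that confirms the cadence by diffing consecutive event timestamps, in the format the podman records above use:

    from datetime import datetime

    stamps = {}

    def note(name, ts):  # ts like "2025-12-05 08:38:07.196744139"
        t = datetime.strptime(ts[:26], "%Y-%m-%d %H:%M:%S.%f")
        prev = stamps.get(name)
        stamps[name] = t
        if prev:
            print(f"{name}: {(t - prev).total_seconds():.1f}s since last check")

    note("ovn_controller", "2025-12-05 08:37:36.411768962")
    note("ovn_controller", "2025-12-05 08:38:07.379356217")  # -> ~31.0s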
Dec 5 03:38:07 localhost podman[84896]: 2025-12-05 08:38:07.301863661 +0000 UTC m=+0.184692542 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, architecture=x86_64, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step4, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:38:07 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:38:07 localhost podman[84897]: 2025-12-05 08:38:07.259697278 +0000 UTC m=+0.139122085 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, container_name=iscsid, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 5 03:38:07 localhost podman[84899]: 2025-12-05 08:38:07.379356217 +0000 UTC m=+0.247541589 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-type=git, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, version=17.1.12) Dec 5 03:38:07 localhost podman[84897]: 2025-12-05 08:38:07.390739606 +0000 UTC m=+0.270164433 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc.) Dec 5 03:38:07 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:38:07 localhost podman[84899]: 2025-12-05 08:38:07.430851036 +0000 UTC m=+0.299036348 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.openshift.expose-services=, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 5 03:38:07 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:38:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:38:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:38:15 localhost recover_tripleo_nova_virtqemud[84990]: 61294 Dec 5 03:38:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:38:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 5 03:38:15 localhost podman[84983]: 2025-12-05 08:38:15.211490295 +0000 UTC m=+0.096411066 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container) Dec 5 03:38:15 localhost podman[84983]: 2025-12-05 08:38:15.404407739 +0000 UTC m=+0.289328530 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, container_name=metrics_qdr, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, config_id=tripleo_step1, architecture=x86_64, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team) Dec 5 03:38:15 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:38:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:38:30 localhost podman[85015]: 2025-12-05 08:38:30.206869698 +0000 UTC m=+0.091142665 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z) Dec 5 03:38:30 localhost podman[85015]: 2025-12-05 08:38:30.243683156 +0000 UTC m=+0.127956203 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack 
Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, container_name=nova_compute, build-date=2025-11-19T00:36:58Z) Dec 5 03:38:30 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:38:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:38:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:38:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:38:32 localhost systemd[1]: tmp-crun.SeTQPQ.mount: Deactivated successfully. Dec 5 03:38:32 localhost podman[85043]: 2025-12-05 08:38:32.216909694 +0000 UTC m=+0.097462969 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) 
Dec 5 03:38:32 localhost podman[85041]: 2025-12-05 08:38:32.268330161 +0000 UTC m=+0.150433933 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute) Dec 5 03:38:32 localhost podman[85043]: 2025-12-05 08:38:32.274690446 +0000 UTC m=+0.155243701 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible) Dec 5 03:38:32 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 03:38:32 localhost podman[85042]: 2025-12-05 08:38:32.363453677 +0000 UTC m=+0.245216168 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:38:32 localhost podman[85041]: 2025-12-05 08:38:32.379523129 +0000 UTC m=+0.261626901 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Dec 5 03:38:32 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:38:32 localhost podman[85042]: 2025-12-05 08:38:32.39815566 +0000 UTC m=+0.279918151 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12) Dec 5 03:38:32 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:38:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:38:33 localhost podman[85113]: 2025-12-05 08:38:33.202049804 +0000 UTC m=+0.087569796 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) 
Dec 5 03:38:33 localhost podman[85113]: 2025-12-05 08:38:33.575676166 +0000 UTC m=+0.461196068 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Dec 5 03:38:33 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:38:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:38:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:38:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:38:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:38:38 localhost systemd[1]: tmp-crun.qGqW2i.mount: Deactivated successfully. 
Dec 5 03:38:38 localhost podman[85217]: 2025-12-05 08:38:38.215800696 +0000 UTC m=+0.091323110 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:38:38 localhost podman[85216]: 2025-12-05 08:38:38.259183086 +0000 UTC m=+0.134444922 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-collectd-container, distribution-scope=public, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Dec 5 03:38:38 localhost podman[85216]: 2025-12-05 08:38:38.272548936 +0000 UTC m=+0.147810582 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team) Dec 5 03:38:38 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:38:38 localhost podman[85217]: 2025-12-05 08:38:38.290081213 +0000 UTC m=+0.165603657 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller) Dec 5 03:38:38 localhost podman[85215]: 2025-12-05 08:38:38.311384656 +0000 UTC m=+0.189433508 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, 
description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, release=1761123044, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=iscsid) Dec 5 03:38:38 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
Dec 5 03:38:38 localhost podman[85215]: 2025-12-05 08:38:38.321535657 +0000 UTC m=+0.199584529 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com) Dec 5 03:38:38 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:38:38 localhost podman[85214]: 2025-12-05 08:38:38.364374201 +0000 UTC m=+0.246165947 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, container_name=ovn_metadata_agent) Dec 5 03:38:38 localhost podman[85214]: 2025-12-05 08:38:38.399012622 +0000 UTC m=+0.280804428 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, 
managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:38:38 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:38:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:38:46 localhost podman[85346]: 2025-12-05 08:38:46.193366381 +0000 UTC m=+0.076027461 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 5 03:38:46 localhost podman[85346]: 2025-12-05 08:38:46.384511431 +0000 UTC m=+0.267172541 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO 
Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 5 03:38:46 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:39:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:39:01 localhost systemd[1]: tmp-crun.sRDiRO.mount: Deactivated successfully. 
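Every event record carries the container's TripleO-generated definition in its config_data label (image, net, privileged, user, environment, volumes, ...). As a rough editorial sketch only, using nothing beyond the keys visible in the metrics_qdr record above (the function and the abridged cfg dict are hypothetical, and real deployments pass more options such as the healthcheck test and restart policy), that dict maps approximately onto a podman run invocation:

```python
def podman_argv(name, cfg):
    """Map a config_data-style dict onto approximate `podman run` arguments."""
    argv = ["podman", "run", "--detach", "--name", name,
            "--net", cfg.get("net", "bridge")]
    if cfg.get("privileged"):
        argv.append("--privileged")
    if "user" in cfg:
        argv += ["--user", cfg["user"]]
    for key, val in cfg.get("environment", {}).items():
        argv += ["--env", f"{key}={val}"]
    for vol in cfg.get("volumes", []):
        argv += ["--volume", vol]
    return argv + [cfg["image"]]

cfg = {  # abridged from the metrics_qdr record above
    "image": "registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1",
    "net": "host",
    "privileged": False,
    "user": "qdrouterd",
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "volumes": ["/etc/hosts:/etc/hosts:ro", "/var/lib/metrics_qdr:/var/lib/qdrouterd:z"],
}
print(" ".join(podman_argv("metrics_qdr", cfg)))
```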
Dec 5 03:39:01 localhost podman[85377]: 2025-12-05 08:39:01.199518747 +0000 UTC m=+0.085183642 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-type=git) Dec 5 03:39:01 localhost podman[85377]: 2025-12-05 08:39:01.22506492 +0000 UTC m=+0.110729835 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=nova_compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Dec 5 03:39:01 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:39:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:39:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:39:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:39:03 localhost systemd[1]: tmp-crun.0OQZHu.mount: Deactivated successfully. Dec 5 03:39:03 localhost podman[85405]: 2025-12-05 08:39:03.232382663 +0000 UTC m=+0.100632286 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:39:03 localhost podman[85405]: 2025-12-05 08:39:03.284770718 +0000 UTC m=+0.153020341 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64) Dec 5 03:39:03 localhost podman[85403]: 2025-12-05 08:39:03.322225257 +0000 UTC m=+0.197577918 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:39:03 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:39:03 localhost podman[85403]: 2025-12-05 08:39:03.382843115 +0000 UTC m=+0.258195836 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, 
build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044) Dec 5 03:39:03 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:39:03 localhost podman[85404]: 2025-12-05 08:39:03.477835537 +0000 UTC m=+0.348358730 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, distribution-scope=public) Dec 5 03:39:03 localhost podman[85404]: 2025-12-05 08:39:03.508832827 +0000 UTC m=+0.379355990 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4) Dec 5 03:39:03 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:39:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
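The repeating pattern throughout this section is: systemd starts a transient unit running /usr/bin/podman healthcheck run <container id>, podman logs a health_status event when the probe completes and an exec_died event when the exec session exits, and systemd then reports the unit deactivated. The m=+<seconds> field appears to be the offset since the reporting podman process started, so the gap between the two records of one probe approximates the probe duration. A small editorial sketch, using the ceilometer_agent_ipmi pair above as input:

```python
def monotonic_offset(record):
    """Extract the 'm=+<seconds>' podman monotonic offset from an event record."""
    marker = " m=+"
    start = record.index(marker) + len(marker)
    end = record.index(" ", start)
    return float(record[start:end])

t0 = monotonic_offset("2025-12-05 08:39:03.232382663 +0000 UTC m=+0.100632286 container health_status ...")
t1 = monotonic_offset("2025-12-05 08:39:03.284770718 +0000 UTC m=+0.153020341 container exec_died ...")
print(f"probe took {t1 - t0:.3f}s")  # probe took 0.052s
```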
Dec 5 03:39:04 localhost podman[85477]: 2025-12-05 08:39:04.19104492 +0000 UTC m=+0.081264912 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, batch=17.1_20251118.1, container_name=nova_migration_target) Dec 5 03:39:04 localhost podman[85477]: 2025-12-05 08:39:04.610644803 +0000 UTC m=+0.500864785 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:39:04 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:39:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:39:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:39:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:39:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:39:09 localhost systemd[1]: tmp-crun.9hzksr.mount: Deactivated successfully. 
Dec 5 03:39:09 localhost podman[85501]: 2025-12-05 08:39:09.253703182 +0000 UTC m=+0.131240324 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.expose-services=, container_name=iscsid, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Dec 5 03:39:09 localhost podman[85501]: 2025-12-05 08:39:09.261635736 +0000 UTC m=+0.139172898 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public) Dec 5 03:39:09 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:39:09 localhost podman[85500]: 2025-12-05 08:39:09.30940134 +0000 UTC m=+0.186860450 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:39:09 localhost podman[85508]: 2025-12-05 08:39:09.220840525 +0000 UTC m=+0.090573707 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, tcib_managed=true, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Dec 5 03:39:09 localhost podman[85508]: 2025-12-05 08:39:09.352080448 +0000 UTC m=+0.221813670 container exec_died 
f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Dec 5 03:39:09 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
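Every probe in this capture reports health_status=healthy. Outside the log, the same state can be read back with podman inspect; a hedged sketch (the JSON field name differs across podman versions — State.Health in newer releases, State.Healthcheck in older ones — so this tries both):

```python
import json
import subprocess

def health_status(container):
    """Ask podman for a container's last recorded healthcheck status."""
    out = subprocess.run(["podman", "inspect", container],
                         check=True, capture_output=True, text=True).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

print(health_status("ovn_controller"))  # expected "healthy" per the records above
```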
Dec 5 03:39:09 localhost podman[85502]: 2025-12-05 08:39:09.405167045 +0000 UTC m=+0.280407986 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4) Dec 5 03:39:09 localhost podman[85502]: 2025-12-05 08:39:09.41577166 +0000 UTC m=+0.291012651 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, release=1761123044, architecture=x86_64, distribution-scope=public, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:39:09 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:39:09 localhost podman[85500]: 2025-12-05 08:39:09.456144668 +0000 UTC m=+0.333603828 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 5 03:39:09 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:39:10 localhost systemd[1]: tmp-crun.x2UUbL.mount: Deactivated successfully. Dec 5 03:39:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:39:17 localhost systemd[1]: tmp-crun.5FjZ1T.mount: Deactivated successfully. 
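Across the section the same set of containers (iscsid, ovn_metadata_agent, metrics_qdr, nova_compute, ceilometer_agent_compute, ceilometer_agent_ipmi, logrotate_crond, nova_migration_target, ovn_controller, collectd) cycles through healthy probes. To summarize a longer capture, the health_status records can be tallied per container; an editorial sketch ("messages.log" is a hypothetical saved copy of this journal, and the regex assumes each record sits on a single line):

```python
import re
from collections import Counter

# Matches: container health_status <id> (image=..., name=<name>, health_status=<status>
PROBE_RE = re.compile(
    r"container health_status \w+ \(image=[^,]+, name=([\w-]+), health_status=(\w+)"
)

counts = Counter()
with open("messages.log") as fh:  # hypothetical saved copy of this log
    for line in fh:
        m = PROBE_RE.search(line)
        if m:
            counts[m.groups()] += 1

for (name, status), n in sorted(counts.items()):
    print(f"{name}: {status} x{n}")
```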
Dec 5 03:39:17 localhost podman[85583]: 2025-12-05 08:39:17.211013156 +0000 UTC m=+0.097648464 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd) Dec 5 03:39:17 localhost podman[85583]: 2025-12-05 08:39:17.436967522 +0000 UTC m=+0.323602840 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:39:17 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:39:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:39:32 localhost systemd[1]: tmp-crun.Fgt1Yy.mount: Deactivated successfully. 
Dec 5 03:39:32 localhost podman[85613]: 2025-12-05 08:39:32.200399353 +0000 UTC m=+0.087606566 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step5) Dec 5 03:39:32 localhost podman[85613]: 2025-12-05 08:39:32.226519355 +0000 UTC m=+0.113726488 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step5, container_name=nova_compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vcs-type=git, build-date=2025-11-19T00:36:58Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container) Dec 5 03:39:32 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:39:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:39:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:39:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:39:34 localhost podman[85641]: 2025-12-05 08:39:34.193482861 +0000 UTC m=+0.065823879 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12) Dec 5 03:39:34 localhost podman[85641]: 2025-12-05 08:39:34.246394613 +0000 UTC m=+0.118735631 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container) Dec 5 03:39:34 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:39:34 localhost podman[85639]: 2025-12-05 08:39:34.263300231 +0000 UTC m=+0.144996376 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Dec 5 03:39:34 localhost podman[85639]: 2025-12-05 08:39:34.292551047 +0000 UTC m=+0.174247192 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 5 03:39:34 localhost podman[85640]: 2025-12-05 08:39:34.309949961 +0000 UTC m=+0.188834910 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, container_name=logrotate_crond, vcs-type=git, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:39:34 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:39:34 localhost podman[85640]: 2025-12-05 08:39:34.341211189 +0000 UTC m=+0.220096158 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4) Dec 5 03:39:34 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:39:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:39:35 localhost systemd[1]: tmp-crun.4QyXPY.mount: Deactivated successfully. 
Dec 5 03:39:35 localhost podman[85712]: 2025-12-05 08:39:35.192408751 +0000 UTC m=+0.079555489 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, url=https://www.redhat.com) Dec 5 03:39:35 localhost podman[85712]: 2025-12-05 08:39:35.595827548 +0000 UTC m=+0.482974326 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, container_name=nova_migration_target, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team) Dec 5 03:39:35 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:39:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:39:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:39:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:39:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:39:40 localhost systemd[1]: tmp-crun.R96EdM.mount: Deactivated successfully. 
Dec 5 03:39:40 localhost podman[85812]: 2025-12-05 08:39:40.232522831 +0000 UTC m=+0.108748044 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Dec 5 03:39:40 localhost systemd[1]: tmp-crun.n5DdDh.mount: Deactivated successfully. 
Dec 5 03:39:40 localhost podman[85814]: 2025-12-05 08:39:40.279723088 +0000 UTC m=+0.156222899 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:39:40 localhost podman[85814]: 2025-12-05 08:39:40.288513818 +0000 UTC m=+0.165013629 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:39:40 localhost podman[85812]: 2025-12-05 08:39:40.288862549 +0000 UTC m=+0.165087722 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, container_name=ovn_metadata_agent, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, 
release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc.) Dec 5 03:39:40 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:39:40 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:39:40 localhost podman[85813]: 2025-12-05 08:39:40.431198161 +0000 UTC m=+0.307959231 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, container_name=iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=) Dec 5 03:39:40 localhost podman[85813]: 2025-12-05 08:39:40.442211849 +0000 UTC m=+0.318972889 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, distribution-scope=public, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, vcs-type=git, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Dec 5 03:39:40 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:39:40 localhost podman[85815]: 2025-12-05 08:39:40.528713501 +0000 UTC m=+0.404177831 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1761123044, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller) Dec 5 03:39:40 localhost podman[85815]: 2025-12-05 08:39:40.555017318 +0000 UTC m=+0.430481668 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container) Dec 5 03:39:40 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:39:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:39:48 localhost podman[85940]: 2025-12-05 08:39:48.220502125 +0000 UTC m=+0.106070992 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step1, container_name=metrics_qdr, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:39:48 localhost podman[85940]: 2025-12-05 08:39:48.44055133 +0000 UTC m=+0.326120167 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4) Dec 5 03:39:48 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:39:59 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:39:59 localhost recover_tripleo_nova_virtqemud[85969]: 61294 Dec 5 03:39:59 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:39:59 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:40:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
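The repeated "Started /usr/bin/podman healthcheck run <id>" records are transient systemd units created by podman's healthcheck timers; only the paired podman event records (health_status / exec_died) carry a container name, so the 64-hex id is the sole join key between the two kinds of lines. A minimal sketch for making the systemd lines readable, using only the standard library and regexes matched to the line shapes in this journal (the shapes are an assumption drawn from this capture, not a podman API):

    import re
    import sys

    # podman event records look like:
    #   "container health_status <64-hex-id> (image=..., name=<name>, ..."
    # while the systemd records carry only the 64-hex container id.
    EVENT = re.compile(
        r"container (?:health_status|exec_died) ([0-9a-f]{64}) "
        r"\(image=[^,]+, name=([^,)]+)"
    )
    HEALTH_RUN = re.compile(r"healthcheck run ([0-9a-f]{64})")

    def annotate(lines):
        # First pass: learn the id -> name mapping from the event records.
        id_to_name = {}
        for line in lines:
            for cid, name in EVENT.findall(line):
                id_to_name[cid] = name
        # Second pass: prefix each "healthcheck run <id>" line with the name.
        for line in lines:
            m = HEALTH_RUN.search(line)
            if m:
                yield f"{id_to_name.get(m.group(1), '?')}: {line.strip()}"

    if __name__ == "__main__":
        for out in annotate(sys.stdin.readlines()):
            print(out)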
Dec 5 03:40:03 localhost podman[85970]: 2025-12-05 08:40:03.213301205 +0000 UTC m=+0.098649025 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4) Dec 5 03:40:03 localhost podman[85970]: 2025-12-05 08:40:03.268804786 +0000 UTC m=+0.154152596 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z) Dec 5 03:40:03 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:40:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:40:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:40:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
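Each config_data block names the container's healthcheck as a 'test' command, e.g. '/openstack/healthcheck 5672' for nova_compute above, where the trailing argument is a port for the script to probe. One run can be reproduced by hand; a sketch, assuming podman is on PATH and the nova_compute container from these events is running:

    import subprocess

    # Exec the configured 'test' command inside the running container, the
    # same command the healthcheck timer runs (equivalently:
    # `podman healthcheck run nova_compute`).
    result = subprocess.run(
        ["podman", "exec", "nova_compute", "/openstack/healthcheck", "5672"],
        capture_output=True, text=True,
    )
    # Exit status 0 is what podman records as health_status=healthy.
    print("healthy" if result.returncode == 0 else "unhealthy")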
Dec 5 03:40:05 localhost systemd[1]: tmp-crun.JN6yzV.mount: Deactivated successfully. Dec 5 03:40:05 localhost podman[85997]: 2025-12-05 08:40:05.210501247 +0000 UTC m=+0.085630137 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi) Dec 5 03:40:05 localhost podman[85995]: 2025-12-05 08:40:05.215797949 +0000 UTC m=+0.102074720 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible) Dec 5 03:40:05 localhost podman[85996]: 2025-12-05 08:40:05.263148801 +0000 UTC m=+0.140215280 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.12) Dec 5 03:40:05 localhost podman[85997]: 2025-12-05 08:40:05.268638978 +0000 UTC m=+0.143767868 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Dec 5 03:40:05 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 03:40:05 localhost podman[85996]: 2025-12-05 08:40:05.296827123 +0000 UTC m=+0.173893662 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:40:05 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
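Note that the config_data label is a Python dict literal (single quotes, True/False), not JSON, so recovering it from a live container takes ast.literal_eval on top of podman inspect. A sketch against the logrotate_crond container seen above; the container name and label come straight from these events, but the approach itself is an illustration, not TripleO tooling:

    import ast
    import json
    import subprocess

    # `podman inspect` emits a JSON array with one object per container;
    # labels live under Config.Labels as plain strings.
    raw = subprocess.run(
        ["podman", "inspect", "logrotate_crond"],
        check=True, capture_output=True, text=True,
    ).stdout
    labels = json.loads(raw)[0]["Config"]["Labels"]

    # config_data is a Python literal, so parse it with ast, not json.
    config = ast.literal_eval(labels["config_data"])
    print(config["healthcheck"]["test"])
    print(len(config["volumes"]), "bind mounts")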
Dec 5 03:40:05 localhost podman[85995]: 2025-12-05 08:40:05.323045027 +0000 UTC m=+0.209321748 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044) Dec 5 03:40:05 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:40:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:40:06 localhost podman[86066]: 2025-12-05 08:40:06.202163715 +0000 UTC m=+0.089789333 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:40:06 localhost podman[86066]: 2025-12-05 08:40:06.580110751 +0000 UTC m=+0.467736449 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=nova_migration_target, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:40:06 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:40:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:40:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:40:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:40:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
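After each run, systemd reports the transient <id>.service as "Deactivated successfully" and podman stores the verdict in the container state, which is where the health_status=healthy field in these events comes from. A sketch that reads the stored status back; the key spelling moved between podman releases, so both are probed (an assumption made to stay version-agnostic):

    import json
    import subprocess

    def health_status(container: str) -> str:
        raw = subprocess.run(
            ["podman", "inspect", container],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(raw)[0]["State"]
        # Older podman used "Healthcheck"; newer releases expose the
        # Docker-style "Health" key, so accept either.
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(health_status("nova_migration_target"))  # expect "healthy"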
Dec 5 03:40:11 localhost podman[86091]: 2025-12-05 08:40:11.217008272 +0000 UTC m=+0.092789175 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, container_name=iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:40:11 localhost podman[86091]: 2025-12-05 08:40:11.227850105 +0000 UTC m=+0.103630998 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 5 03:40:11 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:40:11 localhost systemd[1]: tmp-crun.UTsxHZ.mount: Deactivated successfully. 
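The volumes lists in these config_data blocks all follow the HOST:CTR[:OPTS] bind-mount shape, with option suffixes such as ro (read-only), z (SELinux relabel with a shared label), and shared (mount propagation). A small parser for auditing them, assuming only that three-part shape:

    # Split a Kolla/TripleO volume spec of the form HOST:CTR[:OPTS] into
    # (host path, container path, option list).
    def split_volume(spec: str):
        parts = spec.split(":")
        host, ctr = parts[0], parts[1]
        opts = parts[2].split(",") if len(parts) > 2 else []
        return host, ctr, opts

    print(split_volume("/var/lib/iscsi:/var/lib/iscsi:z"))
    print(split_volume("/var/lib/openvswitch/ovn:/run/ovn:shared,z"))
    print(split_volume("/run:/run"))  # no options: default read-write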
Dec 5 03:40:11 localhost podman[86090]: 2025-12-05 08:40:11.334154583 +0000 UTC m=+0.211837045 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent) Dec 5 03:40:11 localhost podman[86090]: 2025-12-05 08:40:11.385763465 +0000 UTC m=+0.263445967 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64) Dec 5 03:40:11 localhost podman[86093]: 2025-12-05 08:40:11.426158883 +0000 UTC m=+0.293168397 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, release=1761123044, container_name=ovn_controller, vcs-type=git, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:40:11 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:40:11 localhost podman[86093]: 2025-12-05 08:40:11.479654664 +0000 UTC m=+0.346664148 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 5 03:40:11 localhost systemd[1]: 
f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:40:11 localhost podman[86092]: 2025-12-05 08:40:11.531861843 +0000 UTC m=+0.403950513 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z) Dec 5 03:40:11 localhost podman[86092]: 2025-12-05 08:40:11.546535503 +0000 UTC m=+0.418624153 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team) Dec 5 03:40:11 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:40:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
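Each container reappears on a fixed cadence (metrics_qdr at 03:39:48 and again at 03:40:19, consistent with podman's default 30-second interval). The per-container cadence can be computed straight from the journal; a sketch, assuming one record per line, the event shape used throughout this log, and the year 2025 that syslog timestamps omit:

    import re
    from datetime import datetime

    TS = re.compile(
        r"^(\w{3}) +(\d+) (\d\d:\d\d:\d\d) \S+ podman\[\d+\]: .*"
        r"container health_status [0-9a-f]{64} \(image=[^,]+, name=([^,)]+)"
    )

    def intervals(lines, year=2025):
        seen = {}
        for line in lines:
            m = TS.match(line)
            if not m:
                continue
            mon, day, hms, name = m.groups()
            t = datetime.strptime(f"{year} {mon} {day} {hms}",
                                  "%Y %b %d %H:%M:%S")
            if name in seen:
                # Seconds since this container's previous health check.
                yield name, (t - seen[name]).total_seconds()
            seen[name] = t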
Dec 5 03:40:19 localhost podman[86176]: 2025-12-05 08:40:19.195758684 +0000 UTC m=+0.080063676 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:40:19 localhost podman[86176]: 2025-12-05 08:40:19.382186239 +0000 UTC m=+0.266491211 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd) Dec 5 03:40:19 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:40:22 localhost sshd[86205]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:40:29 localhost sshd[86207]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:40:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
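
Each healthcheck run emits two podman events from the same process, container health_status and then container exec_died, and both carry an m=+<seconds> monotonic offset measured from that podman process's start. Subtracting the offsets gives the healthcheck's duration; for the metrics_qdr run logged by podman[86176] above that is roughly 0.19 s. A worked check with the two values copied from the log:

    # m= offsets from podman[86176] in the metrics_qdr record above
    health_status_m = 0.080063676  # container health_status event
    exec_died_m     = 0.266491211  # container exec_died event

    print(f"healthcheck took ~{exec_died_m - health_status_m:.3f} s")  # ~0.186 s
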
Dec 5 03:40:34 localhost podman[86209]: 2025-12-05 08:40:34.204537786 +0000 UTC m=+0.089884757 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:40:34 localhost podman[86209]: 2025-12-05 08:40:34.267792195 +0000 UTC m=+0.153139246 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible) Dec 5 03:40:34 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:40:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:40:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:40:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
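
The config_data={...} blob that tripleo_ansible attaches to every container event is a Python-literal dict (single-quoted strings, capitalized True/False), so once the brace-balanced span is sliced out of a record it parses with ast.literal_eval. A sketch against a trimmed copy of the nova_compute config above (the volumes list is abridged here, and extracting the balanced span from a raw line is left out):

    import ast

    # Abridged copy of the nova_compute config_data shown above.
    raw = """{'depends_on': ['tripleo_nova_libvirt.target'],
     'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'},
     'healthcheck': {'test': '/openstack/healthcheck 5672'},
     'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1',
     'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always',
     'user': 'nova',
     'volumes': ['/etc/hosts:/etc/hosts:ro', '/run/nova:/run/nova:z']}"""

    cfg = ast.literal_eval(raw)
    print(cfg["healthcheck"]["test"])     # /openstack/healthcheck 5672
    print(cfg["net"], cfg["privileged"])  # host True
    for vol in cfg["volumes"]:
        print("  bind mount:", vol)
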
Dec 5 03:40:36 localhost systemd[1]: tmp-crun.L7F1sl.mount: Deactivated successfully. Dec 5 03:40:36 localhost podman[86237]: 2025-12-05 08:40:36.218191752 +0000 UTC m=+0.096017734 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 5 03:40:36 localhost podman[86237]: 2025-12-05 08:40:36.273455306 +0000 UTC m=+0.151281318 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:40:36 localhost podman[86235]: 2025-12-05 08:40:36.278600184 +0000 UTC m=+0.162786401 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, tcib_managed=true, release=1761123044, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:40:36 localhost podman[86236]: 2025-12-05 08:40:36.32741403 +0000 UTC m=+0.208442630 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, container_name=logrotate_crond, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1) Dec 5 03:40:36 localhost podman[86235]: 2025-12-05 08:40:36.331675661 +0000 UTC 
m=+0.215861908 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:40:36 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:40:36 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
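
Three containers (ceilometer_agent_ipmi, ceilometer_agent_compute, logrotate_crond) were checked within the same second here, and for monitoring purposes only a few fields per record matter: the event type, name=, and health_status=. A regex sketch that pulls those out of lines shaped like these; the field order through name= is an assumption based on the records in this excerpt:

    import re

    EVENT = re.compile(
        r"container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+)"
        r"(?:, health_status=(?P<status>[^,]+))?"
    )

    line = ("container health_status "
            "bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f "
            "(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, "
            "name=ceilometer_agent_ipmi, health_status=healthy, ...)")
    m = EVENT.search(line)
    if m:
        # ceilometer_agent_ipmi health_status healthy
        print(m["name"], m["event"], m["status"])
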
Dec 5 03:40:36 localhost podman[86236]: 2025-12-05 08:40:36.408541557 +0000 UTC m=+0.289570147 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:40:36 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:40:37 localhost systemd[83304]: Created slice User Background Tasks Slice. Dec 5 03:40:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:40:37 localhost systemd[83304]: Starting Cleanup of User's Temporary Files and Directories... Dec 5 03:40:37 localhost systemd[83304]: Finished Cleanup of User's Temporary Files and Directories. 
Dec 5 03:40:37 localhost podman[86313]: 2025-12-05 08:40:37.180512622 +0000 UTC m=+0.070637306 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, container_name=nova_migration_target, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:40:37 localhost podman[86313]: 2025-12-05 08:40:37.54646924 +0000 UTC m=+0.436593914 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, container_name=nova_migration_target, version=17.1.12, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1) Dec 5 03:40:37 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:40:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:40:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:40:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:40:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:40:42 localhost systemd[1]: tmp-crun.QFK0ny.mount: Deactivated successfully. Dec 5 03:40:42 localhost systemd[1]: tmp-crun.upauJ3.mount: Deactivated successfully. 
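
The config_data keys map closely onto podman CLI options: net to --network, privileged to --privileged, user to --user, each volumes entry to --volume, and healthcheck.test to --health-cmd. The sketch below is an illustrative approximation of that mapping only, not the actual tripleo_ansible/kolla code path, and the "demo" container name default is invented for the example:

    def podman_args(cfg):
        # Rough translation of a TripleO config_data dict into podman run flags.
        args = ["podman", "run", "--detach",
                "--name", cfg.get("container_name", "demo")]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args.append("--privileged")
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        test = cfg.get("healthcheck", {}).get("test")
        if test:
            args += ["--health-cmd", test]
        args.append(cfg["image"])
        return args

    cfg = {"net": "host", "privileged": True, "user": "root",
           "volumes": ["/run/libvirt:/run/libvirt:shared,z"],
           "healthcheck": {"test": "/openstack/healthcheck"},
           "image": "registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1"}
    print(" ".join(podman_args(cfg)))
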
Dec 5 03:40:42 localhost podman[86414]: 2025-12-05 08:40:42.228073562 +0000 UTC m=+0.110739277 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64) Dec 5 03:40:42 localhost podman[86411]: 2025-12-05 08:40:42.183751543 +0000 UTC m=+0.076641650 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true) Dec 5 03:40:42 localhost podman[86412]: 2025-12-05 08:40:42.23879369 +0000 UTC m=+0.131297336 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, tcib_managed=true, config_id=tripleo_step3, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public) Dec 5 03:40:42 localhost podman[86411]: 2025-12-05 08:40:42.26783542 +0000 UTC m=+0.160725517 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 5 03:40:42 localhost podman[86412]: 2025-12-05 08:40:42.274199765 +0000 UTC m=+0.166703401 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.12, container_name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:40:42 localhost podman[86414]: 2025-12-05 08:40:42.274472923 +0000 UTC m=+0.157138648 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, container_name=ovn_controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:40:42 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:40:42 localhost podman[86413]: 2025-12-05 08:40:42.282220111 +0000 UTC m=+0.170896430 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12) Dec 5 03:40:42 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:40:42 localhost podman[86413]: 2025-12-05 08:40:42.318032299 +0000 UTC m=+0.206708598 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, vcs-type=git, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, 
io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z) Dec 5 03:40:42 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:40:42 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:40:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:40:50 localhost podman[86541]: 2025-12-05 08:40:50.267490722 +0000 UTC m=+0.147470101 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Dec 5 03:40:50 localhost podman[86541]: 2025-12-05 08:40:50.456718873 +0000 UTC m=+0.336698292 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, release=1761123044, distribution-scope=public, vendor=Red Hat, 
Inc., com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step1, container_name=metrics_qdr, version=17.1.12, name=rhosp17/openstack-qdrouterd, tcib_managed=true, architecture=x86_64, vcs-type=git, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:40:50 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:41:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
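
The metrics_qdr healthchecks in this excerpt fire at 08:40:19 and again at 08:40:50, about 31 s apart, consistent with podman's default 30 s health interval plus a little timer scheduling latency. The arithmetic, with the two timestamps from the log truncated to microseconds:

    from datetime import datetime

    t1 = datetime.fromisoformat("2025-12-05 08:40:19.195758")
    t2 = datetime.fromisoformat("2025-12-05 08:40:50.267490")

    print((t2 - t1).total_seconds())  # ~31.07 s between successive runs
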
Dec 5 03:41:05 localhost podman[86570]: 2025-12-05 08:41:05.207989644 +0000 UTC m=+0.092803566 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team) Dec 5 03:41:05 localhost podman[86570]: 2025-12-05 08:41:05.241640665 +0000 UTC m=+0.126454567 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack 
Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=nova_compute) Dec 5 03:41:05 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
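The nova_compute config above sets its healthcheck to '/openstack/healthcheck 5672': the shared RHOSP healthcheck script invoked with a port argument (5672 is the usual AMQP port for the message bus). The script's actual logic is not shown in this log, so the following is only a stand-in sketch of what a port-argument liveness check can look like, following the podman healthcheck exit-code convention (0 healthy, nonzero unhealthy); the localhost target and 5-second timeout are assumptions, not values taken from the log:

import socket
import sys

def tcp_probe(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Usage: healthcheck.py [port], defaulting to the 5672 seen in the log.
    port = int(sys.argv[1]) if len(sys.argv) > 1 else 5672
    sys.exit(0 if tcp_probe("127.0.0.1", port) else 1)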
Dec 5 03:41:07 localhost podman[86598]: 2025-12-05 08:41:07.202037141 +0000 UTC m=+0.086034159 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Dec 5 03:41:07 localhost podman[86596]: 2025-12-05 08:41:07.248862025 +0000 UTC m=+0.133881945 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:41:07 localhost podman[86598]: 2025-12-05 08:41:07.255148728 +0000 UTC m=+0.139145786 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Dec 5 03:41:07 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:41:07 localhost podman[86597]: 2025-12-05 08:41:07.32176535 +0000 UTC m=+0.205538261 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-cron) Dec 5 03:41:07 
localhost podman[86596]: 2025-12-05 08:41:07.334516281 +0000 UTC m=+0.219536191 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:41:07 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
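Each run produces a health_status/exec_died pair from the same podman PID, and both events carry a monotonic offset measured from that process's start, so subtracting the two offsets gives how long the check's exec session lived. For podman[86596] above, m=+0.219536191 minus m=+0.133881945 is roughly 0.086 s for ceilometer_agent_compute. A sketch over events parsed by the parse_event() helper shown earlier, pairing by PID and container ID (safe here, since each PID drives exactly one check):

def exec_durations(events):
    """Yield (container_name, seconds) for each health_status/exec_died
    pair, keyed by (podman PID, container ID)."""
    pending = {}
    for ev in events:
        key = (ev["pid"], ev["container_id"])
        if ev["event"] == "health_status":
            pending[key] = ev
        elif ev["event"] == "exec_died" and key in pending:
            first = pending.pop(key)
            yield ev["container_name"], ev["monotonic"] - first["monotonic"]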
Dec 5 03:41:07 localhost podman[86597]: 2025-12-05 08:41:07.35470875 +0000 UTC m=+0.238481721 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, container_name=logrotate_crond, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true) Dec 5 03:41:07 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:41:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:41:08 localhost podman[86666]: 2025-12-05 08:41:08.182170865 +0000 UTC m=+0.073648108 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Dec 5 03:41:08 localhost podman[86666]: 2025-12-05 08:41:08.541815851 +0000 UTC m=+0.433293124 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, 
managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Dec 5 03:41:08 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:41:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:41:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:41:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:41:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
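Every event line carries the container's TripleO config_data verbatim: a Python-literal dict describing the image, net/pid/ipc namespaces, privilege, user, environment, volumes, and healthcheck test. tripleo_ansible consumes dicts of this shape when it creates the containers; the mapping sketched below is a hand-rolled, incomplete illustration for reading the log (restart policy, start_order, ulimits, and several other keys are ignored), not the project's actual translation code:

import ast

def podman_args(config_data_literal, container_name):
    """Render a rough `podman run` argument list from one config_data
    value copied out of these log lines (Python literal syntax)."""
    cfg = ast.literal_eval(config_data_literal)
    args = ["podman", "run", "--detach", "--name", container_name]
    if cfg.get("net"):
        args += ["--network", cfg["net"]]
    if cfg.get("pid"):
        args += ["--pid", cfg["pid"]]
    if cfg.get("ipc"):
        args += ["--ipc", cfg["ipc"]]
    if cfg.get("privileged"):
        args.append("--privileged")
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    for key, value in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    for volume in cfg.get("volumes", []):
        args += ["--volume", volume]
    if "healthcheck" in cfg:
        args += ["--health-cmd", cfg["healthcheck"]["test"]]
    args.append(cfg["image"])
    return args

Feeding it the iscsid config_data above with container_name="iscsid", for example, yields a command with --network host, --privileged, the seventeen --volume mounts, and --health-cmd /openstack/healthcheck.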
Dec 5 03:41:13 localhost podman[86689]: 2025-12-05 08:41:13.209436523 +0000 UTC m=+0.087515363 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:41:13 localhost systemd[1]: tmp-crun.Nt2Dlu.mount: Deactivated successfully. 
Dec 5 03:41:13 localhost podman[86689]: 2025-12-05 08:41:13.262866811 +0000 UTC m=+0.140945651 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, release=1761123044, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, tcib_managed=true) Dec 5 03:41:13 localhost podman[86691]: 2025-12-05 08:41:13.271137855 +0000 UTC m=+0.142966484 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, container_name=collectd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:41:13 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
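Every check in this stretch of the log reports health_status=healthy, which makes an occasional unhealthy result easy to miss among the label noise. A standalone aggregation pass that reduces a journal export to per-container counts and flags anything non-healthy (same fitted-regex caveat as the parser above):

import re
import sys
from collections import Counter

def summarize(lines):
    """Count health results per (container name, status) pair."""
    counts = Counter()
    for line in lines:
        if " container health_status " not in line:
            continue
        name = re.search(r"\bcontainer_name=([^,)]+)", line)
        status = re.search(r"\bhealth_status=([^,)]+)", line)
        if name and status:
            counts[(name.group(1), status.group(1))] += 1
    return counts

if __name__ == "__main__":
    for (name, status), n in sorted(summarize(sys.stdin).items()):
        marker = "" if status == "healthy" else "  <-- check this"
        print(f"{name:30s} {status:12s} {n}{marker}")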
Dec 5 03:41:13 localhost podman[86692]: 2025-12-05 08:41:13.314520694 +0000 UTC m=+0.182656340 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, tcib_managed=true, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 5 03:41:13 localhost podman[86690]: 2025-12-05 08:41:13.377092563 +0000 UTC m=+0.251610364 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, tcib_managed=true, release=1761123044, distribution-scope=public, container_name=iscsid, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:41:13 localhost podman[86691]: 2025-12-05 08:41:13.38548693 +0000 UTC m=+0.257315549 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, 
container_name=collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, url=https://www.redhat.com, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:41:13 localhost podman[86692]: 2025-12-05 08:41:13.392220446 +0000 UTC m=+0.260356112 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller) Dec 5 03:41:13 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:41:13 localhost podman[86690]: 2025-12-05 08:41:13.414769227 +0000 UTC m=+0.289286998 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, tcib_managed=true, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:41:13 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:41:13 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:41:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:41:21 localhost systemd[1]: tmp-crun.m7klA8.mount: Deactivated successfully. 
Dec 5 03:41:21 localhost podman[86779]: 2025-12-05 08:41:21.186480293 +0000 UTC m=+0.075237407 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:41:21 localhost podman[86779]: 2025-12-05 08:41:21.3680759 +0000 UTC m=+0.256833034 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:41:21 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:41:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
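The cadence is visible in the syslog timestamps: metrics_qdr is checked at 03:40:50 and again at 03:41:21, nova_compute at 03:41:05 and again at 03:41:36, i.e. roughly every 30 seconds, consistent with podman's default healthcheck interval. A sketch that recovers the per-container spacing from the classic syslog prefix (which carries neither year nor timezone, so both are assumed):

import re
import sys
from datetime import datetime

STAMP_RE = re.compile(
    r"^(?P<stamp>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) .*"
    r" container health_status [0-9a-f]{64} .*\bcontainer_name=(?P<name>[^,)]+)"
)

def intervals(lines, year=2025):
    """Yield (container_name, seconds since its previous health check)."""
    last = {}
    for line in lines:
        m = STAMP_RE.search(line)
        if m is None:
            continue
        t = datetime.strptime(f"{year} {m.group('stamp')}", "%Y %b %d %H:%M:%S")
        name = m.group("name")
        if name in last:
            yield name, (t - last[name]).total_seconds()
        last[name] = t

if __name__ == "__main__":
    for name, gap in intervals(sys.stdin):
        print(f"{name}: {gap:.0f}s since previous check")

On the records above this prints gaps of about 30 seconds per container.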
Dec 5 03:41:36 localhost podman[86809]: 2025-12-05 08:41:36.190580872 +0000 UTC m=+0.076950089 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.openshift.expose-services=, container_name=nova_compute, maintainer=OpenStack TripleO Team, architecture=x86_64) Dec 5 03:41:36 localhost podman[86809]: 2025-12-05 08:41:36.222685867 +0000 UTC m=+0.109055074 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, 
com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, container_name=nova_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public) Dec 5 03:41:36 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:41:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
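The health_status and exec_died records above embed each container's full TripleO config_data as a Python-style dict literal inside the label set, so fields such as the healthcheck command and bind mounts can be recovered mechanically. A minimal sketch, assuming the label value has already been copied out of a log line into a string; the abridged label_text below is a hypothetical excerpt of the nova_compute entry, not the full label:

```python
import ast

# Hypothetical, abridged excerpt of the config_data label from the
# nova_compute health_status record above.
label_text = (
    "{'depends_on': ['tripleo_nova_libvirt.target'], "
    "'healthcheck': {'test': '/openstack/healthcheck 5672'}, "
    "'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', "
    "'volumes': ['/etc/hosts:/etc/hosts:ro', '/var/lib/nova:/var/lib/nova:shared']}"
)

config = ast.literal_eval(label_text)  # parses literals only; never executes code
print(config['healthcheck']['test'])   # -> /openstack/healthcheck 5672
for volume in config['volumes']:
    source, dest, *options = volume.split(':')
    print(f'{source} -> {dest} {options}')
```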
Dec 5 03:41:38 localhost podman[86836]: 2025-12-05 08:41:38.180978996 +0000 UTC m=+0.068672576 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:41:38 localhost podman[86836]: 2025-12-05 08:41:38.204013392 +0000 UTC m=+0.091706942 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64) Dec 5 03:41:38 localhost systemd[1]: tmp-crun.9unibe.mount: Deactivated successfully. 
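Each record carries two clocks: the syslog prefix (Dec 5 03:41:38) is the host's local time, while podman stamps UTC (2025-12-05 08:41:38 +0000), a constant five-hour offset in these records. A small sketch for normalizing the prefix to UTC when correlating the two; the fixed UTC-5 zone and the year are assumptions read off these records, since the syslog prefix carries neither:

```python
from datetime import datetime, timedelta, timezone

# The 03:41 local vs 08:41 UTC gap in the records above pins the host
# clock to UTC-5; assumed constant for this extract.
LOCAL = timezone(timedelta(hours=-5))

def syslog_to_utc(prefix: str, year: int = 2025) -> datetime:
    stamp = datetime.strptime(f'{year} {prefix}', '%Y %b %d %H:%M:%S')
    return stamp.replace(tzinfo=LOCAL).astimezone(timezone.utc)

print(syslog_to_utc('Dec 5 03:41:38'))  # 2025-12-05 08:41:38+00:00
```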
Dec 5 03:41:38 localhost podman[86840]: 2025-12-05 08:41:38.241504332 +0000 UTC m=+0.121600829 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, architecture=x86_64) Dec 5 03:41:38 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:41:38 localhost podman[86840]: 2025-12-05 08:41:38.298761587 +0000 UTC m=+0.178858094 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4) Dec 5 03:41:38 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 03:41:38 localhost podman[86837]: 2025-12-05 08:41:38.341713383 +0000 UTC m=+0.223724139 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Dec 5 03:41:38 localhost podman[86837]: 2025-12-05 08:41:38.348732929 +0000 UTC m=+0.230743705 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, version=17.1.12, 
tcib_managed=true, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:41:38 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:41:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:41:39 localhost podman[86910]: 2025-12-05 08:41:39.180412273 +0000 UTC m=+0.068138920 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Dec 5 03:41:39 localhost podman[86910]: 2025-12-05 08:41:39.510546583 +0000 UTC m=+0.398273240 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 5 03:41:39 localhost systemd[1]: 
94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:41:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:41:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:41:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:41:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:41:44 localhost systemd[1]: tmp-crun.pz9igR.mount: Deactivated successfully. Dec 5 03:41:44 localhost podman[87011]: 2025-12-05 08:41:44.22172463 +0000 UTC m=+0.102297137 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, distribution-scope=public, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=) Dec 5 03:41:44 localhost systemd[1]: tmp-crun.lDwiX7.mount: Deactivated successfully. Dec 5 03:41:44 localhost podman[87011]: 2025-12-05 08:41:44.308427468 +0000 UTC m=+0.189000005 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 5 03:41:44 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
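Every healthcheck appears as the same four-step sequence keyed by the container's 64-hex ID: systemd starts the transient unit, podman logs health_status, then exec_died, then systemd reports the unit deactivated. A minimal sketch that buckets journal lines by that ID, assuming the journal has been saved to a file with one record per line (the path is an assumption):

```python
import re
from collections import defaultdict

# Every step of a healthcheck cycle names the same 64-hex container ID:
# Started -> health_status -> exec_died -> Deactivated successfully.
CID = re.compile(r'\b[0-9a-f]{64}\b')

def group_by_container(lines):
    """Bucket journal lines by the container ID they mention."""
    events = defaultdict(list)
    for line in lines:
        match = CID.search(line)
        if match:
            events[match.group(0)].append(line.rstrip())
    return events

# Usage against a saved extract of this journal (filename is hypothetical):
with open('messages.log') as fh:
    for cid, lines in group_by_container(fh).items():
        print(cid[:12], len(lines), 'events')
```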
Dec 5 03:41:44 localhost podman[87013]: 2025-12-05 08:41:44.284078922 +0000 UTC m=+0.163930747 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, tcib_managed=true, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12) Dec 5 03:41:44 localhost podman[87014]: 2025-12-05 08:41:44.310404339 +0000 UTC m=+0.182267079 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:41:44 localhost podman[87013]: 2025-12-05 08:41:44.366798877 +0000 UTC m=+0.246650702 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 5 03:41:44 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:41:44 localhost podman[87012]: 2025-12-05 08:41:44.423538846 +0000 UTC m=+0.303046890 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=iscsid, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible) Dec 5 03:41:44 localhost podman[87012]: 
2025-12-05 08:41:44.436755072 +0000 UTC m=+0.316263096 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:41:44 localhost podman[87014]: 2025-12-05 08:41:44.440493946 +0000 UTC m=+0.312356696 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 03:41:44 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:41:44 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:41:49 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:41:49 localhost recover_tripleo_nova_virtqemud[87139]: 61294 Dec 5 03:41:49 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:41:49 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:41:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:41:52 localhost systemd[1]: tmp-crun.p3NLen.mount: Deactivated successfully. 
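Start-to-start spacing of the healthcheck units is roughly 31 seconds per container in this window (nova_compute at 03:41:36 and 03:42:07; the ceilometer/cron trio at 03:41:38 and 03:42:09), consistent with podman's default 30-second healthcheck interval plus run time. A sketch that measures this cadence from the Started records, again assuming one record per line; the year default is an assumption:

```python
import re
from datetime import datetime

# Matches e.g. "Dec 5 03:41:36 localhost systemd[1]: Started /usr/bin/podman
# healthcheck run 34a5cf22..." -- assumes one journal record per line.
STARTED = re.compile(
    r'^(\w{3} +\d+ [\d:]{8}) \S+ systemd\[1\]: '
    r'Started /usr/bin/podman healthcheck run ([0-9a-f]{64})')

def healthcheck_cadence(lines, year=2025):
    """Yield (short_id, seconds since that container's previous check start)."""
    last = {}
    for line in lines:
        m = STARTED.match(line)
        if not m:
            continue
        when = datetime.strptime(f'{year} {m.group(1)}', '%Y %b %d %H:%M:%S')
        cid = m.group(2)
        if cid in last:
            yield cid[:12], (when - last[cid]).total_seconds()
        last[cid] = when
```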
Dec 5 03:41:52 localhost podman[87140]: 2025-12-05 08:41:52.227671745 +0000 UTC m=+0.104058091 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=metrics_qdr, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_id=tripleo_step1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, version=17.1.12) Dec 5 03:41:52 localhost podman[87140]: 2025-12-05 08:41:52.456957954 +0000 UTC m=+0.333344190 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Dec 5 03:41:52 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:42:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 03:42:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.1 total, 600.0 interval
Cumulative writes: 4751 writes, 21K keys, 4751 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4751 writes, 573 syncs, 8.29 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 285 writes, 1056 keys, 285 commit groups, 1.0 writes per commit group, ingest: 1.34 MB, 0.00 MB/s
Interval WAL: 285 writes, 110 syncs, 2.59 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 03:42:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:42:07 localhost systemd[1]: tmp-crun.Xgpzgu.mount: Deactivated successfully.
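The derived figures in the RocksDB dump above are straightforward to cross-check: "writes per sync" is writes divided by syncs, and the MB/s columns are ingest divided by uptime. For ceph-osd[31386]:

```python
# Cross-checking ceph-osd[31386]'s DB Stats dump above.
cum_writes, cum_syncs = 4751, 573
int_writes, int_syncs = 285, 110

print(round(cum_writes / cum_syncs, 2))       # 8.29 writes per sync, as logged
print(round(int_writes / int_syncs, 2))       # 2.59 writes per sync, as logged

ingest_gb, uptime_s = 0.02, 3000.1
print(round(ingest_gb * 1024 / uptime_s, 2))  # ~0.01 MB/s, as logged
```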
Dec 5 03:42:07 localhost podman[87170]: 2025-12-05 08:42:07.208606743 +0000 UTC m=+0.092720513 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step5, release=1761123044, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true) Dec 5 03:42:07 localhost podman[87170]: 2025-12-05 08:42:07.242943305 +0000 UTC m=+0.127057055 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:42:07 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. 
Dec 5 03:42:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 03:42:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.2 total, 600.0 interval
Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 5843 writes, 832 syncs, 7.02 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 677 writes, 2740 keys, 677 commit groups, 1.0 writes per commit group, ingest: 3.43 MB, 0.01 MB/s
Interval WAL: 677 writes, 238 syncs, 2.84 writes per sync, written: 0.00 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.
Dec 5 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 03:42:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
Dec 5 03:42:09 localhost systemd[1]: tmp-crun.jAEl2q.mount: Deactivated successfully.
Dec 5 03:42:09 localhost podman[87198]: 2025-12-05 08:42:09.224406506 +0000 UTC m=+0.107591879 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1761123044, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro',
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public) Dec 5 03:42:09 localhost podman[87198]: 2025-12-05 08:42:09.258864723 +0000 UTC m=+0.142050116 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, distribution-scope=public, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, container_name=logrotate_crond, config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 5 03:42:09 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 03:42:09 localhost podman[87197]: 2025-12-05 08:42:09.287715547 +0000 UTC m=+0.172360705 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:42:09 localhost podman[87197]: 2025-12-05 08:42:09.318917403 +0000 UTC m=+0.203562541 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 03:42:09 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:42:09 localhost podman[87199]: 2025-12-05 08:42:09.361979473 +0000 UTC m=+0.240219074 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, build-date=2025-11-19T00:12:45Z, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4) Dec 5 03:42:09 localhost podman[87199]: 2025-12-05 08:42:09.395615194 +0000 UTC m=+0.273854825 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 5 03:42:09 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:42:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:42:10 localhost podman[87269]: 2025-12-05 08:42:10.198449975 +0000 UTC m=+0.083655386 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute) Dec 5 03:42:10 localhost systemd[1]: tmp-crun.HxxScm.mount: Deactivated successfully. 
Dec 5 03:42:10 localhost podman[87269]: 2025-12-05 08:42:10.577430872 +0000 UTC m=+0.462636243 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:42:10 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:42:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:42:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:42:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:42:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:42:15 localhost podman[87295]: 2025-12-05 08:42:15.206979002 +0000 UTC m=+0.080771394 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Dec 5 03:42:15 localhost systemd[1]: tmp-crun.3bPUzk.mount: Deactivated successfully. 
Dec 5 03:42:15 localhost podman[87292]: 2025-12-05 08:42:15.268195288 +0000 UTC m=+0.151768208 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4) Dec 5 03:42:15 localhost podman[87293]: 2025-12-05 08:42:15.318836132 +0000 UTC m=+0.199973808 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, 
config_id=tripleo_step3, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:42:15 localhost podman[87293]: 2025-12-05 08:42:15.330509348 +0000 UTC m=+0.211647014 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044) Dec 5 03:42:15 localhost podman[87292]: 2025-12-05 08:42:15.342830543 +0000 UTC m=+0.226403482 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Dec 5 03:42:15 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:42:15 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:42:15 localhost podman[87294]: 2025-12-05 08:42:15.423136672 +0000 UTC m=+0.298851913 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, architecture=x86_64, vcs-type=git, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container) Dec 5 03:42:15 localhost podman[87294]: 2025-12-05 08:42:15.437560742 +0000 UTC m=+0.313276033 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 5 03:42:15 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:42:15 localhost podman[87295]: 2025-12-05 08:42:15.488913018 +0000 UTC m=+0.362705480 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1) Dec 5 03:42:15 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:42:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:42:23 localhost systemd[1]: tmp-crun.4XVh0B.mount: Deactivated successfully. 
Dec 5 03:42:23 localhost podman[87380]: 2025-12-05 08:42:23.212968224 +0000 UTC m=+0.097628688 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:49:46Z, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_id=tripleo_step1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:42:23 localhost podman[87380]: 2025-12-05 08:42:23.412031863 +0000 UTC m=+0.296692307 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, build-date=2025-11-18T22:49:46Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:42:23 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:42:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:42:38 localhost systemd[1]: tmp-crun.2Fp279.mount: Deactivated successfully. 
Dec 5 03:42:38 localhost podman[87408]: 2025-12-05 08:42:38.214339038 +0000 UTC m=+0.095216484 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:36:58Z, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_id=tripleo_step5, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:42:38 localhost podman[87408]: 2025-12-05 08:42:38.272364446 +0000 UTC m=+0.153241882 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:42:38 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:42:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
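Each healthcheck run logged above is a systemd-launched /usr/bin/podman healthcheck run <id>; the result it records can be read back with podman inspect. A small sketch, assuming podman is on PATH; the location of the health block in the inspect output differs across podman versions (newer releases expose the Docker-compatible State.Health, older ones State.Healthcheck), so both are tried:

import json
import subprocess

def health_status(container: str) -> str | None:
    """Read the recorded health of a container, e.g. "healthy"."""
    out = subprocess.run(
        ["podman", "inspect", container],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(out)[0].get("State", {})
    # Newer podman exposes the Docker-compatible "Health" block, older
    # versions used "Healthcheck"; accept either.
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status")

# health_status("nova_compute") would report the same state the
# "health_status=healthy" attribute records in the journal above.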
Dec 5 03:42:40 localhost podman[87443]: 2025-12-05 08:42:40.217832259 +0000 UTC m=+0.092880873 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public) Dec 5 03:42:40 localhost podman[87436]: 2025-12-05 08:42:40.187130303 +0000 UTC m=+0.075332088 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, 
container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 03:42:40 localhost podman[87443]: 2025-12-05 08:42:40.24771316 +0000 UTC m=+0.122761774 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 5 03:42:40 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:42:40 localhost podman[87436]: 2025-12-05 08:42:40.267963987 +0000 UTC m=+0.156165862 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible) Dec 5 03:42:40 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:42:40 localhost podman[87437]: 2025-12-05 08:42:40.361308303 +0000 UTC m=+0.242099422 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com) Dec 5 03:42:40 localhost podman[87437]: 2025-12-05 08:42:40.374588368 +0000 UTC m=+0.255379507 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, name=rhosp17/openstack-cron, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:42:40 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:42:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:42:41 localhost systemd[1]: tmp-crun.YmPBZv.mount: Deactivated successfully. 
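The config_data attribute embedded in every event above is a Python dict literal (single-quoted strings, True/False), so it can be recovered mechanically rather than read by eye. A sketch that cuts the blob out with a brace counter and parses it with ast.literal_eval; this assumes no braces occur inside string values, which holds for these lines:

import ast

def extract_config_data(line: str) -> dict | None:
    """Pull the config_data={...} blob out of one log line and parse it."""
    start = line.find("config_data={")
    if start < 0:
        return None
    i = start + len("config_data=")          # index of the opening brace
    depth = 0
    for j, ch in enumerate(line[i:], i):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # The blob is a plain dict literal, so literal_eval
                # parses it safely (no code execution).
                return ast.literal_eval(line[i:j + 1])
    return None

# e.g. cfg = extract_config_data(line)
# cfg["healthcheck"]["test"] -> '/openstack/healthcheck 5672' for
# nova_compute; cfg["volumes"] lists the container's bind mounts.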
Dec 5 03:42:41 localhost podman[87508]: 2025-12-05 08:42:41.186326246 +0000 UTC m=+0.075886564 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:42:41 localhost podman[87508]: 2025-12-05 08:42:41.586842457 +0000 UTC m=+0.476402815 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:42:41 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:42:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:42:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:42:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:42:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:42:46 localhost systemd[1]: tmp-crun.Pkw93V.mount: Deactivated successfully. 
Dec 5 03:42:46 localhost podman[87608]: 2025-12-05 08:42:46.28070569 +0000 UTC m=+0.161573618 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public) Dec 5 03:42:46 localhost podman[87610]: 2025-12-05 08:42:46.323904707 +0000 UTC m=+0.198963647 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:42:46 localhost podman[87607]: 2025-12-05 08:42:46.23215465 +0000 UTC m=+0.115238004 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, tcib_managed=true) Dec 5 03:42:46 localhost podman[87608]: 2025-12-05 08:42:46.368893609 +0000 UTC m=+0.249761517 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible) Dec 5 03:42:46 localhost podman[87610]: 2025-12-05 08:42:46.37778853 +0000 UTC m=+0.252847470 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, 
name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, version=17.1.12, container_name=ovn_controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4) Dec 5 03:42:46 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:42:46 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
Dec 5 03:42:46 localhost podman[87607]: 2025-12-05 08:42:46.418710177 +0000 UTC m=+0.301793481 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 5 03:42:46 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
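Besides the wall-clock stamp, each podman line carries a monotonic m=+<seconds> offset measured from that podman process's start. For a single healthcheck process, subtracting the health_status offset from the exec_died offset gives the gap between the two events: for podman[87607] above, 0.301793481 - 0.115238004 ≈ 0.19 s. A one-function sketch for pulling the offset out:

import re

M_RE = re.compile(r"\bm=\+([0-9.]+)")

def monotonic_offset(line: str) -> float | None:
    """podman's m=+<offset>: seconds since that podman process started."""
    hit = M_RE.search(line)
    return float(hit.group(1)) if hit else None

# Pairing the two events of one healthcheck PID and differencing their
# offsets gives the ~0.19 s gap computed for podman[87607] above.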
Dec 5 03:42:46 localhost podman[87609]: 2025-12-05 08:42:46.477346845 +0000 UTC m=+0.354789007 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, vcs-type=git, container_name=collectd, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-collectd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:42:46 localhost podman[87609]: 2025-12-05 08:42:46.51457221 +0000 UTC m=+0.392014372 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 5 03:42:46 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:42:47 localhost systemd[1]: tmp-crun.Kie1DQ.mount: Deactivated successfully. Dec 5 03:42:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:42:54 localhost systemd[1]: tmp-crun.o1q8ss.mount: Deactivated successfully. 
Dec 5 03:42:54 localhost podman[87739]: 2025-12-05 08:42:54.210826859 +0000 UTC m=+0.093294565 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, container_name=metrics_qdr, version=17.1.12, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 5 03:42:54 localhost podman[87739]: 2025-12-05 08:42:54.402221134 +0000 UTC m=+0.284688850 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:42:54 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:43:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:43:09 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:43:09 localhost recover_tripleo_nova_virtqemud[87774]: 61294 Dec 5 03:43:09 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:43:09 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:43:09 localhost systemd[1]: tmp-crun.6jwJmk.mount: Deactivated successfully. 
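The tripleo_nova_virtqemud_recover oneshot above starts, logs a bare number that looks like a PID (61294), and exits successfully. The script itself is not shown in this log; the following is a hypothetical reconstruction of such a check-and-recover step, using only standard pgrep/systemctl calls, with the recovery policy itself an assumption:

import subprocess

def check_and_recover(process: str = "virtqemud",
                      unit: str = "tripleo_nova_virtqemud.service") -> None:
    """Hypothetical: log the PID if the daemon is up, otherwise restart
    its unit. The real TripleO recover script may differ."""
    pgrep = subprocess.run(["pgrep", "-f", process],
                           capture_output=True, text=True)
    if pgrep.returncode == 0:
        # A healthy run would just print a PID, matching the bare
        # "61294" the recover service logs above.
        print(pgrep.stdout.split()[0])
    else:
        subprocess.run(["systemctl", "restart", unit], check=False)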
Dec 5 03:43:09 localhost podman[87767]: 2025-12-05 08:43:09.219742062 +0000 UTC m=+0.100604138 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vcs-type=git, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute) Dec 5 03:43:09 localhost podman[87767]: 2025-12-05 08:43:09.255867113 +0000 UTC m=+0.136729189 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:43:09 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
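Each podman[...] record above is a container event: a wall-clock timestamp plus a monotonic offset (m=+...), the event type (health_status or exec_died), the 64-hex container ID, and the full label set in parentheses. A rough parser for these lines (a sketch, not a full grammar: label values such as config_data and vendor themselves contain commas, so only targeted keys are pulled out rather than splitting the whole list):

import re

EVENT_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+\S+ "
    r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) \((?P<labels>.*)\)$"
)

def parse_event(line: str) -> dict | None:
    m = EVENT_RE.search(line)
    if not m:
        return None
    labels = m.group("labels")

    def label(key: str) -> str | None:
        # Anchor on ", key=" (or start) so e.g. "name" does not match
        # inside "container_name"; stop at the next comma or ")".
        lm = re.search(rf"(?:^|, ){re.escape(key)}=([^,)]*)", labels)
        return lm.group(1) if lm else None

    return {
        "ts": m.group("ts"),
        "event": m.group("event"),       # health_status or exec_died
        "cid": m.group("cid"),
        "name": label("name"),
        "container_name": label("container_name"),
        "health_status": label("health_status"),
    }

# Label list trimmed from the ceilometer_agent_compute record above.
sample = ("2025-12-05 08:43:11.220280794 +0000 UTC m=+0.099468454 container "
          "health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d "
          "(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, "
          "name=ceilometer_agent_compute, health_status=healthy)")
print(parse_event(sample))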
Dec 5 03:43:11 localhost systemd[1]: tmp-crun.WCpN4o.mount: Deactivated successfully. Dec 5 03:43:11 localhost podman[87796]: 2025-12-05 08:43:11.220280794 +0000 UTC m=+0.099468454 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 5 03:43:11 localhost podman[87797]: 2025-12-05 08:43:11.275306171 +0000 UTC m=+0.148443127 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 
'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:43:11 localhost podman[87797]: 2025-12-05 08:43:11.285282115 +0000 UTC m=+0.158419021 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vcs-type=git, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, version=17.1.12) Dec 5 03:43:11 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:43:11 localhost podman[87803]: 2025-12-05 08:43:11.248348249 +0000 UTC m=+0.114792990 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, release=1761123044, version=17.1.12, build-date=2025-11-19T00:12:45Z, vcs-type=git) Dec 5 03:43:11 localhost podman[87803]: 2025-12-05 
08:43:11.332699911 +0000 UTC m=+0.199144642 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:43:11 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 03:43:11 localhost podman[87796]: 2025-12-05 08:43:11.387591444 +0000 UTC m=+0.266779104 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.) Dec 5 03:43:11 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:43:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:43:12 localhost systemd[1]: tmp-crun.EIHI3b.mount: Deactivated successfully. 
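The config_data label embedded in these events is a Python-literal dict describing how tripleo_ansible configured each container: the Kolla COPY_ALWAYS strategy, the healthcheck command, and the volume list with its mount options (ro, z, shared). Because it is a literal, it can be loaded with ast.literal_eval; a sketch that summarizes the mount plan, with the dict trimmed to a few volumes from the nova_migration_target record below:

import ast

config_data = ast.literal_eval(
    "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
    "'healthcheck': {'test': '/openstack/healthcheck'}, "
    "'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', "
    "'net': 'host', 'privileged': True, "
    "'volumes': ['/etc/hosts:/etc/hosts:ro', "
    "'/run/libvirt:/run/libvirt:shared,z', "
    "'/var/lib/nova:/var/lib/nova:shared']}"
)

# Each volume entry is src:dst[:options]; absent options default to rw.
for volume in config_data["volumes"]:
    src, dst, *opts = volume.split(":")
    mode = opts[0] if opts else "rw"
    print(f"{src:<25} -> {dst:<25} [{mode}]")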
Dec 5 03:43:12 localhost podman[87868]: 2025-12-05 08:43:12.202218601 +0000 UTC m=+0.088487509 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:43:12 localhost podman[87868]: 2025-12-05 08:43:12.571624593 +0000 UTC m=+0.457893531 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Dec 5 03:43:12 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:43:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:43:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:43:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:43:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:43:17 localhost systemd[1]: tmp-crun.LKbCvy.mount: Deactivated successfully. 
Dec 5 03:43:17 localhost podman[87894]: 2025-12-05 08:43:17.212855863 +0000 UTC m=+0.089116198 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, release=1761123044, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:43:17 localhost podman[87894]: 2025-12-05 08:43:17.221389913 +0000 UTC m=+0.097650288 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:43:17 localhost systemd[1]: tmp-crun.hl7ZjZ.mount: Deactivated successfully. 
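The same config_data keys map naturally onto podman run flags: net and pid select namespaces, privileged and user set privileges, memory caps the container, and each volumes entry becomes a --volume bind mount. The following is a simplified illustration of that correspondence, not the actual tripleo_ansible code path:

def podman_args(name: str, cfg: dict) -> list[str]:
    # Translate a subset of config_data keys into podman run flags.
    # Keys like depends_on, start_order and ulimit are ignored here.
    args = ["podman", "run", "--detach", "--name", name]
    if cfg.get("net"):
        args += ["--net", cfg["net"]]
    if cfg.get("pid"):
        args += ["--pid", cfg["pid"]]
    if cfg.get("privileged"):
        args.append("--privileged")
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    if cfg.get("memory"):
        args += ["--memory", cfg["memory"]]
    for key, value in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    for volume in cfg.get("volumes", []):
        args += ["--volume", volume]
    if "test" in cfg.get("healthcheck", {}):
        args += ["--health-cmd", cfg["healthcheck"]["test"]]
    args.append(cfg["image"])
    return args

# Trimmed from the collectd record above.
collectd_cfg = {
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "healthcheck": {"test": "/openstack/healthcheck"},
    "image": "registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1",
    "memory": "512m", "net": "host", "pid": "host", "user": "root",
    "volumes": ["/run:/run:rw", "/sys/fs/cgroup:/sys/fs/cgroup:ro"],
}
print(" ".join(podman_args("collectd", collectd_cfg)))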
Dec 5 03:43:17 localhost podman[87893]: 2025-12-05 08:43:17.258471863 +0000 UTC m=+0.138537154 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, tcib_managed=true, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:43:17 localhost podman[87893]: 2025-12-05 08:43:17.297675489 +0000 UTC m=+0.177740770 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12) Dec 5 03:43:17 localhost podman[87895]: 2025-12-05 08:43:17.304272029 +0000 UTC m=+0.175155940 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red 
Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container) Dec 5 03:43:17 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:43:17 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:43:17 localhost podman[87895]: 2025-12-05 08:43:17.379580125 +0000 UTC m=+0.250463986 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:43:17 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
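Successive health_status events give each container's effective check cadence: nova_compute is checked at 03:43:09 and again at 03:43:40, logrotate_crond at 03:43:11 and 03:43:42, roughly 31 seconds apart, which is consistent with podman's default 30-second healthcheck interval plus scheduling jitter. A sketch that computes those gaps from raw journal lines (timestamps are truncated to microseconds, since strptime's %f accepts at most six fractional digits):

import re
import sys
from collections import defaultdict
from datetime import datetime

HEALTH_RE = re.compile(
    r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6})\d* \+0000 UTC .*? "
    r"container health_status [0-9a-f]{64} \(.*?container_name=([^,)]+)"
)

def intervals(lines):
    # Map container_name -> list of seconds between consecutive checks.
    seen, gaps = {}, defaultdict(list)
    for line in lines:
        m = HEALTH_RE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
        name = m.group(2)
        if name in seen:
            gaps[name].append((ts - seen[name]).total_seconds())
        seen[name] = ts
    return dict(gaps)

if __name__ == "__main__":
    # Usage: pipe the journal through, e.g.  python3 gaps.py < messages
    for name, gs in intervals(sys.stdin.read().splitlines()).items():
        print(name, [round(g, 1) for g in gs])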
Dec 5 03:43:17 localhost podman[87892]: 2025-12-05 08:43:17.469405954 +0000 UTC m=+0.351563639 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=) Dec 5 03:43:17 localhost podman[87892]: 2025-12-05 08:43:17.515230982 +0000 UTC m=+0.397388687 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:43:17 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:43:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:43:25 localhost systemd[1]: tmp-crun.8MUdgH.mount: Deactivated successfully. 
Dec 5 03:43:25 localhost podman[87981]: 2025-12-05 08:43:25.208803128 +0000 UTC m=+0.095628627 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 5 03:43:25 localhost podman[87981]: 2025-12-05 08:43:25.4178016 +0000 UTC m=+0.304627069 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:43:25 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:43:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
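The transient units are keyed by the full container ID, so lines like "Started /usr/bin/podman healthcheck run 34a5cf22..." and "34a5cf22....service: Deactivated successfully." only become readable once IDs are resolved to names. A small sketch that builds that mapping, assuming the podman CLI is available on the host being inspected:

import json
import subprocess

# podman ps --format json emits a list of objects with "Id" (full
# 64-hex ID) and "Names"; --all includes stopped containers too.
out = subprocess.run(
    ["podman", "ps", "--all", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

id_to_name = {c["Id"]: c["Names"][0] for c in json.loads(out)}
print(id_to_name.get(
    "34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c",
    "<not present on this host>",
))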
Dec 5 03:43:40 localhost podman[88011]: 2025-12-05 08:43:40.201612282 +0000 UTC m=+0.086614862 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, container_name=nova_compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com) Dec 5 03:43:40 localhost podman[88011]: 2025-12-05 08:43:40.236770714 +0000 UTC m=+0.121773274 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, release=1761123044, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container) Dec 5 03:43:40 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
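Note: the config_data label podman prints with each event is a Python-style dict literal rather than JSON (single quotes, bare True/False), so ast.literal_eval parses it where json.loads would fail. A small sketch using values copied from the nova_compute entry above; the label string is shortened for illustration, but the full label parses the same way:

import ast

label = ("{'healthcheck': {'test': '/openstack/healthcheck 5672'}, "
         "'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', "
         "'net': 'host', 'privileged': True, 'user': 'nova'}")

config = ast.literal_eval(label)      # json.loads(label) would raise here
print(config["healthcheck"]["test"])  # /openstack/healthcheck 5672
print(config["privileged"])           # True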
Dec 5 03:43:42 localhost systemd[1]: tmp-crun.wUpHLy.mount: Deactivated successfully. Dec 5 03:43:42 localhost systemd[1]: tmp-crun.mVnS1T.mount: Deactivated successfully. Dec 5 03:43:42 localhost podman[88038]: 2025-12-05 08:43:42.262915896 +0000 UTC m=+0.147707205 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:43:42 localhost podman[88037]: 2025-12-05 08:43:42.223125533 +0000 UTC m=+0.111813310 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z) Dec 5 03:43:42 localhost podman[88038]: 2025-12-05 08:43:42.299736198 +0000 UTC m=+0.184527467 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vcs-type=git) Dec 5 03:43:42 localhost podman[88039]: 2025-12-05 08:43:42.309912799 +0000 UTC m=+0.189368535 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=) Dec 5 03:43:42 localhost systemd[1]: 
808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:43:42 localhost podman[88039]: 2025-12-05 08:43:42.345784992 +0000 UTC m=+0.225240748 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vcs-type=git) Dec 5 03:43:42 localhost podman[88037]: 2025-12-05 08:43:42.35620457 +0000 UTC m=+0.244892337 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64) Dec 5 03:43:42 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:43:42 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:43:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
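Note: the entries above show three healthcheck variants: /openstack/healthcheck with no argument (ceilometer, qdrouterd), /openstack/healthcheck with a port argument (5672 for nova_compute), and a service-specific script for cron (/usr/share/openstack-tripleo-common/healthcheck/cron). A sketch that maps each container to its configured test command from lines like these; the regexes and function name are assumptions:

import re

NAME = re.compile(r"container_name=([A-Za-z0-9_]+)")
TEST = re.compile(r"'healthcheck': \{'test': '([^']+)'\}")

def healthcheck_commands(lines):
    """Map container_name -> configured healthcheck test command."""
    commands = {}
    for line in lines:
        name, test = NAME.search(line), TEST.search(line)
        if name and test:
            commands[name.group(1)] = test.group(1)
    return commands

# e.g. {'nova_compute': '/openstack/healthcheck 5672',
#       'logrotate_crond': '/usr/share/openstack-tripleo-common/healthcheck/cron', ...}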
Dec 5 03:43:43 localhost podman[88110]: 2025-12-05 08:43:43.202019067 +0000 UTC m=+0.088698876 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:43:43 localhost podman[88110]: 2025-12-05 08:43:43.571949665 +0000 UTC m=+0.458629474 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:43:43 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:43:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:43:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:43:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:43:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:43:48 localhost systemd[1]: tmp-crun.aUCvQy.mount: Deactivated successfully. 
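Note: the two clocks in each entry differ by five hours: the syslog prefix is local time (Dec 5 03:43:48) while podman prints UTC (08:43:48 +0000 UTC), which is consistent with a UTC-5 zone such as US Eastern in December. A quick check with zoneinfo; the specific zone name is an assumption, since any UTC-5 zone matches:

from datetime import datetime
from zoneinfo import ZoneInfo

utc = datetime(2025, 12, 5, 8, 43, 48, tzinfo=ZoneInfo("UTC"))
print(utc.astimezone(ZoneInfo("America/New_York")))
# 2025-12-05 03:43:48-05:00  -- matches the "Dec 5 03:43:48" syslog prefix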
Dec 5 03:43:48 localhost podman[88285]: 2025-12-05 08:43:48.289610835 +0000 UTC m=+0.156496383 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:43:48 localhost podman[88282]: 2025-12-05 08:43:48.247547542 +0000 UTC m=+0.126453426 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, release=1761123044, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Dec 5 03:43:48 localhost podman[88282]: 2025-12-05 08:43:48.333893665 +0000 UTC m=+0.212799519 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Dec 5 03:43:48 localhost podman[88285]: 2025-12-05 08:43:48.345026885 +0000 UTC m=+0.211912463 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) Dec 5 03:43:48 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:43:48 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:43:48 localhost podman[88283]: 2025-12-05 08:43:48.429362356 +0000 UTC m=+0.305377262 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, config_id=tripleo_step3, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid) Dec 5 03:43:48 localhost podman[88283]: 2025-12-05 08:43:48.439981 +0000 UTC m=+0.315995916 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, release=1761123044, version=17.1.12, architecture=x86_64, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid) Dec 5 03:43:48 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
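Note: the volume specs in these config_data labels use podman's src:dst[:options] form: ro mounts read-only, z asks podman to SELinux-relabel the content as shared between containers, and shared sets bind-mount propagation (the iscsid entry above combines several of these). A small parser sketch:

def parse_volume(spec: str):
    """Split a podman volume spec 'src:dst[:opt,opt]' into its parts."""
    src, dst, *rest = spec.split(":")
    options = rest[0].split(",") if rest else []
    return src, dst, options

# Specs taken from the iscsid config_data above:
print(parse_volume("/var/lib/iscsi:/var/lib/iscsi:z"))
# ('/var/lib/iscsi', '/var/lib/iscsi', ['z'])
print(parse_volume("/etc/hosts:/etc/hosts:ro"))
# ('/etc/hosts', '/etc/hosts', ['ro'])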
Dec 5 03:43:48 localhost podman[88284]: 2025-12-05 08:43:48.488369915 +0000 UTC m=+0.361401840 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:51:28Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, container_name=collectd, distribution-scope=public, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd) Dec 5 03:43:48 localhost podman[88284]: 2025-12-05 08:43:48.505735574 +0000 UTC m=+0.378767519 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack 
TripleO Team, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step3, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:43:48 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:43:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
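Note: these config_data blocks are the input tripleo_ansible uses to create the podman containers, and most keys map directly onto podman-run flags (net to --network, memory to --memory, cap_add to --cap-add, and so on). A rough, illustrative mapping for the collectd entry above; this is a sketch of the idea, not the actual tripleo_ansible code:

def podman_run_args(name: str, config: dict) -> list[str]:
    """Illustrative translation of a config_data dict into podman-run flags."""
    args = ["podman", "run", "--detach", "--name", name]
    for cap in config.get("cap_add", []):
        args += ["--cap-add", cap]               # collectd: IPC_LOCK
    if "memory" in config:
        args += ["--memory", config["memory"]]   # collectd: 512m
    if "net" in config:
        args += ["--network", config["net"]]
    if "pid" in config:
        args += ["--pid", config["pid"]]
    for key, value in config.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    for volume in config.get("volumes", []):
        args += ["--volume", volume]
    return args + [config["image"]]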
Dec 5 03:43:56 localhost podman[88392]: 2025-12-05 08:43:56.208651956 +0000 UTC m=+0.092603484 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) 
Dec 5 03:43:56 localhost podman[88392]: 2025-12-05 08:43:56.409700256 +0000 UTC m=+0.293651764 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=metrics_qdr, url=https://www.redhat.com, release=1761123044, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Dec 5 03:43:56 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:44:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
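Note: each container's environment carries a TRIPLEO_CONFIG_HASH, and in the labels above nova_compute's value is the iscsid hash and the nova_libvirt hash (as seen on nova_migration_target) joined with a dash, matching the two puppet-generated config trees nova_compute mounts. The values below are copied from the log; the interpretation that the hash tracks the mounted config is an assumption:

# Hashes as they appear in the config_data labels above.
iscsid_hash       = "c48ee961a201e2ecc5561337e7450232"   # iscsid container
nova_libvirt_hash = "f3fe7c52055154c7f97b988e301af0d7"   # nova_migration_target
nova_compute_hash = "c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7"

# nova_compute mounts both config trees, and its hash is their concatenation.
assert nova_compute_hash == f"{iscsid_hash}-{nova_libvirt_hash}"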
Dec 5 03:44:11 localhost podman[88421]: 2025-12-05 08:44:11.205691858 +0000 UTC m=+0.091003735 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1) Dec 5 03:44:11 localhost podman[88421]: 2025-12-05 08:44:11.238650753 +0000 UTC m=+0.123962690 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Dec 5 03:44:11 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
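Note: the two nova_compute checks above land at 08:43:40.20 and 08:44:11.20 UTC, about 31 s apart, which is consistent with podman's default 30 s healthcheck interval plus the time the check itself takes. A quick computation with the two timestamps copied from the log (truncated to microseconds for strptime):

from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
t1 = datetime.strptime("2025-12-05 08:43:40.201612", fmt)
t2 = datetime.strptime("2025-12-05 08:44:11.205691", fmt)
print(f"{(t2 - t1).total_seconds():.1f} s")  # 31.0 s between successive checks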
Dec 5 03:44:13 localhost systemd[1]: tmp-crun.8fXmRU.mount: Deactivated successfully. Dec 5 03:44:13 localhost podman[88449]: 2025-12-05 08:44:13.188300043 +0000 UTC m=+0.074166912 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:44:13 localhost podman[88447]: 2025-12-05 08:44:13.251961794 +0000 UTC m=+0.140795713 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:44:13 localhost podman[88447]: 2025-12-05 08:44:13.288623762 +0000 UTC m=+0.177457631 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:44:13 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:44:13 localhost podman[88448]: 2025-12-05 08:44:13.311999524 +0000 UTC m=+0.195844032 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, name=rhosp17/openstack-cron, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:44:13 localhost podman[88449]: 2025-12-05 08:44:13.320757111 +0000 UTC m=+0.206624020 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true) Dec 5 03:44:13 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 03:44:13 localhost podman[88448]: 2025-12-05 08:44:13.37615749 +0000 UTC m=+0.260002008 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, tcib_managed=true, container_name=logrotate_crond, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vcs-type=git) Dec 5 03:44:13 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:44:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:44:14 localhost podman[88520]: 2025-12-05 08:44:14.189675213 +0000 UTC m=+0.076796173 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=nova_migration_target, release=1761123044, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:36:58Z) Dec 5 03:44:14 localhost podman[88520]: 2025-12-05 08:44:14.578688302 +0000 UTC m=+0.465809252 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., 
release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=) Dec 5 03:44:14 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:44:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:44:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:44:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:44:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:44:19 localhost systemd[1]: tmp-crun.W6vuUy.mount: Deactivated successfully. 
Dec 5 03:44:19 localhost podman[88543]: 2025-12-05 08:44:19.214234386 +0000 UTC m=+0.090807469 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public) Dec 5 03:44:19 localhost podman[88546]: 2025-12-05 08:44:19.219074714 +0000 UTC m=+0.085022892 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:44:19 localhost podman[88544]: 2025-12-05 08:44:19.295992859 +0000 UTC m=+0.170239850 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public) Dec 5 03:44:19 localhost podman[88544]: 2025-12-05 08:44:19.307699217 +0000 UTC m=+0.181946228 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=iscsid, build-date=2025-11-18T23:44:13Z) Dec 5 03:44:19 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:44:19 localhost podman[88545]: 2025-12-05 08:44:19.354388569 +0000 UTC m=+0.228667361 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, tcib_managed=true, container_name=collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Dec 5 03:44:19 localhost podman[88545]: 2025-12-05 08:44:19.362768205 +0000 UTC m=+0.237046977 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1) Dec 5 03:44:19 localhost podman[88543]: 2025-12-05 08:44:19.372063639 +0000 UTC m=+0.248636752 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com) Dec 5 03:44:19 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:44:19 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:44:19 localhost podman[88546]: 2025-12-05 08:44:19.422697283 +0000 UTC m=+0.288645511 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, release=1761123044, container_name=ovn_controller, distribution-scope=public, tcib_managed=true, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) Dec 5 03:44:19 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:44:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:44:27 localhost podman[88629]: 2025-12-05 08:44:27.194756532 +0000 UTC m=+0.077093952 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc.) 
Dec 5 03:44:27 localhost podman[88629]: 2025-12-05 08:44:27.391631494 +0000 UTC m=+0.273968904 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, container_name=metrics_qdr, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git) Dec 5 03:44:27 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:44:29 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:44:29 localhost recover_tripleo_nova_virtqemud[88659]: 61294 Dec 5 03:44:29 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:44:29 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:44:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:44:42 localhost podman[88660]: 2025-12-05 08:44:42.206404619 +0000 UTC m=+0.088871340 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public) Dec 5 03:44:42 localhost podman[88660]: 2025-12-05 08:44:42.241760097 +0000 UTC m=+0.124226828 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:44:42 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:44:44 localhost systemd[1]: tmp-crun.zlP7C9.mount: Deactivated successfully. Dec 5 03:44:44 localhost podman[88688]: 2025-12-05 08:44:44.235942044 +0000 UTC m=+0.124200937 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, architecture=x86_64, name=rhosp17/openstack-cron, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:44:44 localhost podman[88687]: 2025-12-05 08:44:44.188373174 +0000 UTC m=+0.077017408 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, version=17.1.12) Dec 5 03:44:44 localhost podman[88688]: 2025-12-05 08:44:44.245068962 +0000 UTC m=+0.133327785 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, architecture=x86_64, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:44:44 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:44:44 localhost podman[88689]: 2025-12-05 08:44:44.215153531 +0000 UTC m=+0.097534555 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.) 
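A detail worth noting in all of these events: the config_data payload is the container's TripleO definition serialized as a Python literal (single-quoted strings, True/False), not JSON, so json.loads rejects it while ast.literal_eval parses it directly. A small sketch, assuming the brace-balanced {...} has already been cut out of the log line:

import ast

def parse_config_data(payload: str) -> dict:
    """Parse a config_data={...} value copied from one of these podman events.

    The payload uses Python literal syntax ('single quotes', True), so
    ast.literal_eval handles it where json.loads would not.
    """
    return ast.literal_eval(payload)

cfg = parse_config_data(
    "{'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, "
    "'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', "
    "'net': 'none', 'pid': 'host', 'privileged': True, 'user': 'root'}"
)
print(cfg["healthcheck"]["test"])  # /usr/share/openstack-tripleo-common/healthcheck/cron
print(cfg["privileged"])           # True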
Dec 5 03:44:44 localhost podman[88687]: 2025-12-05 08:44:44.272637613 +0000 UTC m=+0.161281837 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team) Dec 5 03:44:44 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
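Each of these lines carries two clocks: the syslog prefix (Dec 5 03:44:44) is the host's local time, while podman stamps its events in UTC (08:44:44 +0000), five hours ahead. Converting between them is a one-liner; America/New_York below is only an assumption consistent with the -5 offset, not something the log states:

from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# Podman's UTC stamp from the events above; the zone is an assumption
# (any UTC-5 zone in December matches the syslog prefix).
utc = datetime(2025, 12, 5, 8, 44, 44, tzinfo=ZoneInfo("UTC"))
print(utc.astimezone(ZoneInfo("America/New_York")))
# 2025-12-05 03:44:44-05:00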
Dec 5 03:44:44 localhost podman[88689]: 2025-12-05 08:44:44.295336725 +0000 UTC m=+0.177717719 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 5 03:44:44 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:44:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:44:45 localhost systemd[1]: tmp-crun.eBddt5.mount: Deactivated successfully. 
Dec 5 03:44:45 localhost podman[88757]: 2025-12-05 08:44:45.200098259 +0000 UTC m=+0.089889882 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z) Dec 5 03:44:45 localhost podman[88757]: 2025-12-05 08:44:45.575738281 +0000 UTC m=+0.465529874 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc.) Dec 5 03:44:45 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:44:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:44:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:44:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:44:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
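The m=+0.465529874-style suffix on every podman timestamp is Go's monotonic clock reading: seconds since that podman process started. Within one PID, subtracting the health_status offset from the exec_died offset therefore gives how long the check ran -- about 0.38 s for nova_migration_target above, versus roughly 0.035 s for nova_compute. A sketch:

import re

M_RE = re.compile(r"m=\+(\d+\.\d+)")

def check_duration(health_line: str, died_line: str) -> float:
    """Seconds between health_status and exec_died events from the same
    podman PID, using the monotonic m=+<seconds-since-start> suffix."""
    start = float(M_RE.search(health_line).group(1))
    end = float(M_RE.search(died_line).group(1))
    return end - start

# Offsets copied from the nova_migration_target pair above:
print(check_duration("... m=+0.089889882 ...", "... m=+0.465529874 ..."))
# ~0.3756 -> the healthcheck took about 0.38 s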
Dec 5 03:44:50 localhost podman[88882]: 2025-12-05 08:44:50.212446932 +0000 UTC m=+0.098616688 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:44:50 localhost systemd[1]: tmp-crun.k9RVmP.mount: Deactivated successfully. 
Dec 5 03:44:50 localhost podman[88882]: 2025-12-05 08:44:50.275864156 +0000 UTC m=+0.162033902 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, distribution-scope=public, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 5 03:44:50 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
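The volumes lists in these configs use podman's src:dst[:options] bind syntax: ro mounts read-only, z (shared) and Z (private) request SELinux relabeling, and shared sets mount propagation -- which is why /run/libvirt appears as shared,z while the CA bundles are plain ro. A parsing sketch; parse_volume is a hypothetical helper, and it assumes no colons inside paths, which holds for every spec in this log:

def parse_volume(spec: str):
    """Split a podman src:dst[:options] bind-mount spec from config_data."""
    parts = spec.split(":")
    src, dst = parts[0], parts[1]
    opts = parts[2].split(",") if len(parts) > 2 else []
    return src, dst, opts

for spec in ["/etc/hosts:/etc/hosts:ro",
             "/run/libvirt:/run/libvirt:shared,z",
             "/dev/log:/dev/log"]:
    print(parse_volume(spec))
# ('/etc/hosts', '/etc/hosts', ['ro'])
# ('/run/libvirt', '/run/libvirt', ['shared', 'z'])
# ('/dev/log', '/dev/log', [])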
Dec 5 03:44:50 localhost podman[88884]: 2025-12-05 08:44:50.278588759 +0000 UTC m=+0.157156623 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:44:50 localhost podman[88883]: 2025-12-05 08:44:50.364089466 +0000 UTC m=+0.246510907 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, container_name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, architecture=x86_64, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, release=1761123044, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4) Dec 5 03:44:50 localhost podman[88883]: 2025-12-05 08:44:50.402357012 +0000 UTC m=+0.284778483 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12) Dec 5 03:44:50 localhost podman[88884]: 2025-12-05 08:44:50.4127769 +0000 UTC m=+0.291344694 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, version=17.1.12) Dec 5 03:44:50 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:44:50 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:44:50 localhost podman[88886]: 2025-12-05 08:44:50.469066106 +0000 UTC m=+0.341267376 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, container_name=ovn_controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) Dec 5 03:44:50 localhost podman[88886]: 2025-12-05 08:44:50.498668459 +0000 UTC m=+0.370869789 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=ovn_controller, url=https://www.redhat.com, architecture=x86_64, vcs-type=git) Dec 5 03:44:50 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:44:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:44:58 localhost systemd[1]: tmp-crun.oSSUy7.mount: Deactivated successfully. Dec 5 03:44:58 localhost podman[88968]: 2025-12-05 08:44:58.213127844 +0000 UTC m=+0.094184641 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, distribution-scope=public, container_name=metrics_qdr, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step1, io.openshift.expose-services=) Dec 5 03:44:58 localhost podman[88968]: 2025-12-05 08:44:58.455910337 +0000 UTC m=+0.336967134 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-type=git) Dec 5 03:44:58 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:45:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
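Taken together, the config_id and start_order labels reconstruct the TripleO deployment order on this node: metrics_qdr in step 1, collectd and iscsid in step 3, the OVN and agent containers in step 4, and nova_compute last in step 5. A toy reconstruction from the labels recorded in these events, treating a missing start_order as 0:

# (step from config_id, start_order from config_data) as recorded above;
# start_order defaults to 0 where the config omits it.
containers = {
    "metrics_qdr":            (1, 1),
    "collectd":               (3, 0),
    "iscsid":                 (3, 2),
    "logrotate_crond":        (4, 0),
    "nova_migration_target":  (4, 0),
    "ovn_metadata_agent":     (4, 1),
    "ovn_controller":         (4, 1),
    "nova_compute":           (5, 3),
}

# TripleO brings containers up step by step, then by start_order within a step.
for name in sorted(containers, key=lambda n: containers[n]):
    step, order = containers[name]
    print(f"step{step}  start_order={order}  {name}")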
Dec 5 03:45:13 localhost systemd[1]: tmp-crun.silnCP.mount: Deactivated successfully. Dec 5 03:45:13 localhost podman[88997]: 2025-12-05 08:45:13.216238682 +0000 UTC m=+0.095184152 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=nova_compute, tcib_managed=true) Dec 5 03:45:13 localhost podman[88997]: 2025-12-05 08:45:13.244542885 +0000 UTC m=+0.123488405 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12) Dec 5 03:45:13 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. 
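Note the cadence: nova_compute was checked at 03:44:42 and again at 03:45:13, logrotate_crond at 03:44:44 and 03:45:15 -- 31 s apart each time. That is consistent with a 30 s healthcheck interval measured from when the previous transient unit went inactive (podman schedules these runs via a systemd timer; the extra second is the run-and-teardown time visible in the m=+ offsets), though the exact timer settings are not shown in this log. A sketch computing the observed gaps:

from collections import defaultdict
from datetime import datetime

# (UTC podman timestamp, container) pairs taken from the events above.
events = [
    ("2025-12-05 08:44:42", "nova_compute"),
    ("2025-12-05 08:44:44", "logrotate_crond"),
    ("2025-12-05 08:45:13", "nova_compute"),
    ("2025-12-05 08:45:15", "logrotate_crond"),
]

runs = defaultdict(list)
for ts, name in events:
    runs[name].append(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"))

for name, times in sorted(runs.items()):
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    print(name, gaps)
# logrotate_crond [31.0]
# nova_compute [31.0]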
Dec 5 03:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:45:15 localhost systemd[1]: tmp-crun.AXcYx9.mount: Deactivated successfully. Dec 5 03:45:15 localhost podman[89025]: 2025-12-05 08:45:15.210628456 +0000 UTC m=+0.093721369 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, batch=17.1_20251118.1, container_name=logrotate_crond, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public) Dec 5 03:45:15 localhost podman[89025]: 2025-12-05 08:45:15.222650592 +0000 UTC m=+0.105743525 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc.) Dec 5 03:45:15 localhost podman[89024]: 2025-12-05 08:45:15.179409964 +0000 UTC m=+0.070691956 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team) Dec 5 03:45:15 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:45:15 localhost podman[89024]: 2025-12-05 08:45:15.266708036 +0000 UTC m=+0.157989958 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64) Dec 5 03:45:15 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:45:15 localhost podman[89026]: 2025-12-05 08:45:15.321585499 +0000 UTC m=+0.203991830 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi) Dec 5 03:45:15 localhost podman[89026]: 2025-12-05 08:45:15.347735776 +0000 UTC m=+0.230142127 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, 
tcib_managed=true, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 5 03:45:15 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:45:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:45:16 localhost podman[89094]: 2025-12-05 08:45:16.205043374 +0000 UTC m=+0.092026527 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-nova-compute, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.12) Dec 5 03:45:16 localhost podman[89094]: 2025-12-05 08:45:16.612773084 +0000 UTC m=+0.499756277 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z) Dec 5 03:45:16 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:45:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:45:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:45:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:45:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
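
The cadence is visible across these records: the three containers checked at 03:45:15 (21faa02c…, 808856e6…, bdd10ad2…) are checked again at 03:45:46, and the nova_compute check that deactivated at 03:45:13 is started again at 03:45:44 — each container's healthcheck recurs on roughly a 30-second cycle (podman's default healthcheck interval), staggered per container. A short sketch that measures those gaps from the "Started /usr/bin/podman healthcheck run <ID>" lines; syslog timestamps carry no year, so one is supplied (2025, matching podman's own timestamps above):

    import re
    import sys
    from collections import defaultdict
    from datetime import datetime

    # One journal record per line, as journalctl emits it.
    START_RE = re.compile(
        r"^(?P<ts>\w{3}\s+\d+\s+\d{2}:\d{2}:\d{2})\s+\S+\s+systemd\[1\]: "
        r"Started /usr/bin/podman healthcheck run (?P<cid>[0-9a-f]{64})"
    )

    def intervals(lines, year=2025):
        """Map container ID -> gaps (seconds) between healthcheck runs."""
        last, gaps = {}, defaultdict(list)
        for line in lines:
            m = START_RE.match(line)
            if not m:
                continue
            ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S")
            if m["cid"] in last:
                gaps[m["cid"]].append((ts - last[m["cid"]]).total_seconds())
            last[m["cid"]] = ts
        return gaps

    if __name__ == "__main__":
        for cid, g in intervals(sys.stdin).items():
            print(f"{cid[:12]}  {len(g) + 1} runs, mean gap {sum(g) / len(g):.0f}s")
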
Dec 5 03:45:21 localhost podman[89118]: 2025-12-05 08:45:21.207681752 +0000 UTC m=+0.093322457 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Dec 5 03:45:21 localhost podman[89118]: 2025-12-05 08:45:21.217671626 +0000 UTC m=+0.103312291 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:45:21 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:45:21 localhost podman[89116]: 2025-12-05 08:45:21.30537119 +0000 UTC m=+0.195348817 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Dec 5 03:45:21 localhost podman[89116]: 2025-12-05 08:45:21.347933658 +0000 UTC m=+0.237911255 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, vcs-type=git) Dec 5 03:45:21 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:45:21 localhost podman[89117]: 2025-12-05 08:45:21.367747822 +0000 UTC m=+0.255160540 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, container_name=iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12) Dec 5 03:45:21 localhost podman[89117]: 2025-12-05 08:45:21.376691994 +0000 UTC m=+0.264104702 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=iscsid, url=https://www.redhat.com, io.openshift.expose-services=) Dec 5 03:45:21 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:45:21 localhost podman[89124]: 2025-12-05 08:45:21.47265495 +0000 UTC m=+0.348301480 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:45:21 localhost podman[89124]: 2025-12-05 08:45:21.523691385 +0000 UTC m=+0.399337935 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Dec 5 03:45:21 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:45:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
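
Two of the healthcheck tests in these config_data blocks take a port argument: nova_compute runs '/openstack/healthcheck 5672' (5672 being the AMQP port of its RabbitMQ connection) and ovn_controller runs '/openstack/healthcheck 6642' (the OVN southbound database port), while most other containers run the bare script. Judging by the TripleO healthcheck convention, the port argument is used to confirm the monitored service actually holds a connection to that port rather than merely that its process exists. A rough, self-contained stand-in for that kind of check — an approximation of the idea, not the actual script:

    import sys

    def has_established_conn(port):
        """True if any IPv4 TCP socket on this host is ESTABLISHED to the
        given remote port. The real healthcheck also knows which process
        should own the connection; this ignores IPv6 (/proc/net/tcp6)."""
        with open("/proc/net/tcp") as f:
            next(f)  # header row
            for line in f:
                fields = line.split()
                remote_port = int(fields[2].split(":")[1], 16)
                if remote_port == port and fields[3] == "01":  # ESTABLISHED
                    return True
        return False

    if __name__ == "__main__":
        port = int(sys.argv[1]) if len(sys.argv) > 1 else 5672
        sys.exit(0 if has_established_conn(port) else 1)  # 0 = healthy
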
Dec 5 03:45:29 localhost podman[89203]: 2025-12-05 08:45:29.205427881 +0000 UTC m=+0.088346965 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, container_name=metrics_qdr, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z) Dec 5 03:45:29 localhost podman[89203]: 2025-12-05 08:45:29.403160319 +0000 UTC m=+0.286079413 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 
'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr) Dec 5 03:45:29 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:45:33 localhost sshd[89232]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:45:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:45:44 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:45:44 localhost recover_tripleo_nova_virtqemud[89241]: 61294 Dec 5 03:45:44 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:45:44 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
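
Interleaved with the per-container healthchecks, the oneshot tripleo_nova_virtqemud_recover.service runs its "Check and recover" pass: here it logs a single number (61294, apparently the PID of the process it checked) and deactivates, i.e. virtqemud was found running and nothing needed recovery. The general shape of such a check-and-recover pass is sketched below with hypothetical logic, since the actual recover script is not part of this log; only the tripleo_nova_virtqemud.service unit name is taken from the tripleo_* naming seen here:

    import os
    import subprocess
    import sys

    def pid_of(comm):
        """First PID whose /proc/<pid>/comm equals `comm`, or None."""
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/comm") as f:
                    if f.read().strip() == comm:
                        return int(pid)
            except OSError:
                continue  # raced with a process exiting
        return None

    if __name__ == "__main__":
        pid = pid_of("virtqemud")
        if pid is None:
            # Hypothetical recovery action; the real script may differ.
            subprocess.run(
                ["systemctl", "restart", "tripleo_nova_virtqemud.service"],
                check=False,
            )
            sys.exit(1)
        print(pid)  # the journal above shows the recover pass logging such a PID
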
Dec 5 03:45:44 localhost podman[89234]: 2025-12-05 08:45:44.184535597 +0000 UTC m=+0.074872074 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, com.redhat.component=openstack-nova-compute-container, vcs-type=git, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Dec 5 03:45:44 localhost podman[89234]: 2025-12-05 08:45:44.216938284 +0000 UTC m=+0.107274761 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, release=1761123044, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container) Dec 5 03:45:44 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:45:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:45:46 localhost systemd[1]: tmp-crun.6b7vOZ.mount: Deactivated successfully. Dec 5 03:45:46 localhost podman[89262]: 2025-12-05 08:45:46.25863065 +0000 UTC m=+0.135171392 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:45:46 localhost podman[89260]: 2025-12-05 08:45:46.217326781 +0000 UTC m=+0.100343921 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git) Dec 5 03:45:46 localhost podman[89260]: 2025-12-05 08:45:46.302623522 +0000 UTC m=+0.185640662 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team) Dec 5 03:45:46 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:45:46 localhost podman[89261]: 2025-12-05 08:45:46.315628698 +0000 UTC m=+0.195952956 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-cron-container, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:45:46 localhost podman[89261]: 2025-12-05 08:45:46.327703626 +0000 UTC m=+0.208027864 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, com.redhat.component=openstack-cron-container, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, vcs-type=git, config_id=tripleo_step4, container_name=logrotate_crond) Dec 5 03:45:46 localhost podman[89262]: 2025-12-05 08:45:46.340108054 +0000 UTC m=+0.216648756 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:45:46 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:45:46 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:45:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:45:47 localhost podman[89332]: 2025-12-05 08:45:47.200352382 +0000 UTC m=+0.085743515 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Dec 5 03:45:47 localhost podman[89332]: 2025-12-05 08:45:47.571680043 +0000 UTC m=+0.457071166 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, architecture=x86_64) Dec 5 03:45:47 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:45:49 localhost systemd[1]: tmp-crun.hdmfVe.mount: Deactivated successfully. 
Dec 5 03:45:49 localhost podman[89458]: 2025-12-05 08:45:49.454294138 +0000 UTC m=+0.088941142 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 5 03:45:49 localhost podman[89458]: 2025-12-05 08:45:49.571636136 +0000 UTC m=+0.206283080 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, version=7, RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True) Dec 5 03:45:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:45:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:45:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:45:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:45:52 localhost systemd[1]: tmp-crun.Gjk2jj.mount: Deactivated successfully. 
Dec 5 03:45:52 localhost podman[89602]: 2025-12-05 08:45:52.24463706 +0000 UTC m=+0.118775893 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 5 03:45:52 localhost podman[89603]: 2025-12-05 08:45:52.295387067 +0000 UTC m=+0.171108838 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, container_name=iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64) Dec 5 03:45:52 localhost podman[89605]: 2025-12-05 08:45:52.208447876 +0000 UTC m=+0.085268151 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, architecture=x86_64, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:45:52 localhost podman[89604]: 2025-12-05 08:45:52.27450798 +0000 UTC m=+0.150945812 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, container_name=collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com) Dec 5 03:45:52 localhost 
podman[89605]: 2025-12-05 08:45:52.344909217 +0000 UTC m=+0.221729542 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, tcib_managed=true, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container) Dec 5 03:45:52 localhost podman[89604]: 2025-12-05 08:45:52.355753907 +0000 UTC m=+0.232191759 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step3, architecture=x86_64, version=17.1.12) Dec 5 03:45:52 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:45:52 localhost podman[89603]: 2025-12-05 08:45:52.363103211 +0000 UTC m=+0.238824932 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Dec 5 03:45:52 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:45:52 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:45:52 localhost podman[89602]: 2025-12-05 08:45:52.430711002 +0000 UTC m=+0.304849815 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:45:52 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:46:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:46:00 localhost podman[89710]: 2025-12-05 08:46:00.2003984 +0000 UTC m=+0.083691363 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:46:00 localhost podman[89710]: 2025-12-05 08:46:00.402609265 +0000 UTC m=+0.285902238 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=metrics_qdr, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1) Dec 5 03:46:00 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:46:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:46:15 localhost systemd[1]: tmp-crun.uSfl29.mount: Deactivated successfully. 
Dec 5 03:46:15 localhost podman[89738]: 2025-12-05 08:46:15.21610586 +0000 UTC m=+0.098018390 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., container_name=nova_compute, architecture=x86_64, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 5 03:46:15 localhost podman[89738]: 2025-12-05 08:46:15.2469439 +0000 UTC m=+0.128856440 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, tcib_managed=true) Dec 5 03:46:15 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:46:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:46:17 localhost podman[89763]: 2025-12-05 08:46:17.19986814 +0000 UTC m=+0.085632002 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:46:17 localhost podman[89763]: 2025-12-05 08:46:17.231559296 +0000 UTC m=+0.117323158 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:46:17 localhost systemd[1]: tmp-crun.8mgGUI.mount: Deactivated successfully. Dec 5 03:46:17 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:46:17 localhost podman[89764]: 2025-12-05 08:46:17.263768288 +0000 UTC m=+0.147618941 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:46:17 localhost podman[89764]: 2025-12-05 08:46:17.274693352 +0000 UTC m=+0.158544035 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044) Dec 5 03:46:17 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:46:17 localhost podman[89765]: 2025-12-05 08:46:17.361541089 +0000 UTC m=+0.241363420 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi) Dec 5 03:46:17 localhost podman[89765]: 2025-12-05 08:46:17.418770534 +0000 UTC m=+0.298592825 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, release=1761123044) Dec 5 03:46:17 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:46:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:46:18 localhost systemd[1]: tmp-crun.XUfFdZ.mount: Deactivated successfully. Dec 5 03:46:18 localhost podman[89833]: 2025-12-05 08:46:18.198365801 +0000 UTC m=+0.085521018 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:46:18 localhost podman[89833]: 2025-12-05 08:46:18.568769884 +0000 UTC m=+0.455925071 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target) Dec 5 03:46:18 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:46:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:46:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:46:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:46:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:46:23 localhost systemd[1]: tmp-crun.oXJHo4.mount: Deactivated successfully. 
Dec 5 03:46:23 localhost podman[89861]: 2025-12-05 08:46:23.205489937 +0000 UTC m=+0.075002858 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, container_name=ovn_controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, architecture=x86_64) Dec 5 03:46:23 localhost podman[89861]: 2025-12-05 08:46:23.251360565 +0000 UTC m=+0.120873486 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com) Dec 5 03:46:23 localhost podman[89859]: 2025-12-05 08:46:23.262851785 +0000 UTC m=+0.139008699 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, version=17.1.12, container_name=iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container) Dec 5 03:46:23 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
Dec 5 03:46:23 localhost podman[89859]: 2025-12-05 08:46:23.277357277 +0000 UTC m=+0.153514241 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, version=17.1.12) Dec 5 03:46:23 localhost podman[89858]: 2025-12-05 08:46:23.30924165 +0000 UTC m=+0.188001943 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1) Dec 5 03:46:23 localhost podman[89860]: 2025-12-05 08:46:23.28760879 +0000 UTC m=+0.158852294 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Dec 5 03:46:23 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:46:23 localhost podman[89858]: 2025-12-05 08:46:23.363615427 +0000 UTC m=+0.242375720 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, release=1761123044, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc.) Dec 5 03:46:23 localhost podman[89860]: 2025-12-05 08:46:23.374076226 +0000 UTC m=+0.245319710 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true) Dec 5 03:46:23 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:46:23 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:46:24 localhost systemd[1]: tmp-crun.124D74.mount: Deactivated successfully. Dec 5 03:46:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:46:31 localhost podman[89943]: 2025-12-05 08:46:31.209962602 +0000 UTC m=+0.086292482 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true) Dec 5 03:46:31 localhost podman[89943]: 2025-12-05 08:46:31.414728684 +0000 UTC m=+0.291058544 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, 
name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, architecture=x86_64) Dec 5 03:46:31 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:46:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:46:46 localhost systemd[1]: tmp-crun.prVOzR.mount: Deactivated successfully. 
Dec 5 03:46:46 localhost podman[89971]: 2025-12-05 08:46:46.21047012 +0000 UTC m=+0.096578926 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, architecture=x86_64, container_name=nova_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12) Dec 5 03:46:46 localhost podman[89971]: 2025-12-05 08:46:46.241274469 +0000 UTC m=+0.127383315 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO 
Team, vendor=Red Hat, Inc., config_id=tripleo_step5, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:46:46 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:46:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:46:48 localhost systemd[1]: tmp-crun.Xz7QKZ.mount: Deactivated successfully. Dec 5 03:46:48 localhost podman[89999]: 2025-12-05 08:46:48.212337541 +0000 UTC m=+0.091759938 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z) Dec 5 03:46:48 localhost podman[89999]: 2025-12-05 08:46:48.246614147 +0000 UTC m=+0.126036534 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=) Dec 5 03:46:48 localhost podman[89997]: 2025-12-05 08:46:48.273616869 +0000 UTC m=+0.157268175 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1) Dec 5 03:46:48 localhost podman[89998]: 2025-12-05 08:46:48.321363535 +0000 UTC m=+0.201950248 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-type=git, version=17.1.12, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, vendor=Red Hat, Inc.) 
Dec 5 03:46:48 localhost podman[89998]: 2025-12-05 08:46:48.33267594 +0000 UTC m=+0.213262653 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, container_name=logrotate_crond, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:46:48 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:46:48 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 03:46:48 localhost podman[89997]: 2025-12-05 08:46:48.386318755 +0000 UTC m=+0.269970001 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-ceilometer-compute) Dec 5 03:46:48 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:46:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:46:49 localhost podman[90068]: 2025-12-05 08:46:49.187491441 +0000 UTC m=+0.072508942 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true) Dec 5 03:46:49 localhost podman[90068]: 2025-12-05 08:46:49.581557595 +0000 UTC m=+0.466575096 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, release=1761123044, vcs-type=git, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:46:49 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:46:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:46:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:46:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:46:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:46:54 localhost podman[90168]: 2025-12-05 08:46:54.22495435 +0000 UTC m=+0.107267452 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=iscsid, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com) Dec 5 03:46:54 localhost podman[90168]: 2025-12-05 08:46:54.263521085 +0000 UTC m=+0.145834187 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, container_name=iscsid, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Dec 5 03:46:54 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:46:54 localhost podman[90169]: 2025-12-05 08:46:54.264721273 +0000 UTC m=+0.144574320 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, container_name=collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:46:54 localhost podman[90167]: 2025-12-05 08:46:54.322463762 +0000 UTC m=+0.204702191 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 5 03:46:54 localhost podman[90167]: 2025-12-05 08:46:54.370939091 +0000 UTC m=+0.253177500 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, tcib_managed=true, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 
'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:46:54 localhost podman[90170]: 2025-12-05 08:46:54.386935678 +0000 UTC m=+0.260753371 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, 
build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 03:46:54 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:46:54 localhost podman[90169]: 2025-12-05 08:46:54.406468813 +0000 UTC m=+0.286321850 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.41.4, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-type=git) Dec 5 03:46:54 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:46:54 localhost podman[90170]: 2025-12-05 08:46:54.462517822 +0000 UTC m=+0.336335515 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, url=https://www.redhat.com) Dec 5 03:46:54 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:47:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:47:02 localhost podman[90249]: 2025-12-05 08:47:02.201827645 +0000 UTC m=+0.087425786 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team) Dec 5 03:47:02 localhost podman[90249]: 2025-12-05 08:47:02.408675362 +0000 UTC m=+0.294273513 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, release=1761123044, container_name=metrics_qdr) Dec 5 03:47:02 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:47:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:47:17 localhost systemd[1]: tmp-crun.GQouSb.mount: Deactivated successfully. 
Dec 5 03:47:17 localhost podman[90277]: 2025-12-05 08:47:17.21574852 +0000 UTC m=+0.094988256 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Dec 5 03:47:17 localhost podman[90277]: 2025-12-05 08:47:17.270610333 +0000 UTC m=+0.149850039 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:47:17 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:47:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:47:19 localhost podman[90306]: 2025-12-05 08:47:19.219196461 +0000 UTC m=+0.095224454 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 5 03:47:19 localhost systemd[1]: tmp-crun.DBeUV5.mount: Deactivated successfully. 
Dec 5 03:47:19 localhost podman[90304]: 2025-12-05 08:47:19.260450568 +0000 UTC m=+0.145118315 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=tripleo_step4, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Dec 5 03:47:19 localhost podman[90306]: 2025-12-05 08:47:19.286703629 +0000 UTC m=+0.162731632 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc.) Dec 5 03:47:19 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 03:47:19 localhost podman[90305]: 2025-12-05 08:47:19.314679572 +0000 UTC m=+0.195200863 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:47:19 localhost podman[90304]: 2025-12-05 08:47:19.323796629 +0000 UTC m=+0.208464416 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, distribution-scope=public, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:47:19 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:47:19 localhost podman[90305]: 2025-12-05 08:47:19.378561099 +0000 UTC m=+0.259082450 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:47:19 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:47:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:47:20 localhost podman[90377]: 2025-12-05 08:47:20.195611629 +0000 UTC m=+0.083635231 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:47:20 localhost podman[90377]: 2025-12-05 08:47:20.587864487 +0000 UTC m=+0.475888079 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
distribution-scope=public, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 5 03:47:20 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:47:20 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:47:20 localhost recover_tripleo_nova_virtqemud[90402]: 61294 Dec 5 03:47:20 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:47:20 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:47:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:47:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:47:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:47:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:47:25 localhost systemd[1]: tmp-crun.TE0Iet.mount: Deactivated successfully. 
Dec 5 03:47:25 localhost podman[90404]: 2025-12-05 08:47:25.269908522 +0000 UTC m=+0.150316564 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:47:25 localhost podman[90404]: 2025-12-05 08:47:25.307627062 +0000 UTC m=+0.188035084 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, container_name=iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 5 03:47:25 localhost podman[90403]: 2025-12-05 08:47:25.315197503 +0000 UTC m=+0.196055138 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12) Dec 5 03:47:25 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:47:25 localhost podman[90403]: 2025-12-05 08:47:25.361209105 +0000 UTC m=+0.242066730 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO 
Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=) Dec 5 03:47:25 localhost podman[90406]: 2025-12-05 08:47:25.371407447 +0000 UTC m=+0.245428244 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container) Dec 5 03:47:25 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:47:25 localhost podman[90405]: 2025-12-05 08:47:25.233709639 +0000 UTC m=+0.109295663 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044) Dec 5 03:47:25 localhost podman[90405]: 2025-12-05 08:47:25.419619716 +0000 UTC m=+0.295205730 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container) Dec 5 03:47:25 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:47:25 localhost podman[90406]: 2025-12-05 08:47:25.476226642 +0000 UTC m=+0.350247469 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, architecture=x86_64, container_name=ovn_controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=) Dec 5 03:47:25 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:47:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:47:33 localhost podman[90485]: 2025-12-05 08:47:33.202102425 +0000 UTC m=+0.084880719 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, version=17.1.12, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:47:33 localhost podman[90485]: 2025-12-05 08:47:33.792768383 +0000 UTC m=+0.675546707 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 5 03:47:33 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:47:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:47:48 localhost podman[90515]: 2025-12-05 08:47:48.206710906 +0000 UTC m=+0.089123819 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step5) Dec 5 03:47:48 localhost podman[90515]: 2025-12-05 08:47:48.243844377 +0000 UTC m=+0.126257330 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack 
Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_id=tripleo_step5, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, release=1761123044) Dec 5 03:47:48 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:47:50 localhost podman[90542]: 2025-12-05 08:47:50.215632392 +0000 UTC m=+0.095967417 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 5 03:47:50 localhost systemd[1]: tmp-crun.PdJCns.mount: Deactivated successfully. 
Dec 5 03:47:50 localhost podman[90543]: 2025-12-05 08:47:50.270921848 +0000 UTC m=+0.144749695 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1761123044, tcib_managed=true, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:47:50 localhost podman[90543]: 2025-12-05 08:47:50.305183532 +0000 UTC m=+0.179011309 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.openshift.expose-services=) Dec 5 03:47:50 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:47:50 localhost podman[90545]: 2025-12-05 08:47:50.318921701 +0000 UTC m=+0.187414945 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, container_name=ceilometer_agent_ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 5 03:47:50 localhost podman[90542]: 2025-12-05 08:47:50.322069947 +0000 UTC m=+0.202405022 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:47:50 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
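Each healthcheck pass in this log is a pair of podman journal events for the same 64-hex container ID: a health_status event carrying the result inline, then an exec_died event when the test process exits, after which systemd reports the transient <id>.service unit as deactivated. A minimal sketch for pulling the latest result per container out of lines like these; the field order (image= first, name= second inside the parentheses) is assumed from this capture and may differ elsewhere:

    import re

    # Field layout assumed from the journal entries above.
    EVENT_RE = re.compile(
        r"container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
    )
    STATUS_RE = re.compile(r"health_status=(\w+)")

    def latest_health(journal_lines):
        """Map container name -> last reported health_status."""
        seen = {}
        for line in journal_lines:
            m = EVENT_RE.search(line)
            if m and m["event"] == "health_status":
                s = STATUS_RE.search(line)
                seen[m["name"]] = s.group(1) if s else "unknown"
        return seen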
Dec 5 03:47:50 localhost podman[90545]: 2025-12-05 08:47:50.346568114 +0000 UTC m=+0.215061348 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, version=17.1.12, tcib_managed=true, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:47:50 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:47:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
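The "Started /usr/bin/podman healthcheck run <id>" lines are transient systemd units; podman registers a matching .timer per container so the configured test fires on a fixed cadence (30 s by default, which fits the gaps visible here). The same check can also be driven by hand. A sketch assuming a local container named nova_compute; the inspect key differs across podman versions (State.Healthcheck in older releases, Docker-compatible State.Health in newer ones), so both are tried:

    import json
    import subprocess

    def health_state(container="nova_compute"):
        # Runs the configured test once, exactly what the transient
        # systemd unit above does; a nonzero exit just means unhealthy.
        subprocess.run(["podman", "healthcheck", "run", container], check=False)
        raw = subprocess.run(["podman", "inspect", container],
                             capture_output=True, text=True, check=True).stdout
        state = json.loads(raw)[0]["State"]
        block = state.get("Health") or state.get("Healthcheck") or {}
        return block.get("Status", "unknown")

    if __name__ == "__main__":
        print(health_state())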
Dec 5 03:47:51 localhost podman[90616]: 2025-12-05 08:47:51.194660051 +0000 UTC m=+0.080809895 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4) Dec 5 03:47:51 localhost podman[90616]: 2025-12-05 08:47:51.562702121 +0000 UTC m=+0.448852085 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, release=1761123044, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, container_name=nova_migration_target, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 5 03:47:51 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:47:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:47:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:47:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:47:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:47:56 localhost systemd[1]: tmp-crun.v8No1K.mount: Deactivated successfully. 
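Each podman line carries two clocks: a wall-clock UTC stamp and a Go monotonic offset (the m=+0.144749695 suffix, seconds since that podman process started); the "Dec 5 03:47:50" syslog prefix is the same instant in the host's local zone, UTC-5 here. Subtracting the UTC stamps of a health_status/exec_died pair gives one check's duration, e.g. for the logrotate_crond pass above:

    from datetime import datetime

    started = "2025-12-05 08:47:50.270921848"   # health_status stamp
    ended = "2025-12-05 08:47:50.305183532"     # exec_died stamp

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    # strptime only takes microseconds, so trim to 6 fractional digits.
    t0 = datetime.strptime(started[:26], fmt)
    t1 = datetime.strptime(ended[:26], fmt)
    print((t1 - t0).total_seconds())  # ~0.034 s, matching the m=+ deltas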
Dec 5 03:47:56 localhost podman[90718]: 2025-12-05 08:47:56.225348014 +0000 UTC m=+0.103850468 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, tcib_managed=true) Dec 5 03:47:56 localhost podman[90717]: 2025-12-05 08:47:56.262548978 +0000 UTC m=+0.144955781 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible) Dec 5 03:47:56 localhost podman[90717]: 2025-12-05 08:47:56.29837118 +0000 UTC m=+0.180777983 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4) Dec 5 03:47:56 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:47:56 localhost podman[90719]: 2025-12-05 08:47:56.314728688 +0000 UTC m=+0.192393686 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step4) Dec 5 03:47:56 localhost podman[90718]: 2025-12-05 08:47:56.35578834 +0000 UTC m=+0.234290834 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, release=1761123044) Dec 5 03:47:56 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
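The config_data label that tripleo_ansible attaches (and podman echoes on every event) is the container's full launch spec: environment, healthcheck test, net/pid mode, and bind mounts, with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS telling the kolla entrypoint to re-copy the config mounted at /var/lib/kolla/config_files/config.json on each start. The label value is rendered as a Python literal (single quotes, bare True), not JSON, so reading it back needs ast.literal_eval. A sketch, assuming a local container named collectd:

    import ast
    import json
    import subprocess

    def container_volumes(name="collectd"):
        raw = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        labels = json.loads(raw)[0]["Config"]["Labels"]
        config = ast.literal_eval(labels["config_data"])  # Python literal, not JSON
        return config.get("volumes", [])

    for spec in container_volumes():
        src, dst, *opts = spec.split(":")
        print(f"{src} -> {dst} ({opts[0] if opts else 'rw'})")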
Dec 5 03:47:56 localhost podman[90716]: 2025-12-05 08:47:56.37251762 +0000 UTC m=+0.253709905 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z) Dec 5 03:47:56 localhost podman[90719]: 2025-12-05 08:47:56.391434337 +0000 UTC m=+0.269099405 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
vendor=Red Hat, Inc., container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller) Dec 5 03:47:56 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
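Most containers here run the generic /openstack/healthcheck test, but ovn_controller passes a port ('/openstack/healthcheck 6642', the OVN southbound DB) and nova_compute another ('... 5672', AMQP), which turns the check into a probe of one specific service. An illustrative stand-in, not the actual TripleO script: all a health test must do is exit 0 for healthy and nonzero otherwise.

    #!/usr/bin/env python3
    # Minimal port-probe health test in the spirit of
    # '/openstack/healthcheck 6642' above (illustrative only).
    import socket
    import sys

    port = int(sys.argv[1]) if len(sys.argv) > 1 else 6642
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=5):
            sys.exit(0)  # healthy
    except OSError:
        sys.exit(1)      # unhealthy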
Dec 5 03:47:56 localhost podman[90716]: 2025-12-05 08:47:56.419533124 +0000 UTC m=+0.300725419 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com) Dec 5 03:47:56 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:47:57 localhost systemd[1]: tmp-crun.iymagn.mount: Deactivated successfully. Dec 5 03:48:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
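config_id records which TripleO deployment step launched each container: metrics_qdr at tripleo_step1, collectd and iscsid at tripleo_step3, the OVN/ceilometer/cron set at tripleo_step4, nova_compute at tripleo_step5. A sketch that regroups running containers the same way from their labels; it assumes podman ps --format json exposes Labels and Names, as current podman does:

    import json
    import subprocess
    from collections import defaultdict

    raw = subprocess.run(["podman", "ps", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    by_step = defaultdict(list)
    for ctr in json.loads(raw):
        labels = ctr.get("Labels") or {}
        by_step[labels.get("config_id", "?")].extend(ctr.get("Names", []))
    for step in sorted(by_step):
        print(step, *sorted(by_step[step]))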
Dec 5 03:48:04 localhost podman[90800]: 2025-12-05 08:48:04.201529558 +0000 UTC m=+0.085576341 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1) Dec 5 03:48:04 localhost podman[90800]: 2025-12-05 08:48:04.399776202 +0000 UTC m=+0.283822995 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:48:04 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:48:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:48:19 localhost systemd[1]: tmp-crun.PP2B9q.mount: Deactivated successfully. 
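The healthcheck block inside config_data is what podman stores as the container's health test; on a plain podman run the same settings arrive via the --health-* flags. A sketch of that mapping, with the 30 s interval written out (it is podman's default, consistent with the roughly 30 s spacing between passes here); the command is printed rather than executed, since pulling the image needs registry.redhat.io credentials:

    # Hypothetical manual equivalent of the metrics_qdr healthcheck config.
    healthcheck = {"test": "/openstack/healthcheck"}
    argv = [
        "podman", "run", "--detach", "--name", "metrics_qdr_demo",
        "--health-cmd", healthcheck["test"],
        "--health-interval", "30s",
        "--health-retries", "3",
        "registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1",
    ]
    print(" ".join(argv))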
Dec 5 03:48:19 localhost podman[90830]: 2025-12-05 08:48:19.207237836 +0000 UTC m=+0.093291335 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 5 03:48:19 localhost podman[90830]: 2025-12-05 08:48:19.24508419 +0000 UTC m=+0.131137639 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO 
Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:48:19 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:48:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
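Rather than grepping the journal, the same health_status stream can be followed live from podman's event API. A sketch that flags any non-healthy result; the JSON field names vary across podman versions (newer releases put HealthStatus at the top level, older ones only carry attributes), so both spellings are tried, and --format json itself assumes a reasonably recent podman:

    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--filter", "event=health_status",
         "--format", "json"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        status = (ev.get("HealthStatus")
                  or ev.get("Attributes", {}).get("health_status", ""))
        if status and status != "healthy":
            print(f"{ev.get('Name')}: {status}")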
Dec 5 03:48:21 localhost podman[90857]: 2025-12-05 08:48:21.199179506 +0000 UTC m=+0.082713372 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, container_name=logrotate_crond, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4) Dec 5 03:48:21 localhost podman[90857]: 2025-12-05 08:48:21.212693088 +0000 UTC m=+0.096226914 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, vcs-type=git, version=17.1.12, com.redhat.component=openstack-cron-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z) Dec 5 03:48:21 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:48:21 localhost systemd[1]: tmp-crun.CiKgQl.mount: Deactivated successfully. Dec 5 03:48:21 localhost podman[90856]: 2025-12-05 08:48:21.314925355 +0000 UTC m=+0.201996639 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:48:21 localhost podman[90858]: 2025-12-05 08:48:21.409389325 +0000 UTC m=+0.291294132 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Dec 5 03:48:21 localhost podman[90856]: 2025-12-05 08:48:21.427831887 +0000 UTC m=+0.314903111 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:11:48Z) Dec 5 03:48:21 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:48:21 localhost podman[90858]: 2025-12-05 08:48:21.467806806 +0000 UTC m=+0.349711613 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12) Dec 5 03:48:21 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:48:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:48:22 localhost podman[90931]: 2025-12-05 08:48:22.207028703 +0000 UTC m=+0.094464462 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Dec 5 03:48:22 localhost podman[90931]: 2025-12-05 08:48:22.575782275 +0000 UTC m=+0.463218054 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Dec 5 03:48:22 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:48:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:48:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:48:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:48:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
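The records above all follow one healthcheck cycle: systemd starts a transient "/usr/bin/podman healthcheck run <container-id>" unit, podman logs a "container health_status" event for that ID (every check in this excerpt reports health_status=healthy), a matching "container exec_died" event marks the check process exiting, and systemd finally reports the transient unit as "Deactivated successfully". A minimal Python sketch for pulling these events out of a capture like this one follows; the regular expressions are assumptions fitted to the line format visible here, not an official journald or podman schema.

import re

# Fitted to the podman event lines in this capture (an assumption, not a schema):
#   "... podman[90856]: 2025-12-05 08:48:21.314925355 +0000 UTC m=+0.201996639
#    container health_status 21faa02c... (image=..., name=ceilometer_agent_compute,
#    health_status=healthy, ...)"
EVENT_RE = re.compile(
    r"podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
    r"m=\+[\d.]+ container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64})"
)
# The first bare "name=" after the container ID is the container name; the
# lookbehind skips label keys such as io.k8s.display-name= and container_name=.
NAME_RE = re.compile(r"(?<![\w.-])name=([\w-]+)")
STATUS_RE = re.compile(r"health_status=(\w+)")

def parse_events(text):
    """Yield (utc_timestamp, event, container_name, health_status_or_None)."""
    for m in EVENT_RE.finditer(text):
        tail = text[m.end():m.end() + 2000]  # label dump that follows the ID
        name = NAME_RE.search(tail)
        status = STATUS_RE.search(tail)
        yield (m.group("ts"), m.group("event"),
               name.group(1) if name else None,
               status.group(1) if status else None)

Run against this excerpt, the first tuple yielded should be ('2025-12-05 08:48:21.314925355', 'health_status', 'ceilometer_agent_compute', 'healthy').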
Dec 5 03:48:27 localhost podman[90956]: 2025-12-05 08:48:27.217383696 +0000 UTC m=+0.095266296 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-collectd, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step3, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Dec 5 03:48:27 localhost podman[90955]: 2025-12-05 08:48:27.260107598 +0000 UTC m=+0.141385592 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, release=1761123044, version=17.1.12, url=https://www.redhat.com, container_name=iscsid, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Dec 5 03:48:27 localhost podman[90955]: 2025-12-05 08:48:27.268873505 +0000 UTC m=+0.150151499 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:48:27 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:48:27 localhost systemd[1]: tmp-crun.A6dm0Y.mount: Deactivated successfully. Dec 5 03:48:27 localhost podman[90961]: 2025-12-05 08:48:27.318693644 +0000 UTC m=+0.193332055 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Dec 5 03:48:27 localhost podman[90956]: 2025-12-05 08:48:27.33169143 +0000 UTC m=+0.209574070 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, 
name=collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:51:28Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., container_name=collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:48:27 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:48:27 localhost podman[90961]: 2025-12-05 08:48:27.350743011 +0000 UTC m=+0.225381482 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ovn_controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 03:48:27 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
Dec 5 03:48:27 localhost podman[90954]: 2025-12-05 08:48:27.421007853 +0000 UTC m=+0.306352970 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vcs-type=git, config_id=tripleo_step4, container_name=ovn_metadata_agent, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z) Dec 5 03:48:27 localhost podman[90954]: 2025-12-05 08:48:27.49171121 +0000 UTC m=+0.377056287 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Dec 5 03:48:27 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:48:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:48:35 localhost systemd[1]: tmp-crun.l403pA.mount: Deactivated successfully. 
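Each label dump also embeds the full container definition that tripleo_ansible rendered for the service, as a config_data={...} value: image, namespaces (net/pid), privilege level, restart policy, the healthcheck test command, and the bind-mount list. The value is a Python-style dict literal (single quotes, True/False), so it can be recovered mechanically. The sketch below slices out the balanced braces and parses them with ast.literal_eval, assuming, as holds for every record here, that no string value itself contains a brace.

import ast

def extract_config_data(record):
    """Recover the config_data dict embedded in one podman label dump."""
    start = record.index("config_data={") + len("config_data=")
    depth = 0
    for i, ch in enumerate(record[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # Python-style literal: single quotes, True/False, nested lists.
                return ast.literal_eval(record[start:i + 1])
    raise ValueError("unbalanced config_data braces")

For the collectd record above, for example, extract_config_data(...)['healthcheck']['test'] returns '/openstack/healthcheck' and ['memory'] returns '512m'; ['volumes'] is the list of 'host:container[:options]' bind specs.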
Dec 5 03:48:35 localhost podman[91042]: 2025-12-05 08:48:35.226944345 +0000 UTC m=+0.103737824 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:48:35 localhost podman[91042]: 2025-12-05 08:48:35.426719495 +0000 UTC m=+0.303513004 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, 
config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1) Dec 5 03:48:35 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:48:49 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:48:49 localhost recover_tripleo_nova_virtqemud[91072]: 61294 Dec 5 03:48:49 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:48:49 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:48:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
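The tripleo_nova_virtqemud_recover records in the middle of this window are unrelated to the podman healthchecks: the Starting/Finished pair with an immediate "Deactivated successfully" is the signature of a systemd oneshot unit, here a TripleO unit that, per its own description, checks and recovers the tripleo_nova_virtqemud service. The bare number it prints (61294) is raw script output, most likely the PID of the process it found; the unit exits cleanly, so no recovery action appears to have been taken.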
Dec 5 03:48:50 localhost podman[91073]: 2025-12-05 08:48:50.20022733 +0000 UTC m=+0.088742607 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, version=17.1.12, config_id=tripleo_step5, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, distribution-scope=public) Dec 5 03:48:50 localhost podman[91073]: 2025-12-05 08:48:50.228617095 +0000 UTC m=+0.117132432 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:48:50 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:48:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:48:52 localhost systemd[1]: tmp-crun.HnwJUf.mount: Deactivated successfully. Dec 5 03:48:52 localhost podman[91102]: 2025-12-05 08:48:52.22009504 +0000 UTC m=+0.097886766 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 03:48:52 localhost podman[91101]: 2025-12-05 08:48:52.262500902 +0000 UTC m=+0.143215247 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044) Dec 5 03:48:52 localhost podman[91102]: 2025-12-05 08:48:52.26931731 +0000 UTC m=+0.147108946 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, build-date=2025-11-19T00:12:45Z) Dec 5 03:48:52 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:48:52 localhost podman[91100]: 2025-12-05 08:48:52.310009711 +0000 UTC m=+0.190957023 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://www.redhat.com, tcib_managed=true) Dec 5 03:48:52 localhost podman[91101]: 2025-12-05 08:48:52.319975064 +0000 UTC m=+0.200689459 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, name=rhosp17/openstack-cron, container_name=logrotate_crond, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12) Dec 5 03:48:52 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
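The 08:48:52 burst above shows the cycle this log repeats for every container on the host: podman emits a health_status event when a probe runs, then an exec_died event when the probe process exits, and systemd reports the transient unit that drove the probe as "Deactivated successfully." A minimal sketch (a hypothetical helper, not part of the deployment) for pulling the event type, container id, name, and image out of such records, using only fields visible in the lines above:

    import re

    # Matches podman event records like:
    #   "... container health_status <64-hex id> (image=..., name=logrotate_crond, ...)"
    # Only the event type, container id, and the image=/name= labels are relied
    # on; the rest of the parenthesised label blob is treated as opaque.
    EVENT_RE = re.compile(
        r"container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
    )

    def parse_podman_event(line):
        """Return (event, name, image) for a podman event line, or None."""
        m = EVENT_RE.search(line)
        if m:
            return m.group("event"), m.group("name"), m.group("image")
        return None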
Dec 5 03:48:52 localhost podman[91100]: 2025-12-05 08:48:52.341878742 +0000 UTC m=+0.222826104 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Dec 5 03:48:52 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:48:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
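Each "Started /usr/bin/podman healthcheck run <id>" line marks systemd launching a transient service (named after the container id) that invokes the probe; the later "<id>.service: Deactivated successfully." lines refer to those transient units, not to the containers themselves. A sketch of invoking the same probe by hand, assuming podman is on PATH and the caller has access to the container storage:

    import subprocess

    def run_healthcheck(container_id):
        """Run the probe systemd triggers above: podman healthcheck run <id>.

        podman exits 0 when the container's configured healthcheck command
        reports healthy; a non-zero exit means unhealthy (or no healthcheck
        is defined for that container).
        """
        result = subprocess.run(["podman", "healthcheck", "run", container_id])
        return result.returncode == 0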
Dec 5 03:48:53 localhost podman[91171]: 2025-12-05 08:48:53.211397522 +0000 UTC m=+0.091220082 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 5 03:48:53 localhost podman[91171]: 2025-12-05 08:48:53.615722929 +0000 UTC m=+0.495545449 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z) Dec 5 03:48:53 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:48:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:48:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:48:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:48:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
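The config_data blob in each label dump is serialised as a Python literal (single quotes, True/False), so it can be recovered mechanically with ast.literal_eval once the balanced {...} span is isolated. A sketch, assuming no brace characters inside the quoted paths, which holds for every record in this log:

    import ast

    def extract_config_data(line):
        """Recover the config_data dict embedded in a podman label dump."""
        start = line.find("config_data={")
        if start < 0:
            return None
        i = start + len("config_data=")
        depth = 0
        for j in range(i, len(line)):
            if line[j] == "{":
                depth += 1
            elif line[j] == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[i:j + 1])
        return None  # unbalanced, i.e. the record was truncated

For the nova_migration_target record above, extract_config_data(record)['healthcheck']['test'] returns '/openstack/healthcheck', and ['volumes'] returns the full bind-mount list.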
Dec 5 03:48:58 localhost podman[91272]: 2025-12-05 08:48:58.223347584 +0000 UTC m=+0.105600131 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 5 03:48:58 localhost systemd[1]: tmp-crun.FUeHHD.mount: Deactivated successfully. 
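The volume entries in these config_data dumps follow the src:dst[:options] bind-mount convention; the options seen in this log are ro (read-only), rw, z (SELinux relabel with a shared label), and shared (mount propagation), sometimes combined as in '/run/openvswitch:/run/openvswitch:shared,z'. A small parsing sketch, assuming (as is true for every record here) that the paths themselves contain no colons:

    def parse_volume(spec):
        """Split a 'src:dst[:options]' volume spec from config_data['volumes']."""
        src, dst, *rest = spec.split(":")
        opts = rest[0].split(",") if rest else []
        return src, dst, opts

    print(parse_volume("/run/openvswitch:/run/openvswitch:shared,z"))
    # ('/run/openvswitch', '/run/openvswitch', ['shared', 'z'])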
Dec 5 03:48:58 localhost podman[91273]: 2025-12-05 08:48:58.324992653 +0000 UTC m=+0.201350600 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, architecture=x86_64, container_name=iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public) Dec 5 03:48:58 localhost podman[91278]: 2025-12-05 08:48:58.360713412 +0000 UTC m=+0.230565131 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Dec 5 03:48:58 localhost podman[91272]: 2025-12-05 08:48:58.386600691 +0000 UTC m=+0.268853288 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 5 03:48:58 localhost podman[91273]: 2025-12-05 08:48:58.387182239 +0000 UTC m=+0.263540226 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, version=17.1.12, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:48:58 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
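The config_id labels place each container in a TripleO deployment step, and start_order sequences containers within a step: in this log metrics_qdr is tripleo_step1, collectd and iscsid are tripleo_step3, the OVN and ceilometer agents are tripleo_step4, and nova_compute is tripleo_step5. A sketch reconstructing that ordering from values read off the label dumps above (start_order taken as 0 where config_data omits it):

    # (container_name, config_id, start_order) as stated in this log's labels.
    containers = [
        ("metrics_qdr",        "tripleo_step1", 1),
        ("collectd",           "tripleo_step3", 0),
        ("iscsid",             "tripleo_step3", 2),
        ("ovn_controller",     "tripleo_step4", 1),
        ("ovn_metadata_agent", "tripleo_step4", 1),
        ("nova_compute",       "tripleo_step5", 3),
    ]

    # Containers come up step by step, ordered by start_order within a step.
    for name, step, order in sorted(containers, key=lambda c: (c[1], c[2])):
        print(f"{step} start_order={order} {name}")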
Dec 5 03:48:58 localhost podman[91278]: 2025-12-05 08:48:58.439772552 +0000 UTC m=+0.309624271 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, architecture=x86_64, distribution-scope=public, container_name=ovn_controller, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4) Dec 5 03:48:58 localhost podman[91274]: 2025-12-05 08:48:58.291171652 +0000 UTC m=+0.165243709 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-collectd-container, container_name=collectd, version=17.1.12, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Dec 5 03:48:58 localhost podman[91274]: 2025-12-05 08:48:58.471427127 +0000 UTC m=+0.345499184 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=collectd, io.k8s.description=Red Hat OpenStack 
Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-collectd, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Dec 5 03:48:58 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:48:58 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:48:58 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:49:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:49:06 localhost podman[91354]: 2025-12-05 08:49:06.204332425 +0000 UTC m=+0.089497719 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, container_name=metrics_qdr, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 5 03:49:06 localhost podman[91354]: 2025-12-05 08:49:06.401683832 +0000 UTC m=+0.286849106 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, release=1761123044, container_name=metrics_qdr, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=) Dec 5 03:49:06 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:49:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:49:21 localhost systemd[1]: tmp-crun.7CyYZT.mount: Deactivated successfully. 
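The gap between successive probes of the same container is just over 30 seconds here (for example, ceilometer_agent_ipmi's exec_died at 08:48:52.269 and its next health_status at 08:49:23.227), which is consistent with podman's default 30-second healthcheck interval. A sketch for computing such gaps from the event timestamps, trimming the fractional seconds to the microseconds that fromisoformat accepts:

    from datetime import datetime

    def parse_event_time(ts):
        """Parse podman's '2025-12-05 08:48:52.26931731 +0000 UTC' stamps."""
        date, time = ts.split()[:2]
        whole, _, frac = time.partition(".")
        return datetime.fromisoformat(f"{date} {whole}.{frac[:6]:0<6}")

    t0 = parse_event_time("2025-12-05 08:48:52.26931731 +0000 UTC")
    t1 = parse_event_time("2025-12-05 08:49:23.227108399 +0000 UTC")
    print((t1 - t0).total_seconds())  # ~30.96s between successive probes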
Dec 5 03:49:21 localhost podman[91383]: 2025-12-05 08:49:21.218300185 +0000 UTC m=+0.102216577 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.12) Dec 5 03:49:21 localhost podman[91383]: 2025-12-05 08:49:21.254789548 +0000 UTC m=+0.138705940 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, url=https://www.redhat.com, container_name=nova_compute, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team) Dec 5 03:49:21 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:49:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
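Some healthcheck 'test' values in these dumps carry an argument: nova_compute probes with '/openstack/healthcheck 5672' and ovn_controller with '/openstack/healthcheck 6642'. The trailing number appears to be the TCP port the probe verifies (5672 is the AMQP port nova_compute talks to, 6642 the OVN southbound DB port) -- an inference, not something this log states. A sketch splitting the test strings, using values taken verbatim from the records above:

    import shlex

    tests = {
        "nova_compute":    "/openstack/healthcheck 5672",
        "ovn_controller":  "/openstack/healthcheck 6642",
        "logrotate_crond": "/usr/share/openstack-tripleo-common/healthcheck/cron",
    }

    for name, test in tests.items():
        cmd, *args = shlex.split(test)
        print(name, cmd, args)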
Dec 5 03:49:23 localhost systemd[1]: tmp-crun.VwwVRY.mount: Deactivated successfully. Dec 5 03:49:23 localhost podman[91410]: 2025-12-05 08:49:23.213293887 +0000 UTC m=+0.101131014 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, release=1761123044, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:49:23 localhost podman[91410]: 2025-12-05 08:49:23.245594393 +0000 UTC m=+0.133431530 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 5 03:49:23 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:49:23 localhost podman[91411]: 2025-12-05 08:49:23.266369686 +0000 UTC m=+0.147996764 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible) Dec 5 03:49:23 localhost podman[91411]: 2025-12-05 08:49:23.27274823 +0000 UTC m=+0.154375348 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12) Dec 5 03:49:23 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:49:23 localhost podman[91412]: 2025-12-05 08:49:23.227108399 +0000 UTC m=+0.106684254 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 5 03:49:23 localhost podman[91412]: 2025-12-05 08:49:23.358814674 +0000 UTC m=+0.238390529 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, tcib_managed=true, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vendor=Red Hat, Inc.) Dec 5 03:49:23 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:49:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:49:24 localhost podman[91484]: 2025-12-05 08:49:24.204672412 +0000 UTC m=+0.090280173 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target) Dec 5 03:49:24 localhost podman[91484]: 2025-12-05 08:49:24.578113137 +0000 UTC m=+0.463720838 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_migration_target, release=1761123044, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, 
architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z) Dec 5 03:49:24 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:49:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:49:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:49:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:49:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:49:29 localhost systemd[1]: tmp-crun.GEUsnO.mount: Deactivated successfully. Dec 5 03:49:29 localhost systemd[1]: tmp-crun.jf7VYr.mount: Deactivated successfully. 
Dec 5 03:49:29 localhost podman[91507]: 2025-12-05 08:49:29.199485011 +0000 UTC m=+0.079056001 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=) Dec 5 03:49:29 localhost podman[91509]: 2025-12-05 08:49:29.262775841 +0000 UTC m=+0.137041969 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:49:29 localhost podman[91509]: 2025-12-05 08:49:29.276612463 +0000 UTC m=+0.150878591 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, version=17.1.12, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:49:29 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:49:29 localhost podman[91511]: 2025-12-05 08:49:29.24701568 +0000 UTC m=+0.115315486 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, 
batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-18T23:34:05Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:49:29 localhost podman[91511]: 2025-12-05 08:49:29.330776124 +0000 UTC m=+0.199075900 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step4, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:49:29 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. 
Dec 5 03:49:29 localhost podman[91508]: 2025-12-05 08:49:29.413090973 +0000 UTC m=+0.288416894 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container) Dec 5 03:49:29 localhost podman[91507]: 2025-12-05 08:49:29.435999151 +0000 UTC m=+0.315570191 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4) Dec 5 03:49:29 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:49:29 localhost podman[91508]: 2025-12-05 08:49:29.452877647 +0000 UTC m=+0.328203568 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step3) Dec 5 03:49:29 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:49:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:49:37 localhost podman[91590]: 2025-12-05 08:49:37.197365685 +0000 UTC m=+0.085649763 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:49:37 localhost podman[91590]: 2025-12-05 08:49:37.397370562 +0000 UTC m=+0.285654630 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, batch=17.1_20251118.1) Dec 5 03:49:37 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:49:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:49:52 localhost podman[91619]: 2025-12-05 08:49:52.203318108 +0000 UTC m=+0.090161670 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044) Dec 5 03:49:52 localhost podman[91619]: 2025-12-05 08:49:52.236726067 +0000 UTC m=+0.123569639 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, release=1761123044, version=17.1.12) Dec 5 03:49:52 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:49:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:49:54 localhost systemd[1]: tmp-crun.XVMfn6.mount: Deactivated successfully. Dec 5 03:49:54 localhost podman[91646]: 2025-12-05 08:49:54.214919877 +0000 UTC m=+0.100739163 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 03:49:54 localhost systemd[1]: tmp-crun.GQnEBV.mount: Deactivated successfully. 
Dec 5 03:49:54 localhost podman[91647]: 2025-12-05 08:49:54.263359023 +0000 UTC m=+0.143318150 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:49:54 localhost podman[91647]: 2025-12-05 08:49:54.297965978 +0000 UTC m=+0.177925155 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, 
version=17.1.12, container_name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4) Dec 5 03:49:54 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:49:54 localhost podman[91646]: 2025-12-05 08:49:54.324723404 +0000 UTC m=+0.210542750 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=)
Dec 5 03:49:54 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully.
Dec 5 03:49:54 localhost podman[91648]: 2025-12-05 08:49:54.303338831 +0000 UTC m=+0.183520735 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 5 03:49:54 localhost podman[91648]: 2025-12-05 08:49:54.383053282 +0000 UTC m=+0.263235186 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, release=1761123044, distribution-scope=public)
Dec 5 03:49:54 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully.
Dec 5 03:49:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.
Dec 5 03:49:55 localhost systemd[1]: tmp-crun.RXFXPy.mount: Deactivated successfully.
Dec 5 03:49:55 localhost podman[91716]: 2025-12-05 08:49:55.218014518 +0000 UTC m=+0.103883848 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1761123044, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4) Dec 5 03:49:55 localhost podman[91716]: 2025-12-05 08:49:55.594027722 +0000 UTC m=+0.479897092 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:49:55 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:50:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:50:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:50:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:50:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:50:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:50:00 localhost recover_tripleo_nova_virtqemud[91842]: 61294 Dec 5 03:50:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:50:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 5 03:50:00 localhost podman[91817]: 2025-12-05 08:50:00.20337341 +0000 UTC m=+0.084074485 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:50:00 localhost systemd[1]: tmp-crun.qNs1EI.mount: Deactivated successfully. 
Dec 5 03:50:00 localhost podman[91820]: 2025-12-05 08:50:00.26010925 +0000 UTC m=+0.132824261 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ovn_controller, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044) Dec 5 03:50:00 localhost podman[91819]: 2025-12-05 08:50:00.30800948 +0000 UTC m=+0.183919098 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd) Dec 5 03:50:00 localhost podman[91819]: 2025-12-05 08:50:00.317643414 +0000 UTC m=+0.193553012 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:50:00 localhost podman[91820]: 2025-12-05 08:50:00.327084502 +0000 UTC m=+0.199799513 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller) Dec 5 03:50:00 localhost podman[91818]: 2025-12-05 08:50:00.22536568 +0000 UTC m=+0.103439014 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid) Dec 5 03:50:00 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:50:00 localhost podman[91817]: 2025-12-05 08:50:00.336850279 +0000 UTC m=+0.217551364 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public) Dec 5 03:50:00 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:50:00 localhost podman[91818]: 2025-12-05 08:50:00.361614985 +0000 UTC m=+0.239688319 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true) Dec 5 03:50:00 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:50:00 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:50:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:50:08 localhost systemd[1]: tmp-crun.23ZkLi.mount: Deactivated successfully. 
Dec 5 03:50:08 localhost podman[91903]: 2025-12-05 08:50:08.190614612 +0000 UTC m=+0.075548225 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=) Dec 5 03:50:08 localhost podman[91903]: 2025-12-05 08:50:08.435815757 +0000 UTC m=+0.320749370 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, release=1761123044, build-date=2025-11-18T22:49:46Z, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Dec 5 03:50:08 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:50:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:50:23 localhost systemd[1]: tmp-crun.YZhMmO.mount: Deactivated successfully. 
Dec 5 03:50:23 localhost podman[91931]: 2025-12-05 08:50:23.210471895 +0000 UTC m=+0.095127131 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, vcs-type=git, container_name=nova_compute, config_id=tripleo_step5, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 5 03:50:23 localhost podman[91931]: 2025-12-05 08:50:23.269736672 +0000 UTC m=+0.154391878 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute)
Dec 5 03:50:23 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully.
Dec 5 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.
Dec 5 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 03:50:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
Dec 5 03:50:25 localhost systemd[1]: tmp-crun.7o9Ekr.mount: Deactivated successfully. Dec 5 03:50:25 localhost podman[91959]: 2025-12-05 08:50:25.227053056 +0000 UTC m=+0.106232620 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4) Dec 5 03:50:25 localhost podman[91959]: 2025-12-05 08:50:25.267708565 +0000 UTC m=+0.146888139 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=logrotate_crond, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': 
{'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z) Dec 5 03:50:25 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:50:25 localhost podman[91958]: 2025-12-05 08:50:25.323198197 +0000 UTC m=+0.205224137 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com)
Dec 5 03:50:25 localhost podman[91960]: 2025-12-05 08:50:25.289916872 +0000 UTC m=+0.165924159 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 5 03:50:25 localhost podman[91958]: 2025-12-05 08:50:25.357548525 +0000 UTC m=+0.239574495 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container)
Dec 5 03:50:25 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully.
Dec 5 03:50:25 localhost podman[91960]: 2025-12-05 08:50:25.372655625 +0000 UTC m=+0.248662882 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, container_name=ceilometer_agent_ipmi) Dec 5 03:50:25 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:50:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:50:26 localhost podman[92031]: 2025-12-05 08:50:26.193838421 +0000 UTC m=+0.078726261 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.buildah.version=1.41.4, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64) Dec 5 03:50:26 localhost systemd[1]: tmp-crun.no42DV.mount: Deactivated successfully. 
Dec 5 03:50:26 localhost podman[92031]: 2025-12-05 08:50:26.566683278 +0000 UTC m=+0.451571148 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, container_name=nova_migration_target, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public) Dec 5 03:50:26 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:50:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:50:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:50:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:50:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:50:31 localhost systemd[1]: tmp-crun.VHXCO0.mount: Deactivated successfully. 
Dec 5 03:50:31 localhost podman[92054]: 2025-12-05 08:50:31.263386129 +0000 UTC m=+0.148510539 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:50:31 localhost podman[92055]: 2025-12-05 08:50:31.220643495 +0000 UTC m=+0.101319470 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.buildah.version=1.41.4, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:50:31 localhost podman[92055]: 2025-12-05 08:50:31.301457629 +0000 UTC m=+0.182133624 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, container_name=iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, version=17.1.12) Dec 5 03:50:31 localhost podman[92056]: 2025-12-05 08:50:31.313392983 +0000 UTC m=+0.191841340 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.openshift.expose-services=, name=rhosp17/openstack-collectd, architecture=x86_64, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:50:31 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:50:31 localhost podman[92056]: 2025-12-05 08:50:31.322740338 +0000 UTC m=+0.201188685 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Dec 5 03:50:31 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:50:31 localhost podman[92054]: 2025-12-05 08:50:31.361956084 +0000 UTC m=+0.247080494 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, release=1761123044, version=17.1.12, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc.) Dec 5 03:50:31 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:50:31 localhost podman[92059]: 2025-12-05 08:50:31.41565216 +0000 UTC m=+0.290183758 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12) Dec 5 03:50:31 localhost podman[92059]: 2025-12-05 08:50:31.444804379 +0000 UTC m=+0.319336017 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc.) Dec 5 03:50:31 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Deactivated successfully. Dec 5 03:50:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:50:39 localhost systemd[1]: tmp-crun.0OIlKK.mount: Deactivated successfully. Dec 5 03:50:39 localhost podman[92139]: 2025-12-05 08:50:39.203466101 +0000 UTC m=+0.090027736 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, io.openshift.expose-services=) Dec 5 03:50:39 localhost podman[92139]: 2025-12-05 08:50:39.424751367 +0000 UTC m=+0.311312972 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, container_name=metrics_qdr, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z) Dec 5 03:50:39 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:50:53 localhost sshd[92168]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:50:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:50:54 localhost podman[92170]: 2025-12-05 08:50:54.20094477 +0000 UTC m=+0.085990024 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) 
Dec 5 03:50:54 localhost podman[92170]: 2025-12-05 08:50:54.234722031 +0000 UTC m=+0.119767275 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:50:54 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:50:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. 
Dec 5 03:50:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:50:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:50:56 localhost systemd[1]: tmp-crun.CtqlIc.mount: Deactivated successfully. Dec 5 03:50:56 localhost podman[92197]: 2025-12-05 08:50:56.212297193 +0000 UTC m=+0.095364389 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, container_name=logrotate_crond, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:50:56 localhost podman[92197]: 2025-12-05 08:50:56.223696421 +0000 UTC m=+0.106763657 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, 
release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git) Dec 5 03:50:56 localhost podman[92198]: 2025-12-05 08:50:56.255952824 +0000 UTC m=+0.133160542 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, 
maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4) Dec 5 03:50:56 localhost podman[92196]: 2025-12-05 08:50:56.184931849 +0000 UTC m=+0.074115983 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vcs-type=git, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:50:56 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:50:56 localhost podman[92198]: 2025-12-05 08:50:56.311746846 +0000 UTC m=+0.188954564 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:50:56 localhost podman[92196]: 2025-12-05 08:50:56.318561985 +0000 UTC m=+0.207746099 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 5 03:50:56 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:50:56 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:50:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:50:57 localhost systemd[1]: tmp-crun.zkQzsP.mount: Deactivated successfully. 
Dec 5 03:50:57 localhost podman[92268]: 2025-12-05 08:50:57.191555924 +0000 UTC m=+0.077459633 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:50:57 localhost podman[92268]: 2025-12-05 08:50:57.557781695 +0000 UTC m=+0.443685424 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, container_name=nova_migration_target, config_id=tripleo_step4, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z) Dec 5 03:50:57 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:51:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:51:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:51:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:51:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:51:02 localhost systemd[1]: tmp-crun.N33BnB.mount: Deactivated successfully. Dec 5 03:51:02 localhost systemd[1]: tmp-crun.9anffl.mount: Deactivated successfully. 
Dec 5 03:51:02 localhost podman[92400]: 2025-12-05 08:51:02.207394562 +0000 UTC m=+0.083942792 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=iscsid, architecture=x86_64) Dec 5 03:51:02 localhost podman[92404]: 2025-12-05 08:51:02.23519557 +0000 UTC m=+0.099725423 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, container_name=ovn_controller, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller) Dec 5 03:51:02 localhost podman[92400]: 2025-12-05 08:51:02.295558521 +0000 UTC m=+0.172106741 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=iscsid, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, 
com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid) Dec 5 03:51:02 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:51:02 localhost podman[92401]: 2025-12-05 08:51:02.261983166 +0000 UTC m=+0.130906044 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:51:02 localhost podman[92404]: 2025-12-05 08:51:02.365134093 +0000 UTC m=+0.229664376 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, 
distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com) Dec 5 03:51:02 localhost podman[92404]: unhealthy Dec 5 03:51:02 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:51:02 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. 
Dec 5 03:51:02 localhost podman[92401]: 2025-12-05 08:51:02.39717297 +0000 UTC m=+0.266095898 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:51:02 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:51:02 localhost podman[92399]: 2025-12-05 08:51:02.368598959 +0000 UTC m=+0.245604592 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, release=1761123044, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:51:02 localhost podman[92399]: 2025-12-05 08:51:02.451273171 +0000 UTC m=+0.328278754 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_metadata_agent, architecture=x86_64, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.buildah.version=1.41.4) Dec 5 03:51:02 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. Dec 5 03:51:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:51:10 localhost podman[92504]: 2025-12-05 08:51:10.209641532 +0000 UTC m=+0.092677578 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4) Dec 5 03:51:10 localhost podman[92504]: 2025-12-05 08:51:10.417655117 +0000 UTC m=+0.300691203 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Dec 5 03:51:10 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:51:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:51:25 localhost podman[92533]: 2025-12-05 08:51:25.197419255 +0000 UTC m=+0.084458667 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible) Dec 5 03:51:25 localhost podman[92533]: 2025-12-05 08:51:25.229205005 +0000 UTC m=+0.116244407 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, 
build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, container_name=nova_compute, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:51:25 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:51:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:51:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:51:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:51:27 localhost systemd[1]: tmp-crun.kPw8qO.mount: Deactivated successfully. Dec 5 03:51:27 localhost podman[92558]: 2025-12-05 08:51:27.217350399 +0000 UTC m=+0.101925060 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:51:27 localhost podman[92559]: 2025-12-05 08:51:27.263905409 +0000 UTC m=+0.144856069 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, architecture=x86_64, name=rhosp17/openstack-cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4) Dec 5 03:51:27 localhost podman[92558]: 2025-12-05 08:51:27.277607177 +0000 UTC m=+0.162181778 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible) Dec 5 03:51:27 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:51:27 localhost podman[92559]: 2025-12-05 08:51:27.302671612 +0000 UTC m=+0.183622202 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Dec 5 03:51:27 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:51:27 localhost podman[92560]: 2025-12-05 08:51:27.37310151 +0000 UTC m=+0.245344575 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:51:27 localhost podman[92560]: 2025-12-05 08:51:27.403744835 +0000 UTC m=+0.275987860 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:12:45Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, version=17.1.12, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 5 03:51:27 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:51:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
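The exchange above is the healthcheck pattern that repeats throughout this log: systemd starts a transient unit named after the container ID, the unit execs /usr/bin/podman healthcheck run <id>, podman emits a health_status event (healthy here) followed by exec_died for the check process, and the unit reports "Deactivated successfully". A minimal sketch of driving one of these checks by hand, assuming podman is installed and reusing the ceilometer_agent_ipmi container ID from the log:

    import subprocess

    # Container ID taken from the log above (ceilometer_agent_ipmi).
    CONTAINER_ID = "bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f"

    # `podman healthcheck run` executes the container's configured test
    # (here '/openstack/healthcheck') and exits 0 on success; on failure it
    # prints "unhealthy" and exits non-zero, which is exactly what systemd
    # turns into a failed transient unit later in this log.
    result = subprocess.run(["podman", "healthcheck", "run", CONTAINER_ID])
    print("healthy" if result.returncode == 0 else "unhealthy")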
Dec 5 03:51:28 localhost podman[92633]: 2025-12-05 08:51:28.201562741 +0000 UTC m=+0.088088699 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1761123044, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:51:28 localhost podman[92633]: 2025-12-05 08:51:28.574589449 +0000 UTC m=+0.461115477 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=nova_migration_target, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container) Dec 5 03:51:28 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:51:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:51:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:51:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:51:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:51:33 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:51:33 localhost recover_tripleo_nova_virtqemud[92677]: 61294 Dec 5 03:51:33 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:51:33 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:51:33 localhost systemd[1]: tmp-crun.RO3KKR.mount: Deactivated successfully. Dec 5 03:51:33 localhost systemd[1]: tmp-crun.SNhmY5.mount: Deactivated successfully. 
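Between the healthchecks, tripleo_nova_virtqemud_recover.service runs as a oneshot "Check and recover" task and logs a bare PID (61294) before finishing. The sketch below is a hypothetical reconstruction of that check-and-recover pattern, not the actual TripleO script; the pidfile path and unit name are illustrative assumptions.

    import os
    import subprocess

    PIDFILE = "/run/virtqemud.pid"   # assumed location, for illustration only

    def pid_alive(pid: int) -> bool:
        try:
            os.kill(pid, 0)          # signal 0 probes existence without killing
            return True
        except ProcessLookupError:
            return False
        except PermissionError:
            return True              # process exists but belongs to another user

    pid = int(open(PIDFILE).read().split()[0])
    print(pid)                       # the real script logs a PID, as seen above
    if not pid_alive(pid):
        subprocess.run(["systemctl", "restart", "tripleo_nova_virtqemud.service"])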
Dec 5 03:51:33 localhost podman[92657]: 2025-12-05 08:51:33.222649699 +0000 UTC m=+0.103890800 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4) Dec 5 03:51:33 localhost podman[92659]: 2025-12-05 08:51:33.197932174 +0000 UTC m=+0.078837795 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 
collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, container_name=collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:51:28Z, tcib_managed=true, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) 
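Each health_status event carries the container's full label set, and the config_data label is printed in Python literal syntax (single-quoted strings, True/False), so it can be parsed directly with ast.literal_eval. A short sketch, using an abbreviated copy of the collectd config_data above; in practice the string would be cut from the log line:

    import ast

    # Abbreviated from the collectd config_data label above.
    CONFIG_DATA = ("{'cap_add': ['IPC_LOCK'], 'memory': '512m', "
                   "'healthcheck': {'test': '/openstack/healthcheck'}, "
                   "'volumes': ['/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}")

    cfg = ast.literal_eval(CONFIG_DATA)    # safe: literals only, no code execution
    print(cfg["healthcheck"]["test"])      # -> /openstack/healthcheck
    for vol in cfg["volumes"]:             # host:container[:options]
        host, target, *opts = vol.split(":")
        print(host, "->", target, opts)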
Dec 5 03:51:33 localhost podman[92657]: 2025-12-05 08:51:33.258782291 +0000 UTC m=+0.140023502 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:51:33 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Deactivated successfully. 
Dec 5 03:51:33 localhost podman[92659]: 2025-12-05 08:51:33.281600937 +0000 UTC m=+0.162506558 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, vcs-type=git) Dec 5 03:51:33 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
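The config_data label is a serialized form of what tripleo_ansible feeds to podman. As a hedged reconstruction (flag spellings are podman's CLI; the deployment generates the real command, which may differ in detail), the collectd entry above corresponds roughly to:

    # Rough podman invocation implied by the collectd config_data label;
    # volume list trimmed to two entries for brevity.
    cmd = [
        "podman", "run", "--detach", "--name", "collectd",
        "--net", "host", "--pid", "host", "--user", "root",
        "--memory", "512m", "--cap-add", "IPC_LOCK", "--restart", "always",
        "--env", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS",
        "--health-cmd", "/openstack/healthcheck",
        "--volume", "/run:/run:rw",
        "--volume", "/sys/fs/cgroup:/sys/fs/cgroup:ro",
        "registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1",
    ]
    print(" ".join(cmd))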
Dec 5 03:51:33 localhost podman[92660]: 2025-12-05 08:51:33.324086182 +0000 UTC m=+0.195863055 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller) Dec 5 03:51:33 localhost podman[92658]: 2025-12-05 08:51:33.330346863 +0000 UTC m=+0.209501340 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3) Dec 5 03:51:33 localhost podman[92658]: 2025-12-05 08:51:33.369800678 +0000 UTC m=+0.248955205 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, release=1761123044) Dec 5 03:51:33 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:51:33 localhost podman[92660]: 2025-12-05 08:51:33.422169194 +0000 UTC m=+0.293946107 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 5 03:51:33 localhost podman[92660]: unhealthy Dec 5 03:51:33 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:51:33 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 03:51:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:51:41 localhost systemd[1]: tmp-crun.yJaF4n.mount: Deactivated successfully. 
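Note the failure just above: the ovn_controller test ('/openstack/healthcheck 6642') printed "unhealthy", podman exited non-zero, and systemd recorded the transient unit as failed with status=1/FAILURE; the check's exit code is what the unit propagates to systemd, independent of the status string in the event labels. A sketch for reading back the state podman recorded afterwards; the JSON key has appeared as State.Health in Docker-compatible output and State.Healthcheck in some podman releases, so both are tried:

    import json
    import subprocess

    # Assumes podman is installed and the ovn_controller container exists.
    out = subprocess.run(
        ["podman", "inspect", "ovn_controller"],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status", "unknown"))   # e.g. "unhealthy"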
Dec 5 03:51:41 localhost podman[92750]: 2025-12-05 08:51:41.218581539 +0000 UTC m=+0.102082535 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git) Dec 5 03:51:41 localhost podman[92750]: 2025-12-05 08:51:41.427849842 +0000 UTC m=+0.311350888 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container) Dec 5 03:51:41 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:51:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:51:56 localhost podman[92781]: 2025-12-05 08:51:56.201459121 +0000 UTC m=+0.085682925 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:36:58Z, distribution-scope=public) Dec 5 03:51:56 localhost podman[92781]: 2025-12-05 08:51:56.239906023 +0000 UTC m=+0.124129827 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 5 03:51:56 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:51:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:51:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:51:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
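Two of the healthcheck tests above take a port argument: '/openstack/healthcheck 6642' for ovn_controller and '/openstack/healthcheck 5672' for nova_compute, i.e. the shared script can probe a TCP listener as part of the check. The snippet below is a hypothetical reimplementation of just that port probe, to illustrate the exit-code contract; the real /openstack/healthcheck script does more than this:

    import socket
    import sys

    # Hypothetical port probe in the spirit of '/openstack/healthcheck 5672'.
    port = int(sys.argv[1]) if len(sys.argv) > 1 else 5672
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=5):
            sys.exit(0)   # something is listening: report healthy
    except OSError:
        sys.exit(1)       # refused or timed out: report unhealthy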
Dec 5 03:51:58 localhost systemd[1]: tmp-crun.Af7eqn.mount: Deactivated successfully. Dec 5 03:51:58 localhost podman[92805]: 2025-12-05 08:51:58.215581648 +0000 UTC m=+0.099515467 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4) Dec 5 03:51:58 localhost podman[92805]: 2025-12-05 08:51:58.270499702 +0000 UTC m=+0.154433471 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64) Dec 5 03:51:58 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:51:58 localhost podman[92806]: 2025-12-05 08:51:58.355944999 +0000 UTC m=+0.238252208 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, container_name=logrotate_crond, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1) Dec 5 03:51:58 localhost podman[92806]: 2025-12-05 08:51:58.368930935 +0000 UTC m=+0.251238114 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-cron, release=1761123044, container_name=logrotate_crond, url=https://www.redhat.com) Dec 5 03:51:58 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:51:58 localhost podman[92807]: 2025-12-05 08:51:58.4615683 +0000 UTC m=+0.339348932 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:51:58 localhost podman[92807]: 2025-12-05 08:51:58.514887827 +0000 UTC m=+0.392668489 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:51:58 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:51:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:51:59 localhost systemd[1]: tmp-crun.OEq44L.mount: Deactivated successfully. Dec 5 03:51:59 localhost podman[92879]: 2025-12-05 08:51:59.208604367 +0000 UTC m=+0.092415170 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64) Dec 5 03:51:59 localhost podman[92879]: 2025-12-05 08:51:59.592981272 +0000 UTC m=+0.476792125 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, release=1761123044, container_name=nova_migration_target, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1) Dec 5 03:51:59 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:52:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:52:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:52:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:52:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:52:04 localhost systemd[1]: tmp-crun.FVerk4.mount: Deactivated successfully. 
Dec 5 03:52:04 localhost podman[92965]: 2025-12-05 08:52:04.204393733 +0000 UTC m=+0.081967831 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, container_name=collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=) Dec 5 03:52:04 localhost podman[92964]: 2025-12-05 08:52:04.251048286 +0000 UTC m=+0.130402318 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible) Dec 5 03:52:04 localhost podman[92965]: 2025-12-05 08:52:04.267163068 +0000 UTC m=+0.144737186 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:52:04 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:52:04 localhost podman[92964]: 2025-12-05 08:52:04.28394301 +0000 UTC m=+0.163297012 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, version=17.1.12, tcib_managed=true, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:52:04 localhost podman[92963]: 2025-12-05 08:52:04.314610936 +0000 UTC m=+0.191705120 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:52:04 localhost podman[92963]: 2025-12-05 08:52:04.355585955 +0000 UTC m=+0.232680129 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 5 03:52:04 localhost podman[92963]: unhealthy
Dec 5 03:52:04 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:52:04 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'.
Dec 5 03:52:04 localhost podman[92966]: 2025-12-05 08:52:04.365804217 +0000 UTC m=+0.238923529 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z) Dec 5 03:52:04 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:52:04 localhost podman[92966]: 2025-12-05 08:52:04.404687133 +0000 UTC m=+0.277806405 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 5 03:52:04 localhost podman[92966]: unhealthy
Dec 5 03:52:04 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:52:04 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'.
Dec 5 03:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 03:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.1 total, 600.0 interval
Cumulative writes: 4751 writes, 21K keys, 4751 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 4751 writes, 573 syncs, 8.29 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 03:52:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 03:52:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.2 total, 600.0 interval
Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 5843 writes, 832 syncs, 7.02 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 03:52:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:52:12 localhost podman[93057]: 2025-12-05 08:52:12.209273566 +0000 UTC m=+0.092737470 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, container_name=metrics_qdr,
managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z) Dec 5 03:52:12 localhost podman[93057]: 2025-12-05 08:52:12.416522518 +0000 UTC m=+0.299986402 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:52:12 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
Dec 5 03:52:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:52:27 localhost podman[93086]: 2025-12-05 08:52:27.213964613 +0000 UTC m=+0.093279606 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-nova-compute) Dec 5 03:52:27 localhost podman[93086]: 2025-12-05 08:52:27.244630888 +0000 UTC m=+0.123945901 container exec_died 
34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:52:27 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:52:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:52:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. 
Dec 5 03:52:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:52:29 localhost podman[93112]: 2025-12-05 08:52:29.217835176 +0000 UTC m=+0.099986680 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4) Dec 5 03:52:29 localhost systemd[1]: tmp-crun.nd5uXN.mount: Deactivated successfully. 
Dec 5 03:52:29 localhost podman[93113]: 2025-12-05 08:52:29.263095407 +0000 UTC m=+0.140768755 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, architecture=x86_64, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:52:29 localhost podman[93113]: 2025-12-05 08:52:29.301907911 +0000 UTC m=+0.179581259 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:52:29 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:52:29 localhost podman[93114]: 2025-12-05 08:52:29.321447207 +0000 UTC m=+0.192826423 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12) Dec 5 03:52:29 localhost podman[93112]: 2025-12-05 08:52:29.340025754 +0000 UTC m=+0.222177308 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, release=1761123044, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=ceilometer_agent_compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:52:29 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: 
Deactivated successfully. Dec 5 03:52:29 localhost podman[93114]: 2025-12-05 08:52:29.380880691 +0000 UTC m=+0.252259997 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 5 03:52:29 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:52:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:52:30 localhost podman[93184]: 2025-12-05 08:52:30.195010013 +0000 UTC m=+0.072189723 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team) Dec 5 03:52:30 localhost systemd[1]: tmp-crun.sU9R9Z.mount: Deactivated successfully. 
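The health_status=healthy field in that event is podman's stored health state, which can also be read back with inspect; the field layout mirrors Docker's. A small reader, assuming the podman CLI and a container that actually defines a healthcheck:

    import json
    import subprocess

    def health_status(container: str) -> str:
        # .State.Health.Status is "starting", "healthy", or "unhealthy".
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", container],
            capture_output=True, text=True, check=True,
        ).stdout
        data = json.loads(out) or {}
        return data.get("Status", "unknown")

    print(health_status("nova_migration_target"))
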
Dec 5 03:52:30 localhost podman[93184]: 2025-12-05 08:52:30.603806823 +0000 UTC m=+0.480986543 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, release=1761123044, io.buildah.version=1.41.4) Dec 5 03:52:30 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:52:35 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
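The "Check and recover tripleo_nova_virtqemud" job starting here is a watchdog unit TripleO pairs with the modular libvirt daemons; the next entries show it logging a PID and finishing without intervening. The actual recovery script is not visible in this log, so the following is only a sketch of the general check-then-restart pattern, with the unit name inferred from the job description:

    import subprocess

    UNIT = "tripleo_nova_virtqemud.service"  # inferred from the log entry

    def is_active(unit: str) -> bool:
        # `systemctl is-active --quiet` exits 0 only for an active unit.
        return subprocess.run(
            ["systemctl", "is-active", "--quiet", unit]
        ).returncode == 0

    def recover(unit: str) -> None:
        if not is_active(unit):
            # Restart the daemon if the watchdog finds it down.
            subprocess.run(["systemctl", "restart", unit], check=True)

    recover(UNIT)
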
Dec 5 03:52:35 localhost recover_tripleo_nova_virtqemud[93229]: 61294 Dec 5 03:52:35 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:52:35 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 03:52:35 localhost systemd[1]: tmp-crun.29kwE8.mount: Deactivated successfully. Dec 5 03:52:35 localhost systemd[1]: tmp-crun.c6xEO9.mount: Deactivated successfully. Dec 5 03:52:35 localhost podman[93216]: 2025-12-05 08:52:35.241343352 +0000 UTC m=+0.108232663 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, version=17.1.12, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:52:35 localhost podman[93209]: 2025-12-05 08:52:35.197494684 +0000 UTC m=+0.076727142 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git) Dec 5 03:52:35 localhost podman[93208]: 2025-12-05 08:52:35.259383892 +0000 UTC m=+0.140945770 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 5 03:52:35 localhost podman[93208]: 2025-12-05 08:52:35.280737723 +0000 UTC m=+0.162299641 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, release=1761123044, vendor=Red Hat, 
Inc., version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z) Dec 5 03:52:35 localhost podman[93209]: 2025-12-05 08:52:35.283685853 +0000 UTC m=+0.162918321 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, config_id=tripleo_step3, tcib_managed=true, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:52:35 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
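Each successful check leaves the same fingerprint in this log: a container health_status event carrying the result, a container exec_died event when the exec session ends, and the transient unit deactivating. A parser keyed to exactly the field layout shown here (illustrative only; live events are easier to consume from podman's own JSON event stream):

    import re

    # Matches e.g. "container health_status <64-hex id> (image=..., name=...,
    # health_status=healthy, ...)"; exec_died events carry no status field.
    EVENT = re.compile(
        r"container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+)"
        r"(?:, health_status=(?P<status>[a-z]+))?"
    )

    def scan(lines):
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield m.group("event"), m.group("name"), m.group("status")

    sample = ("container health_status 94e41b96a29b9068139ca21e4160db7bd5b40"
              "14ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/"
              "openstack-nova-compute:17.1, name=nova_migration_target, "
              "health_status=healthy, tcib_managed=true)")
    print(list(scan([sample])))
    # [('health_status', 'nova_migration_target', 'healthy')]
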
Dec 5 03:52:35 localhost podman[93216]: 2025-12-05 08:52:35.324642682 +0000 UTC m=+0.191531923 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 5 03:52:35 localhost podman[93216]: unhealthy Dec 5 03:52:35 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:52:35 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 03:52:35 localhost podman[93208]: unhealthy Dec 5 03:52:35 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:52:35 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
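This is the failure path in full: ovn_controller's check is configured above as 'test': '/openstack/healthcheck 6642', the script judged the service unhealthy, podman printed "unhealthy" and exited 1, and systemd failed the transient unit. The same happened to ovn_metadata_agent, whose health_status event a few lines up still read healthy even though the run exited non-zero. The numeric argument to the kolla healthcheck script appears to be the port the check targets (6642 is the OVN southbound DB; compare nova_compute's '/openstack/healthcheck 5672', the AMQP port). A rough stand-in for such a probe, not the real script:

    import socket

    def port_probe(port: int, host: str = "127.0.0.1", timeout: float = 2.0) -> bool:
        # True when something accepts a TCP connection on the port,
        # roughly what a connection-based healthcheck establishes.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # 6642 is the OVN southbound DB port named in the ovn_controller check.
    print("healthy" if port_probe(6642) else "unhealthy")
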
Dec 5 03:52:35 localhost podman[93210]: 2025-12-05 08:52:35.427365896 +0000 UTC m=+0.303071546 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-collectd-container, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:52:35 localhost podman[93210]: 2025-12-05 08:52:35.440649761 +0000 UTC m=+0.316355391 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 
17.1_20251118.1, container_name=collectd, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 5 03:52:35 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:52:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:52:43 localhost podman[93286]: 2025-12-05 08:52:43.200707666 +0000 UTC m=+0.085686335 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:52:43 localhost podman[93286]: 2025-12-05 08:52:43.431813035 +0000 UTC m=+0.316791724 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:52:43 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:52:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:52:58 localhost systemd[1]: tmp-crun.FZtwwL.mount: Deactivated successfully. 
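Timestamps give the effective cadence: each container's healthcheck fires roughly every 30 seconds here (nova_migration_target at 03:52:30 and 03:53:01, ovn_controller at 03:52:35 and 03:53:06), consistent with podman's default healthcheck interval. A quick way to compute per-container gaps from lines like these:

    from datetime import datetime

    def intervals(events):
        # events: (syslog timestamp, container id) pairs taken from the
        # "Started /usr/bin/podman healthcheck run ..." lines.
        last = {}
        for stamp, cid in events:
            t = datetime.strptime(stamp + " 2025", "%b %d %H:%M:%S %Y")
            if cid in last:
                yield cid[:12], (t - last[cid]).total_seconds()
            last[cid] = t

    evs = [("Dec 5 03:52:35", "f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd"),
           ("Dec 5 03:53:06", "f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd")]
    print(list(intervals(evs)))  # [('f295b469f636', 31.0)]
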
Dec 5 03:52:58 localhost podman[93314]: 2025-12-05 08:52:58.197651977 +0000 UTC m=+0.079945819 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, container_name=nova_compute, config_id=tripleo_step5, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Dec 5 03:52:58 localhost podman[93314]: 2025-12-05 08:52:58.222404182 +0000 UTC m=+0.104698034 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:52:58 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:53:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:53:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:53:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
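The three Started lines above fire in the same second, and each check runs under its own transient service named after the full 64-hex-digit container ID (the "<id>.service: Deactivated successfully." entries throughout this log are those units finishing). Listing them is a convenient way to see which containers have scheduled healthchecks; a sketch assuming systemctl's plain output:

    import re
    import subprocess

    HEX64 = re.compile(r"\b[0-9a-f]{64}\.(?:service|timer)\b")

    def healthcheck_units():
        out = subprocess.run(
            ["systemctl", "list-units", "--all", "--no-legend", "--plain"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines():
            m = HEX64.search(line)
            if m:
                yield m.group(0)

    for unit in healthcheck_units():
        print(unit)
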
Dec 5 03:53:00 localhost podman[93341]: 2025-12-05 08:53:00.203659456 +0000 UTC m=+0.084103476 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true) Dec 5 03:53:00 localhost podman[93341]: 2025-12-05 08:53:00.239577722 +0000 UTC m=+0.120021742 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, 
vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4) Dec 5 03:53:00 localhost systemd[1]: tmp-crun.t4tJTt.mount: Deactivated successfully. Dec 5 03:53:00 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:53:00 localhost podman[93340]: 2025-12-05 08:53:00.262290054 +0000 UTC m=+0.146330884 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:53:00 localhost podman[93340]: 2025-12-05 08:53:00.293714063 +0000 UTC m=+0.177754883 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:53:00 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:53:00 localhost podman[93342]: 2025-12-05 08:53:00.324829302 +0000 UTC m=+0.203030624 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, architecture=x86_64, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 03:53:00 localhost podman[93342]: 2025-12-05 08:53:00.355619821 +0000 UTC m=+0.233821133 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:53:00 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:53:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
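In podman's timestamps the m=+... suffix is Go's monotonic-clock offset for that process, so within one podman PID the offset difference between a health_status event and its exec_died is the wall-clock duration of the check. For the nova_migration_target run logged next (podman[93410], m=+0.083525339 to m=+0.458142485) that is about 0.37 s:

    def check_duration(start_m: float, end_m: float) -> float:
        # Both arguments are m=+... monotonic offsets from the same
        # podman process, so their difference is the elapsed time of
        # the healthcheck exec session.
        return end_m - start_m

    print(f"{check_duration(0.083525339, 0.458142485):.3f}s")  # 0.375s
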
Dec 5 03:53:01 localhost podman[93410]: 2025-12-05 08:53:01.202323579 +0000 UTC m=+0.083525339 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:36:58Z, vcs-type=git, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:53:01 localhost podman[93410]: 2025-12-05 08:53:01.576940735 +0000 UTC m=+0.458142485 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, container_name=nova_migration_target, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 5 03:53:01 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:53:06 localhost systemd[1]: tmp-crun.6fD5XL.mount: Deactivated successfully. 
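The records above show the steady-state pattern on this host: systemd starts a transient "/usr/bin/podman healthcheck run <container-id>" unit, podman emits a "container health_status" event carrying the image's full label set, then an "exec_died" event, and the unit deactivates. A minimal sketch for tallying those results per container; the log path and regexes are illustrative, and journalctl-style one-record-per-line output is assumed:

    # Tally podman healthcheck outcomes per container name.
    # Assumes one journal record per line; the log path is an assumption.
    import re
    from collections import Counter

    HEALTH = re.compile(r"container health_status [0-9a-f]{64}")
    NAME = re.compile(r", name=([A-Za-z0-9_.-]+)")   # first name= label is the container name
    STATUS = re.compile(r"health_status=(\w+)")

    counts = Counter()
    with open("/var/log/messages") as fh:            # assumed location of this log
        for line in fh:
            if not HEALTH.search(line):
                continue
            name, status = NAME.search(line), STATUS.search(line)
            if name and status:
                counts[(name.group(1), status.group(1))] += 1

    for (name, status), n in sorted(counts.items()):
        print(f"{name:25s} {status:10s} {n}")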
Dec 5 03:53:06 localhost podman[93440]: 2025-12-05 08:53:06.21533262 +0000 UTC m=+0.088699797 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, tcib_managed=true) Dec 5 03:53:06 localhost podman[93434]: 2025-12-05 08:53:06.285196641 +0000 UTC m=+0.163860108 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step3) Dec 5 03:53:06 localhost podman[93434]: 2025-12-05 08:53:06.29399962 +0000 UTC m=+0.172663137 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, version=17.1.12, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:53:06 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:53:06 localhost podman[93433]: 2025-12-05 08:53:06.254371091 +0000 UTC m=+0.136941478 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team) Dec 5 03:53:06 localhost podman[93433]: 2025-12-05 08:53:06.340747716 +0000 UTC m=+0.223318063 container 
exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Dec 5 03:53:06 localhost podman[93432]: 2025-12-05 08:53:06.299125976 +0000 UTC m=+0.186340964 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:53:06 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:53:06 localhost podman[93432]: 2025-12-05 08:53:06.383866691 +0000 UTC m=+0.271081669 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:53:06 localhost podman[93432]: unhealthy Dec 5 03:53:06 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:53:06 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
Dec 5 03:53:06 localhost podman[93440]: 2025-12-05 08:53:06.394847046 +0000 UTC m=+0.268214213 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, version=17.1.12, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com) Dec 5 03:53:06 localhost podman[93440]: unhealthy Dec 5 03:53:06 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:53:06 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. 
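Two of the four checks in this batch fail, in two different ways: the ovn_controller record already carries health_status=unhealthy, while the ovn_metadata_agent record still says health_status=healthy but podman prints "unhealthy" on stdout and exits non-zero. Either way, systemd only sees the exit status of "podman healthcheck run" (conventionally 0 for healthy, non-zero otherwise), so both transient units end in "status=1/FAILURE ... Failed with result 'exit-code'". A sketch that surfaces those failures, under the same assumed log path:

    # Report healthcheck units that systemd marked failed.
    # Unit names are <container-id>.service, as in the records above.
    import re

    FAILED = re.compile(
        r"systemd\[1\]: ([0-9a-f]{64})\.service: "
        r"Main process exited, code=exited, status=(\d+)"
    )

    with open("/var/log/messages") as fh:   # assumed location
        for line in fh:
            m = FAILED.search(line)
            if m:
                cid, status = m.groups()
                print(f"healthcheck for {cid[:12]} exited with status {status}")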
Dec 5 03:53:11 localhost podman[93643]: Dec 5 03:53:11 localhost podman[93643]: 2025-12-05 08:53:11.045333171 +0000 UTC m=+0.082306921 container create 40de0da4e916e5072ce9aba9c72e6ae428ca689025fbf3f7a9c6bb6903be1496 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_chatelet, description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 5 03:53:11 localhost systemd[1]: Started libpod-conmon-40de0da4e916e5072ce9aba9c72e6ae428ca689025fbf3f7a9c6bb6903be1496.scope. Dec 5 03:53:11 localhost systemd[1]: Started libcrun container. Dec 5 03:53:11 localhost podman[93643]: 2025-12-05 08:53:11.010790458 +0000 UTC m=+0.047764248 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 03:53:11 localhost podman[93643]: 2025-12-05 08:53:11.124689222 +0000 UTC m=+0.161662962 container init 40de0da4e916e5072ce9aba9c72e6ae428ca689025fbf3f7a9c6bb6903be1496 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_chatelet, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, release=1763362218, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, architecture=x86_64) Dec 5 03:53:11 localhost systemd[1]: libpod-40de0da4e916e5072ce9aba9c72e6ae428ca689025fbf3f7a9c6bb6903be1496.scope: Deactivated successfully. 
Dec 5 03:53:11 localhost podman[93643]: 2025-12-05 08:53:11.136999567 +0000 UTC m=+0.173973317 container start 40de0da4e916e5072ce9aba9c72e6ae428ca689025fbf3f7a9c6bb6903be1496 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_chatelet, name=rhceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_CLEAN=True, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:53:11 localhost great_chatelet[93658]: 167 167 Dec 5 03:53:11 localhost podman[93643]: 2025-12-05 08:53:11.138486733 +0000 UTC m=+0.175460473 container attach 40de0da4e916e5072ce9aba9c72e6ae428ca689025fbf3f7a9c6bb6903be1496 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_chatelet, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, release=1763362218, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, version=7) Dec 5 03:53:11 localhost podman[93643]: 2025-12-05 08:53:11.141061251 +0000 UTC m=+0.178035021 container died 40de0da4e916e5072ce9aba9c72e6ae428ca689025fbf3f7a9c6bb6903be1496 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_chatelet, distribution-scope=public, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, version=7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=1763362218, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 5 03:53:11 localhost podman[93663]: 2025-12-05 08:53:11.249344094 +0000 UTC m=+0.096335989 container remove 40de0da4e916e5072ce9aba9c72e6ae428ca689025fbf3f7a9c6bb6903be1496 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_chatelet, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, release=1763362218, ceph=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True) Dec 5 03:53:11 localhost systemd[1]: libpod-conmon-40de0da4e916e5072ce9aba9c72e6ae428ca689025fbf3f7a9c6bb6903be1496.scope: Deactivated successfully. Dec 5 03:53:11 localhost podman[93683]: Dec 5 03:53:11 localhost podman[93683]: 2025-12-05 08:53:11.46162777 +0000 UTC m=+0.075905647 container create 8f64494ca560c03186f4435b190202c8c36b1c5ae3018feb65593761e19a17a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_knuth, distribution-scope=public, ceph=True, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, CEPH_POINT_RELEASE=, vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:53:11 localhost systemd[1]: Started libpod-conmon-8f64494ca560c03186f4435b190202c8c36b1c5ae3018feb65593761e19a17a5.scope. 
Dec 5 03:53:11 localhost systemd[1]: Started libcrun container. Dec 5 03:53:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdba4f083bc33121ad64d05ede56cb0ff757c2e9873e6802708e0af6df223b96/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 03:53:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdba4f083bc33121ad64d05ede56cb0ff757c2e9873e6802708e0af6df223b96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 03:53:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdba4f083bc33121ad64d05ede56cb0ff757c2e9873e6802708e0af6df223b96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 03:53:11 localhost podman[93683]: 2025-12-05 08:53:11.52819568 +0000 UTC m=+0.142473547 container init 8f64494ca560c03186f4435b190202c8c36b1c5ae3018feb65593761e19a17a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_knuth, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vendor=Red Hat, Inc.) 
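The three kernel lines above flag overlay mounts backed by xfs formatted with 32-bit inode timestamps (likely without the xfs bigtime feature), which run out at 0x7fffffff seconds after the epoch. A quick check of what that limit means:

    # 0x7fffffff is the classic 32-bit time_t rollover the kernel warns about.
    import datetime
    limit = datetime.datetime.fromtimestamp(0x7FFFFFFF, datetime.timezone.utc)
    print(limit)    # 2038-01-19 03:14:07+00:00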
Dec 5 03:53:11 localhost podman[93683]: 2025-12-05 08:53:11.431435858 +0000 UTC m=+0.045713765 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 03:53:11 localhost podman[93683]: 2025-12-05 08:53:11.538161354 +0000 UTC m=+0.152439241 container start 8f64494ca560c03186f4435b190202c8c36b1c5ae3018feb65593761e19a17a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_knuth, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, name=rhceph) Dec 5 03:53:11 localhost podman[93683]: 2025-12-05 08:53:11.538569716 +0000 UTC m=+0.152847593 container attach 8f64494ca560c03186f4435b190202c8c36b1c5ae3018feb65593761e19a17a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_knuth, architecture=x86_64, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, release=1763362218, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z) Dec 5 03:53:12 localhost systemd[1]: var-lib-containers-storage-overlay-f979359700b1b39f5e936c2c259d6588b3650c0c448965f2a8a413ff032015eb-merged.mount: Deactivated successfully. 
Dec 5 03:53:12 localhost elegant_knuth[93698]: [
Dec 5 03:53:12 localhost elegant_knuth[93698]: {
Dec 5 03:53:12 localhost elegant_knuth[93698]: "available": false,
Dec 5 03:53:12 localhost elegant_knuth[93698]: "ceph_device": false,
Dec 5 03:53:12 localhost elegant_knuth[93698]: "device_id": "QEMU_DVD-ROM_QM00001",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "lsm_data": {},
Dec 5 03:53:12 localhost elegant_knuth[93698]: "lvs": [],
Dec 5 03:53:12 localhost elegant_knuth[93698]: "path": "/dev/sr0",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "rejected_reasons": [
Dec 5 03:53:12 localhost elegant_knuth[93698]: "Has a FileSystem",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "Insufficient space (<5GB)"
Dec 5 03:53:12 localhost elegant_knuth[93698]: ],
Dec 5 03:53:12 localhost elegant_knuth[93698]: "sys_api": {
Dec 5 03:53:12 localhost elegant_knuth[93698]: "actuators": null,
Dec 5 03:53:12 localhost elegant_knuth[93698]: "device_nodes": "sr0",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "human_readable_size": "482.00 KB",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "id_bus": "ata",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "model": "QEMU DVD-ROM",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "nr_requests": "2",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "partitions": {},
Dec 5 03:53:12 localhost elegant_knuth[93698]: "path": "/dev/sr0",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "removable": "1",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "rev": "2.5+",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "ro": "0",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "rotational": "1",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "sas_address": "",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "sas_device_handle": "",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "scheduler_mode": "mq-deadline",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "sectors": 0,
Dec 5 03:53:12 localhost elegant_knuth[93698]: "sectorsize": "2048",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "size": 493568.0,
Dec 5 03:53:12 localhost elegant_knuth[93698]: "support_discard": "0",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "type": "disk",
Dec 5 03:53:12 localhost elegant_knuth[93698]: "vendor": "QEMU"
Dec 5 03:53:12 localhost elegant_knuth[93698]: }
Dec 5 03:53:12 localhost elegant_knuth[93698]: }
Dec 5 03:53:12 localhost elegant_knuth[93698]: ]
Dec 5 03:53:12 localhost systemd[1]: libpod-8f64494ca560c03186f4435b190202c8c36b1c5ae3018feb65593761e19a17a5.scope: Deactivated successfully.
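This output is device-inventory JSON in the style of "ceph-volume inventory", which suggests the one-shot rhceph container is a cephadm probe: the only device it reports, /dev/sr0, is the QEMU DVD-ROM, rejected as an OSD candidate because it already carries a filesystem and is under 5 GB. Reassembling and filtering that output from the log, under the same assumptions as the earlier sketches:

    # Rebuild the JSON printed by the one-shot container and list rejected devices.
    # "elegant_knuth" is the random name podman assigned in the records above.
    import json
    import re

    TAG = re.compile(r"elegant_knuth\[\d+\]: (.*)$")

    fragments = []
    with open("/var/log/messages") as fh:   # assumed location
        for line in fh:
            m = TAG.search(line)
            if m:
                fragments.append(m.group(1))

    for dev in json.loads("\n".join(fragments)):
        if not dev["available"]:
            print(dev["path"], "rejected:", ", ".join(dev["rejected_reasons"]))
    # -> /dev/sr0 rejected: Has a FileSystem, Insufficient space (<5GB)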
Dec 5 03:53:12 localhost podman[93683]: 2025-12-05 08:53:12.774985761 +0000 UTC m=+1.389263608 container died 8f64494ca560c03186f4435b190202c8c36b1c5ae3018feb65593761e19a17a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_knuth, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph, ceph=True, release=1763362218, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public) Dec 5 03:53:12 localhost systemd[1]: libpod-8f64494ca560c03186f4435b190202c8c36b1c5ae3018feb65593761e19a17a5.scope: Consumed 1.308s CPU time. Dec 5 03:53:12 localhost systemd[1]: var-lib-containers-storage-overlay-bdba4f083bc33121ad64d05ede56cb0ff757c2e9873e6802708e0af6df223b96-merged.mount: Deactivated successfully. Dec 5 03:53:12 localhost podman[95796]: 2025-12-05 08:53:12.888962936 +0000 UTC m=+0.099150104 container remove 8f64494ca560c03186f4435b190202c8c36b1c5ae3018feb65593761e19a17a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_knuth, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 03:53:12 localhost systemd[1]: libpod-conmon-8f64494ca560c03186f4435b190202c8c36b1c5ae3018feb65593761e19a17a5.scope: Deactivated successfully. Dec 5 03:53:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
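Both rhceph containers follow the same one-shot arc within roughly a second: create, init, start, attach, died, remove, each under its own libpod-conmon scope, the way cephadm typically execs short-lived probes (the first one, great_chatelet, printed "167 167", plausibly a uid/gid check: 167 is the ceph user and group on RHEL). Pairing create and remove events gives per-container lifetimes; a sketch under the same assumptions:

    # Pair container create/remove events to measure one-shot container lifetimes.
    import re
    from datetime import datetime

    EV = re.compile(
        r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+\S+ "
        r"container (create|remove) ([0-9a-f]{64})"
    )

    created = {}
    with open("/var/log/messages") as fh:   # assumed location
        for line in fh:
            for ts, kind, cid in EV.findall(line):
                t = datetime.strptime(ts[:26], "%Y-%m-%d %H:%M:%S.%f")  # trim to µs
                if kind == "create":
                    created[cid] = t
                elif cid in created:
                    print(f"{cid[:12]} lived {(t - created.pop(cid)).total_seconds():.2f}s")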
Dec 5 03:53:14 localhost podman[95826]: 2025-12-05 08:53:14.24400559 +0000 UTC m=+0.120206188 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:53:14 localhost podman[95826]: 2025-12-05 08:53:14.438452651 +0000 UTC m=+0.314653259 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:53:14 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:53:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:53:29 localhost podman[95853]: 2025-12-05 08:53:29.20414717 +0000 UTC m=+0.089337976 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, distribution-scope=public, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container) Dec 5 03:53:29 localhost podman[95853]: 2025-12-05 08:53:29.263844182 +0000 UTC m=+0.149035028 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true) Dec 5 03:53:29 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:53:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:53:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:53:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 03:53:31 localhost systemd[1]: tmp-crun.zT1KXM.mount: Deactivated successfully. Dec 5 03:53:31 localhost podman[95881]: 2025-12-05 08:53:31.203309331 +0000 UTC m=+0.084271051 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:53:31 localhost podman[95881]: 2025-12-05 08:53:31.211665576 +0000 UTC m=+0.092627256 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:53:31 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:53:31 localhost podman[95882]: 2025-12-05 08:53:31.260393292 +0000 UTC m=+0.138670780 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, tcib_managed=true, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 03:53:31 localhost podman[95880]: 2025-12-05 08:53:31.311931365 +0000 UTC m=+0.195786464 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, 
config_id=tripleo_step4) Dec 5 03:53:31 localhost podman[95882]: 2025-12-05 08:53:31.365750776 +0000 UTC m=+0.244028314 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, release=1761123044, container_name=ceilometer_agent_ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4) Dec 5 03:53:31 localhost podman[95880]: 2025-12-05 08:53:31.372879654 +0000 UTC m=+0.256734763 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, 
architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:53:31 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:53:31 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:53:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:53:32 localhost podman[95951]: 2025-12-05 08:53:32.198464226 +0000 UTC m=+0.084767186 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64) Dec 5 03:53:32 localhost podman[95951]: 2025-12-05 08:53:32.598944462 +0000 UTC m=+0.485247422 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 
nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:53:32 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:53:37 localhost podman[95976]: 2025-12-05 08:53:37.254890901 +0000 UTC m=+0.140722833 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 5 03:53:37 localhost podman[95975]: 2025-12-05 08:53:37.217967495 +0000 UTC m=+0.102865659 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc.) 
Dec 5 03:53:37 localhost podman[95977]: 2025-12-05 08:53:37.183300408 +0000 UTC m=+0.067582462 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 03:53:37 localhost podman[95974]: 2025-12-05 08:53:37.306149715 +0000 UTC m=+0.198215447 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=ovn_metadata_agent) Dec 5 03:53:37 localhost podman[95977]: 2025-12-05 08:53:37.315024806 +0000 UTC m=+0.199306850 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 5 03:53:37 localhost podman[95977]: unhealthy Dec 5 03:53:37 localhost podman[95974]: 2025-12-05 08:53:37.322166654 +0000 UTC m=+0.214232376 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:53:37 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:53:37 localhost systemd[1]: 
f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 03:53:37 localhost podman[95974]: unhealthy Dec 5 03:53:37 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:53:37 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 03:53:37 localhost podman[95976]: 2025-12-05 08:53:37.36793464 +0000 UTC m=+0.253766512 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 5 03:53:37 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:53:37 localhost podman[95975]: 2025-12-05 08:53:37.448172877 +0000 UTC m=+0.333070961 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, container_name=iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:53:37 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:53:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:53:45 localhost systemd[1]: tmp-crun.Rf35fM.mount: Deactivated successfully. 
Dec 5 03:53:45 localhost podman[96049]: 2025-12-05 08:53:45.211688886 +0000 UTC m=+0.098273259 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=metrics_qdr, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd) Dec 5 03:53:45 localhost podman[96049]: 2025-12-05 08:53:45.430597094 +0000 UTC m=+0.317181447 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:53:45 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:54:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:54:00 localhost systemd[1]: tmp-crun.QPEdOk.mount: Deactivated successfully. 
Dec 5 03:54:00 localhost podman[96078]: 2025-12-05 08:54:00.190462693 +0000 UTC m=+0.076654119 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 03:54:00 localhost podman[96078]: 2025-12-05 08:54:00.243780189 +0000 UTC m=+0.129971655 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public)
Dec 5 03:54:00 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully.
Dec 5 03:54:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.
Dec 5 03:54:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 03:54:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
Dec 5 03:54:02 localhost systemd[1]: tmp-crun.c5025m.mount: Deactivated successfully.
Dec 5 03:54:02 localhost podman[96105]: 2025-12-05 08:54:02.199572178 +0000 UTC m=+0.084879521 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, tcib_managed=true)
Dec 5 03:54:02 localhost systemd[1]: tmp-crun.qsSthi.mount: Deactivated successfully.
Dec 5 03:54:02 localhost podman[96106]: 2025-12-05 08:54:02.270033397 +0000 UTC m=+0.150859863 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, vcs-type=git, version=17.1.12)
Dec 5 03:54:02 localhost podman[96106]: 2025-12-05 08:54:02.277845825 +0000 UTC m=+0.158672291 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, release=1761123044, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=)
Dec 5 03:54:02 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully.
Dec 5 03:54:02 localhost podman[96105]: 2025-12-05 08:54:02.331651046 +0000 UTC m=+0.216958349 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12)
Dec 5 03:54:02 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully.
Dec 5 03:54:02 localhost podman[96107]: 2025-12-05 08:54:02.423941951 +0000 UTC m=+0.301801177 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 5 03:54:02 localhost podman[96107]: 2025-12-05 08:54:02.478523796 +0000 UTC m=+0.356382982 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, container_name=ceilometer_agent_ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 5 03:54:02 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully.
Dec 5 03:54:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.
Dec 5 03:54:03 localhost podman[96176]: 2025-12-05 08:54:03.20315852 +0000 UTC m=+0.083093906 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 5 03:54:03 localhost podman[96176]: 2025-12-05 08:54:03.60706005 +0000 UTC m=+0.486995386 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Dec 5 03:54:03 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully.
Dec 5 03:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.
Dec 5 03:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.
Dec 5 03:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.
Dec 5 03:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.
Dec 5 03:54:08 localhost systemd[1]: tmp-crun.Li72b4.mount: Deactivated successfully.
Dec 5 03:54:08 localhost podman[96201]: 2025-12-05 08:54:08.21480698 +0000 UTC m=+0.096925858 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, container_name=iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4)
Dec 5 03:54:08 localhost systemd[1]: tmp-crun.JmZ2p6.mount: Deactivated successfully.
Dec 5 03:54:08 localhost podman[96200]: 2025-12-05 08:54:08.26232944 +0000 UTC m=+0.146716487 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z)
Dec 5 03:54:08 localhost podman[96201]: 2025-12-05 08:54:08.279201074 +0000 UTC m=+0.161319902 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-iscsid-container, container_name=iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044)
Dec 5 03:54:08 localhost podman[96200]: 2025-12-05 08:54:08.30757697 +0000 UTC m=+0.191964007 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=)
Dec 5 03:54:08 localhost podman[96200]: unhealthy
Dec 5 03:54:08 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:54:08 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'.
Dec 5 03:54:08 localhost podman[96205]: 2025-12-05 08:54:08.326668572 +0000 UTC m=+0.201462496 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true)
Dec 5 03:54:08 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully.
Dec 5 03:54:08 localhost podman[96205]: 2025-12-05 08:54:08.368139227 +0000 UTC m=+0.242933101 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, release=1761123044)
Dec 5 03:54:08 localhost podman[96205]: unhealthy
Dec 5 03:54:08 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:54:08 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'.
Dec 5 03:54:08 localhost podman[96202]: 2025-12-05 08:54:08.425049263 +0000 UTC m=+0.300763986 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible)
Dec 5 03:54:08 localhost podman[96202]: 2025-12-05 08:54:08.438735661 +0000 UTC m=+0.314450374 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=collectd, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, release=1761123044)
Dec 5 03:54:08 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully.
Dec 5 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:54:15 localhost podman[96410]: 2025-12-05 08:54:15.760308971 +0000 UTC m=+0.093277036 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 5 03:54:15 localhost podman[96410]: 2025-12-05 08:54:15.955570197 +0000 UTC m=+0.288538192 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 5 03:54:15 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:54:29 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 5 03:54:29 localhost recover_tripleo_nova_virtqemud[96438]: 61294
Dec 5 03:54:29 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 5 03:54:29 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 5 03:54:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.
Dec 5 03:54:31 localhost podman[96439]: 2025-12-05 08:54:31.193457719 +0000 UTC m=+0.075889276 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step5, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:54:31 localhost podman[96439]: 2025-12-05 08:54:31.228436006 +0000 UTC m=+0.110867573 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:54:31 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:54:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:54:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:54:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
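The 'healthcheck': {'test': '/openstack/healthcheck 5672'} field in nova_compute's config_data above is the command podman executes inside the container on each run; exit 0 marks it healthy, non-zero unhealthy. The script's contents are not shown in this log, but given the port argument (5672 is the usual AMQP port) it plausibly probes a local TCP endpoint. A hypothetical stand-in illustrating the contract, not the real script:

```python
import socket
import sys

def tcp_probe(port: int, host: str = "127.0.0.1", timeout: float = 5.0) -> int:
    """Return 0 if the port accepts a TCP connection, else 1."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 0  # reachable -> podman records 'healthy'
    except OSError:
        return 1      # unreachable -> podman records 'unhealthy'

if __name__ == "__main__":
    # Mirrors the call shape '/openstack/healthcheck 5672'.
    sys.exit(tcp_probe(int(sys.argv[1]) if len(sys.argv) > 1 else 5672))
```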
Dec 5 03:54:33 localhost podman[96467]: 2025-12-05 08:54:33.211107254 +0000 UTC m=+0.092022298 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:54:33 localhost podman[96467]: 2025-12-05 08:54:33.218609012 +0000 UTC m=+0.099524086 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=logrotate_crond, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc.) Dec 5 03:54:33 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:54:33 localhost podman[96468]: 2025-12-05 08:54:33.312321061 +0000 UTC m=+0.187152440 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, version=17.1.12) Dec 5 03:54:33 localhost podman[96468]: 2025-12-05 08:54:33.344589915 +0000 UTC m=+0.219421274 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 5 03:54:33 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
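The config_data=... label embedded in each of these podman events is a Python-literal dict, so the full container configuration can be recovered from a captured journal line mechanically. A sketch that balances braces and hands the slice to ast.literal_eval:

```python
import ast

def extract_config_data(log_line: str) -> dict:
    """Pull the config_data dict out of a podman event line like those above."""
    start = log_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(log_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(log_line[start:i + 1])
    raise ValueError("unbalanced config_data braces")

# e.g. extract_config_data(line)["volumes"] lists every bind mount the
# container was started with, and ["healthcheck"]["test"] the check command.
```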
Dec 5 03:54:33 localhost podman[96466]: 2025-12-05 08:54:33.382572434 +0000 UTC m=+0.265098128 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:54:33 localhost podman[96466]: 2025-12-05 08:54:33.413866148 +0000 UTC m=+0.296391932 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1) Dec 5 03:54:33 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:54:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:54:34 localhost systemd[1]: tmp-crun.kqOwSx.mount: Deactivated successfully. 
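The volume specs in these config_data dicts follow the podman 'src:dst[:options]' form, with comma-combined options such as 'ro' (read-only), 'z' (shared SELinux relabel) and 'shared' (mount propagation), e.g. '/run/libvirt:/run/libvirt:shared,z' in the ceilometer_agent_compute record above. A small sketch for splitting them when post-processing these logs:

```python
def parse_volume(spec: str):
    """Split 'src:dst[:opts]' into (src, dst, [opt, ...])."""
    parts = spec.split(":")
    src, dst = parts[0], parts[1]
    opts = parts[2].split(",") if len(parts) > 2 else []
    return src, dst, opts

assert parse_volume("/run/libvirt:/run/libvirt:shared,z") == (
    "/run/libvirt", "/run/libvirt", ["shared", "z"])
assert parse_volume("/dev/log:/dev/log") == ("/dev/log", "/dev/log", [])
```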
Dec 5 03:54:34 localhost podman[96538]: 2025-12-05 08:54:34.210180298 +0000 UTC m=+0.089050987 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute) Dec 5 03:54:34 localhost podman[96538]: 2025-12-05 08:54:34.608825599 +0000 UTC m=+0.487696288 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack 
Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, config_id=tripleo_step4) Dec 5 03:54:34 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:54:39 localhost systemd[1]: tmp-crun.n4qSku.mount: Deactivated successfully. 
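The transient units in this journal are named after the full 64-hex container ID (<ID>.service), which is awkward to read back. Assuming a host where podman is available, the ID can be mapped to the friendly container_name seen in the events (a sketch; the --filter/--format flags are standard podman CLI, but verify on your version):

```python
import subprocess

def container_name(container_id: str) -> str:
    """Resolve a full container ID to its name via the podman CLI."""
    out = subprocess.run(
        ["podman", "ps", "-a", "--filter", f"id={container_id}",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

# On this host, the events above imply 94e41b96... -> "nova_migration_target"
# and 34a5cf22... -> "nova_compute".
```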
Dec 5 03:54:39 localhost podman[96561]: 2025-12-05 08:54:39.221765855 +0000 UTC m=+0.110170261 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Dec 5 03:54:39 localhost podman[96562]: 2025-12-05 08:54:39.264934762 +0000 UTC m=+0.144821648 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:54:39 localhost podman[96562]: 2025-12-05 08:54:39.275770652 +0000 UTC m=+0.155657608 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vcs-type=git, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid) Dec 5 03:54:39 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:54:39 localhost podman[96564]: 2025-12-05 08:54:39.324296083 +0000 UTC m=+0.200483987 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 5 03:54:39 localhost podman[96564]: 2025-12-05 08:54:39.33370356 +0000 UTC m=+0.209891464 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, release=1761123044, io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Dec 5 03:54:39 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
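For orientation, the collectd config_data above shows most of the knobs these dicts carry (cap_add, memory, net/pid namespaces, user, environment, volumes). Purely as an illustration of how such a dict corresponds to podman flags — the real translation is done by tripleo-ansible, which does considerably more — a sketch:

```python
def config_to_argv(name: str, cfg: dict) -> list[str]:
    """Illustrative mapping of a TripleO-style config dict to 'podman run' args."""
    argv = ["podman", "run", "--detach", "--name", name]
    for cap in cfg.get("cap_add", []):
        argv += ["--cap-add", cap]
    for key in ("net", "pid", "ipc"):
        if key in cfg:
            argv += [f"--{'network' if key == 'net' else key}", cfg[key]]
    if cfg.get("privileged"):
        argv.append("--privileged")
    if "memory" in cfg:
        argv += ["--memory", cfg["memory"]]   # e.g. '512m' for collectd
    if "user" in cfg:
        argv += ["--user", cfg["user"]]
    for k, v in cfg.get("environment", {}).items():
        argv += ["--env", f"{k}={v}"]
    for vol in cfg.get("volumes", []):
        argv += ["--volume", vol]
    argv.append(cfg["image"])
    return argv
```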
Dec 5 03:54:39 localhost podman[96561]: 2025-12-05 08:54:39.389803311 +0000 UTC m=+0.278207797 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64) Dec 5 03:54:39 localhost podman[96561]: unhealthy Dec 5 03:54:39 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:54:39 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
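This is the failure pattern worth alerting on: the health_status event carries health_status=unhealthy, podman then echoes a bare "unhealthy" and exits 1, and systemd records the transient unit as failed (status=1/FAILURE). A sketch (assuming Python 3.9+) that pulls the unhealthy containers straight out of captured journal text:

```python
import re

EVENT = re.compile(
    r"container health_status [0-9a-f]{64} \(image=(?P<image>[^,]+), "
    r"name=(?P<name>[^,]+), health_status=(?P<status>\w+)")

def unhealthy_containers(journal_text: str) -> set[str]:
    """Names of containers whose health-check events report 'unhealthy'."""
    return {m["name"] for m in EVENT.finditer(journal_text)
            if m["status"] == "unhealthy"}

# For the record above this returns {'ovn_metadata_agent'}; it also picks up
# ovn_controller's identical failure a few records later.
```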
Dec 5 03:54:39 localhost podman[96569]: 2025-12-05 08:54:39.47598928 +0000 UTC m=+0.349623316 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public) Dec 5 03:54:39 localhost podman[96569]: 2025-12-05 08:54:39.515403392 +0000 UTC m=+0.389037458 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:54:39 localhost podman[96569]: unhealthy Dec 5 03:54:39 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:54:39 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:54:46 localhost podman[96637]: 2025-12-05 08:54:46.2077813 +0000 UTC m=+0.093026699 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, 
managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:54:46 localhost podman[96637]: 2025-12-05 08:54:46.426731638 +0000 UTC m=+0.311977067 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:54:46 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:55:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:55:02 localhost systemd[1]: tmp-crun.6Z8KtQ.mount: Deactivated successfully. 
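The event timestamps also give each container's effective check cadence: nova_compute fires at 08:54:31 and again at 08:55:02 UTC, i.e. roughly the 30-second default interval. A sketch computing per-container intervals from the podman event lines (assumes the timestamp format stays as shown here):

```python
import re
from collections import defaultdict
from datetime import datetime

EVENT = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+ \+0000 UTC m=\+\S+ "
    r"container health_status [0-9a-f]{64} \(image=[^,]+, name=(?P<name>[^,]+)")

def check_intervals(journal_text: str) -> dict:
    """Seconds between successive health_status events, per container name."""
    seen = defaultdict(list)
    for m in EVENT.finditer(journal_text):
        seen[m["name"]].append(datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S"))
    return {name: [int((b - a).total_seconds()) for a, b in zip(ts, ts[1:])]
            for name, ts in seen.items()}

# Over this excerpt, check_intervals(...)["nova_compute"] would include 31.
```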
Dec 5 03:55:02 localhost podman[96667]: 2025-12-05 08:55:02.198157212 +0000 UTC m=+0.084471368 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step5, version=17.1.12, maintainer=OpenStack TripleO Team) Dec 5 03:55:02 localhost podman[96667]: 2025-12-05 08:55:02.227149306 +0000 UTC m=+0.113463462 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., 
version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Dec 5 03:55:02 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:55:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:55:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:55:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
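
The "Started /usr/bin/podman healthcheck run <id>" records above name each transient systemd unit after the full 64-hex container ID. A minimal sketch, assuming only that the podman CLI is on PATH, of mapping those IDs back to the container names that the podman events report; the ID used below is ceilometer_agent_ipmi's, taken from the entries above:

    import subprocess

    def id_to_name() -> dict:
        # 'podman ps' with a Go template is stable CLI surface; --no-trunc
        # yields the full 64-hex IDs the transient units are named after.
        out = subprocess.run(
            ["podman", "ps", "-a", "--no-trunc", "--format", "{{.ID}} {{.Names}}"],
            check=True, capture_output=True, text=True,
        ).stdout
        return dict(line.split(maxsplit=1) for line in out.splitlines() if line)

    # Expected on this host: ceilometer_agent_ipmi
    print(id_to_name().get("bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f"))
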
Dec 5 03:55:04 localhost systemd[1]: tmp-crun.sMFaIA.mount: Deactivated successfully. Dec 5 03:55:04 localhost podman[96695]: 2025-12-05 08:55:04.193612009 +0000 UTC m=+0.075344589 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4) Dec 5 03:55:04 localhost podman[96694]: 2025-12-05 08:55:04.205310466 +0000 UTC m=+0.086510779 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, 
batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z) Dec 5 03:55:04 localhost podman[96694]: 2025-12-05 08:55:04.216746885 +0000 UTC m=+0.097947208 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, tcib_managed=true, build-date=2025-11-18T22:49:32Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc.) Dec 5 03:55:04 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:55:04 localhost podman[96693]: 2025-12-05 08:55:04.261870562 +0000 UTC m=+0.144534540 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:55:04 localhost podman[96695]: 2025-12-05 
08:55:04.271916838 +0000 UTC m=+0.153649428 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:55:04 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
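
A passing check follows the shape just shown: a health_status event, an exec_died for the check process, then the transient unit logging "Deactivated successfully". A minimal sketch, assuming the podman CLI and the podman 4.x JSON layout (State.Health; older releases spell it State.Healthcheck), of reading the same health state on demand:

    import json
    import subprocess

    def health_status(container: str) -> str:
        # 'podman inspect' prints a JSON array with one object per argument.
        raw = subprocess.run(
            ["podman", "inspect", container],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(raw)[0]["State"]
        # The field name moved between podman releases; try both spellings.
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(health_status("ceilometer_agent_ipmi"))  # expected here: healthy
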
Dec 5 03:55:04 localhost podman[96693]: 2025-12-05 08:55:04.292730863 +0000 UTC m=+0.175394861 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:55:04 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:55:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:55:05 localhost podman[96766]: 2025-12-05 08:55:05.211123326 +0000 UTC m=+0.096363490 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Dec 5 03:55:05 localhost podman[96766]: 2025-12-05 08:55:05.563654499 +0000 UTC m=+0.448894693 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, 
tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, maintainer=OpenStack TripleO Team) Dec 5 03:55:05 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:55:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:55:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:55:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:55:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:55:10 localhost systemd[1]: tmp-crun.INGc0E.mount: Deactivated successfully. 
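
Each config_data blob above records the probe wired into its container, e.g. 'healthcheck': {'test': '/openstack/healthcheck'} for metrics_qdr or '/openstack/healthcheck 5672' for nova_compute. A hedged sketch of recovering the command actually configured on a running container, assuming podman inspect exposes it under Config.Healthcheck.Test as on podman 4.x:

    import json
    import subprocess

    def healthcheck_cmd(container: str) -> list:
        raw = subprocess.run(
            ["podman", "inspect", container],
            check=True, capture_output=True, text=True,
        ).stdout
        cfg = json.loads(raw)[0].get("Config", {})
        # Typically ["CMD-SHELL", "<test command>"] for these containers.
        return (cfg.get("Healthcheck") or {}).get("Test", [])

    print(healthcheck_cmd("nova_compute"))  # e.g. ['CMD-SHELL', '/openstack/healthcheck 5672']
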
Dec 5 03:55:10 localhost podman[96790]: 2025-12-05 08:55:10.214832914 +0000 UTC m=+0.097277848 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, batch=17.1_20251118.1, container_name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container) Dec 5 03:55:10 localhost podman[96789]: 2025-12-05 08:55:10.255861785 +0000 UTC m=+0.141926019 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 03:55:10 localhost podman[96790]: 2025-12-05 08:55:10.26128076 +0000 UTC m=+0.143725654 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=iscsid) Dec 5 03:55:10 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:55:10 localhost podman[96789]: 2025-12-05 08:55:10.301670513 +0000 UTC m=+0.187734697 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 5 03:55:10 localhost podman[96789]: unhealthy
Dec 5 03:55:10 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:55:10 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'.
Dec 5 03:55:10 localhost podman[96792]: 2025-12-05 08:55:10.303359534 +0000 UTC m=+0.181153706 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 5 03:55:10 localhost podman[96791]: 2025-12-05 08:55:10.367324645 +0000 UTC m=+0.245611793 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy,
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, release=1761123044, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:55:10 localhost podman[96791]: 2025-12-05 08:55:10.374603968 +0000 UTC m=+0.252891156 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public) Dec 5 03:55:10 localhost podman[96792]: 2025-12-05 08:55:10.387868121 +0000 UTC m=+0.265662273 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, 
managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 5 03:55:10 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully.
Dec 5 03:55:10 localhost podman[96792]: unhealthy
Dec 5 03:55:10 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:55:10 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'.
Dec 5 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:55:17 localhost podman[96932]: 2025-12-05 08:55:17.208984606 +0000 UTC m=+0.092744620 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr,
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:55:17 localhost podman[96932]: 2025-12-05 08:55:17.40778882 +0000 UTC m=+0.291548804 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team) Dec 5 03:55:17 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:55:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:55:33 localhost systemd[1]: tmp-crun.hF3Sex.mount: Deactivated successfully. 
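
The 03:55:10 batch above shows the failure path: ovn_metadata_agent and ovn_controller report health_status=unhealthy, podman prints "unhealthy" and exits 1, and systemd marks each transient unit failed with result 'exit-code'. A minimal sketch for triaging a saved excerpt like this one, keying on the (image=..., name=..., health_status=...) field order used in these entries; the input path is hypothetical:

    import re
    import sys

    # One podman health_status event; the (image, name, health_status)
    # field order matches the entries in this log.
    EVENT = re.compile(
        r"container health_status \S+ "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), health_status=(?P<status>\w+)"
    )

    def unhealthy(lines):
        for line in lines:
            # A wrapped line in this dump can hold several records; scan them all.
            for m in EVENT.finditer(line):
                if m.group("status") != "healthy":
                    yield m.group("name"), m.group("status")

    if __name__ == "__main__":
        with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
            for name, status in unhealthy(fh):
                print(f"{name}: {status}")

Run against the excerpt above, this reports ovn_metadata_agent and ovn_controller as unhealthy and nothing else.
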
Dec 5 03:55:33 localhost podman[96976]: 2025-12-05 08:55:33.419841296 +0000 UTC m=+0.308261863 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, version=17.1.12, vcs-type=git, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4) Dec 5 03:55:33 localhost podman[96976]: 2025-12-05 08:55:33.454758051 +0000 UTC m=+0.343178618 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO 
Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-19T00:36:58Z, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Dec 5 03:55:33 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:55:33 localhost sshd[97002]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:55:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:55:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. 
Dec 5 03:55:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:55:35 localhost podman[97005]: 2025-12-05 08:55:35.218379477 +0000 UTC m=+0.089528061 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc.) 
Dec 5 03:55:35 localhost podman[97005]: 2025-12-05 08:55:35.232662883 +0000 UTC m=+0.103811427 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, name=rhosp17/openstack-cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 03:55:35 localhost podman[97006]: 2025-12-05 08:55:35.27519092 +0000 UTC m=+0.143843598 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Dec 5 03:55:35 localhost podman[97004]: 2025-12-05 08:55:35.324620798 +0000 UTC m=+0.200631551 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4) Dec 5 03:55:35 localhost podman[97006]: 2025-12-05 08:55:35.332671004 +0000 UTC m=+0.201323652 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi) Dec 5 03:55:35 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 03:55:35 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:55:35 localhost podman[97004]: 2025-12-05 08:55:35.359845483 +0000 UTC m=+0.235856306 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:11:48Z, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc.) Dec 5 03:55:35 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:55:35 localhost sshd[97080]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:55:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:55:36 localhost podman[97082]: 2025-12-05 08:55:36.183629591 +0000 UTC m=+0.072580155 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, container_name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Dec 5 03:55:36 localhost podman[97082]: 2025-12-05 08:55:36.596185685 +0000 UTC m=+0.485136269 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:55:36 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:55:38 localhost sshd[97104]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:55:39 localhost sshd[97106]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:55:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:55:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:55:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:55:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:55:41 localhost podman[97109]: 2025-12-05 08:55:41.22412013 +0000 UTC m=+0.097172675 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:55:41 localhost podman[97109]: 2025-12-05 08:55:41.259573832 +0000 UTC m=+0.132626287 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:55:41 localhost systemd[1]: tmp-crun.jggFz5.mount: Deactivated successfully. Dec 5 03:55:41 localhost podman[97108]: 2025-12-05 08:55:41.276966513 +0000 UTC m=+0.151709179 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:55:41 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:55:41 localhost podman[97110]: 2025-12-05 08:55:41.32769384 +0000 UTC m=+0.196197536 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:55:41 localhost podman[97110]: 2025-12-05 08:55:41.363982827 +0000 UTC m=+0.232486493 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=collectd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:55:41 localhost systemd[1]: 
33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:55:41 localhost podman[97111]: 2025-12-05 08:55:41.379919812 +0000 UTC m=+0.247931463 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, version=17.1.12) Dec 5 03:55:41 localhost podman[97111]: 2025-12-05 08:55:41.393701173 +0000 UTC m=+0.261712864 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack 
Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, release=1761123044, build-date=2025-11-18T23:34:05Z, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:55:41 localhost podman[97111]: unhealthy Dec 5 03:55:41 localhost podman[97108]: 2025-12-05 08:55:41.40048805 +0000 UTC m=+0.275230706 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12) Dec 5 03:55:41 localhost podman[97108]: unhealthy Dec 5 03:55:41 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:55:41 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 03:55:41 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:55:41 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 03:55:42 localhost sshd[97187]: main: sshd: ssh-rsa algorithm is disabled Dec 5 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:55:48 localhost systemd[1]: tmp-crun.VbKjCc.mount: Deactivated successfully. Dec 5 03:55:48 localhost podman[97189]: 2025-12-05 08:55:48.211507297 +0000 UTC m=+0.100228929 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, release=1761123044, managed_by=tripleo_ansible, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, batch=17.1_20251118.1) Dec 5 03:55:48 localhost podman[97189]: 2025-12-05 08:55:48.40731633 +0000 UTC m=+0.296037932 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:55:48 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:55:59 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 03:55:59 localhost recover_tripleo_nova_virtqemud[97219]: 61294 Dec 5 03:55:59 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 03:55:59 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 5 03:56:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:56:04 localhost systemd[1]: tmp-crun.ivAigi.mount: Deactivated successfully. Dec 5 03:56:04 localhost podman[97220]: 2025-12-05 08:56:04.214685843 +0000 UTC m=+0.100135326 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:56:04 localhost podman[97220]: 2025-12-05 
08:56:04.247738401 +0000 UTC m=+0.133187884 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, container_name=nova_compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team) Dec 5 03:56:04 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:56:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. 
Dec 5 03:56:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:56:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:56:06 localhost systemd[1]: tmp-crun.8X2D1O.mount: Deactivated successfully. Dec 5 03:56:06 localhost podman[97250]: 2025-12-05 08:56:06.226307174 +0000 UTC m=+0.102833488 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:56:06 localhost podman[97248]: 2025-12-05 08:56:06.26517526 +0000 UTC m=+0.148495851 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:56:06 localhost podman[97250]: 2025-12-05 08:56:06.289750409 +0000 UTC m=+0.166276743 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4) Dec 5 03:56:06 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:56:06 localhost podman[97248]: 2025-12-05 08:56:06.317929398 +0000 UTC m=+0.201249999 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, container_name=ceilometer_agent_compute) Dec 5 03:56:06 localhost podman[97249]: 2025-12-05 08:56:06.31732108 +0000 UTC m=+0.197699031 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=logrotate_crond, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, release=1761123044, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:56:06 localhost podman[97249]: 2025-12-05 08:56:06.328615344 +0000 UTC m=+0.208993225 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 
'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.expose-services=, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64) Dec 5 03:56:06 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:56:06 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:56:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:56:07 localhost podman[97321]: 2025-12-05 08:56:07.193091854 +0000 UTC m=+0.084672383 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64) Dec 5 03:56:07 localhost systemd[1]: tmp-crun.FbcTY1.mount: Deactivated successfully. 
Dec 5 03:56:07 localhost podman[97321]: 2025-12-05 08:56:07.601880714 +0000 UTC m=+0.493461233 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:56:07 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:56:12 localhost podman[97348]: 2025-12-05 08:56:12.212773099 +0000 UTC m=+0.087152970 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:56:12 localhost podman[97348]: 2025-12-05 08:56:12.258165563 +0000 UTC m=+0.132545424 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:56:12 localhost podman[97348]: unhealthy Dec 5 03:56:12 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:56:12 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 03:56:12 localhost systemd[1]: tmp-crun.KX1QHR.mount: Deactivated successfully. Dec 5 03:56:12 localhost podman[97346]: 2025-12-05 08:56:12.260532885 +0000 UTC m=+0.139702322 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 5 03:56:12 localhost podman[97347]: 2025-12-05 08:56:12.319445953 +0000 UTC m=+0.198012192 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, name=rhosp17/openstack-collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, container_name=collectd, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:56:12 localhost podman[97347]: 2025-12-05 08:56:12.329163168 +0000 UTC m=+0.207729427 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat 
OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, container_name=collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:56:12 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:56:12 localhost podman[97346]: 2025-12-05 08:56:12.349624983 +0000 UTC m=+0.228794380 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:56:12 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:56:12 localhost podman[97345]: 2025-12-05 08:56:12.401458944 +0000 UTC m=+0.285868751 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, build-date=2025-11-19T00:14:25Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:56:12 localhost podman[97345]: 2025-12-05 08:56:12.415367568 +0000 UTC m=+0.299777365 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, maintainer=OpenStack 
TripleO Team, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 5 03:56:12 localhost podman[97345]: unhealthy Dec 5 03:56:12 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:56:12 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
Dec 5 03:56:18 localhost podman[97517]: 2025-12-05 08:56:18.485486235 +0000 UTC m=+0.103045395 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, ceph=True, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, release=1763362218, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.) Dec 5 03:56:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:56:18 localhost systemd[1]: tmp-crun.nlLrks.mount: Deactivated successfully. Dec 5 03:56:18 localhost podman[97537]: 2025-12-05 08:56:18.616210742 +0000 UTC m=+0.111822902 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:56:18 localhost podman[97517]: 2025-12-05 08:56:18.63614184 +0000 UTC m=+0.253700970 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, vcs-type=git, release=1763362218, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, version=7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 03:56:18 localhost podman[97537]: 2025-12-05 08:56:18.819781021 +0000 UTC m=+0.315393161 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, tcib_managed=true, release=1761123044, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12) Dec 5 03:56:18 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:56:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:56:35 localhost systemd[1]: tmp-crun.0Tudak.mount: Deactivated successfully. Dec 5 03:56:35 localhost podman[97695]: 2025-12-05 08:56:35.214985603 +0000 UTC m=+0.096414993 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_compute, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:56:35 localhost podman[97695]: 2025-12-05 08:56:35.252792105 +0000 UTC m=+0.134221555 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, 
description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git) Dec 5 03:56:35 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:56:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:56:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:56:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:56:37 localhost systemd[1]: tmp-crun.5N6fS7.mount: Deactivated successfully. Dec 5 03:56:37 localhost podman[97723]: 2025-12-05 08:56:37.212946616 +0000 UTC m=+0.097321949 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:56:37 localhost podman[97723]: 2025-12-05 08:56:37.246016535 +0000 UTC m=+0.130391848 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.12) Dec 5 03:56:37 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
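Note: each podman event above has the same anatomy: a syslog prefix, the event timestamp podman embeds, the event type (health_status, exec_died), the 64-hex container ID, and a parenthesized dump of image labels plus TripleO metadata (container_name, config_id, config_data, ...). The dump is comma-separated, but values themselves contain commas ("vendor=Red Hat, Inc.", the config_data dict), so a naive split mangles it. A minimal parsing sketch in Python; the function name and the lookahead heuristic are illustrative, not part of any podman tooling:

    import re

    # Split only at ", key=" boundaries so commas inside values
    # ("vendor=Red Hat, Inc.", the config_data dict) survive.
    _FIELD_SEP = re.compile(r', (?=[A-Za-z_][A-Za-z0-9_.\-]*=)')

    def parse_event_labels(line: str) -> dict:
        """Return the key=value annotations of one podman event line."""
        body = line[line.index('(') + 1:line.rindex(')')]
        labels = {}
        for field in _FIELD_SEP.split(body):
            key, _, value = field.partition('=')
            # Repeated keys (each line carries two name= entries, the
            # container name and the image name) keep the last value;
            # the container name also lives in container_name.
            labels[key] = value
        return labels

Applied to the logrotate_crond health_status entry above, this yields labels['container_name'] == 'logrotate_crond' and labels['health_status'] == 'healthy'.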
Dec 5 03:56:37 localhost podman[97724]: 2025-12-05 08:56:37.26323421 +0000 UTC m=+0.143226989 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:56:37 localhost podman[97722]: 2025-12-05 08:56:37.313454792 +0000 UTC m=+0.197912278 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public) Dec 5 03:56:37 localhost podman[97724]: 2025-12-05 08:56:37.3415828 +0000 UTC m=+0.221575589 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:56:37 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:56:37 localhost podman[97722]: 2025-12-05 08:56:37.371785812 +0000 UTC m=+0.256243318 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com) Dec 5 03:56:37 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:56:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:56:38 localhost podman[97794]: 2025-12-05 08:56:38.195395924 +0000 UTC m=+0.080663952 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true) Dec 5 03:56:38 localhost podman[97794]: 2025-12-05 08:56:38.507811283 +0000 UTC m=+0.393079231 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, io.buildah.version=1.41.4, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container) Dec 5 03:56:38 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:56:43 localhost systemd[1]: tmp-crun.KT5dOO.mount: Deactivated successfully. 
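Note: every "Started /usr/bin/podman healthcheck run <id>" line is systemd launching a transient unit that executes the container's configured probe (the 'healthcheck' test in config_data, e.g. /openstack/healthcheck). podman exits 0 when the probe passes and non-zero otherwise, printing "unhealthy" on failure, and systemd records the unit as deactivated or failed accordingly. A sketch of the same probe driven from Python (the wrapper function is illustrative; `podman healthcheck run` is the actual CLI):

    import subprocess

    def probe(container_id: str) -> bool:
        """Run a container's configured healthcheck, as the transient
        systemd units in this log do, and report pass/fail."""
        proc = subprocess.run(
            ['podman', 'healthcheck', 'run', container_id],
            capture_output=True, text=True,
        )
        # Exit code 0 -> healthy. Non-zero -> unhealthy, which systemd
        # surfaces as "Main process exited, code=exited, status=1/FAILURE",
        # as in the ovn_controller records below.
        return proc.returncode == 0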
Dec 5 03:56:43 localhost podman[97820]: 2025-12-05 08:56:43.212982556 +0000 UTC m=+0.088501471 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:56:43 localhost podman[97820]: 2025-12-05 08:56:43.226357543 +0000 UTC m=+0.101876428 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, distribution-scope=public, container_name=ovn_controller, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-type=git) Dec 5 03:56:43 localhost podman[97820]: unhealthy Dec 5 03:56:43 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:56:43 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 03:56:43 localhost podman[97819]: 2025-12-05 08:56:43.274071629 +0000 UTC m=+0.150634446 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, release=1761123044, distribution-scope=public, container_name=collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 5 03:56:43 localhost podman[97819]: 2025-12-05 08:56:43.353419869 +0000 UTC m=+0.229982716 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step3, architecture=x86_64, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:56:43 localhost podman[97817]: 2025-12-05 08:56:43.353680957 +0000 UTC m=+0.237870077 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
health_status=unhealthy, config_id=tripleo_step4, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 5 03:56:43 localhost podman[97817]: 2025-12-05 08:56:43.372622925 +0000 UTC m=+0.256812045 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4) Dec 5 03:56:43 localhost podman[97817]: unhealthy Dec 5 03:56:43 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:56:43 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 03:56:43 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
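Note: the two failures just above are easy to miss at this line width: ovn_controller and ovn_metadata_agent both reported health_status=unhealthy and their healthcheck units exited with status=1/FAILURE, while collectd stayed healthy. A small scan that keeps the latest status per container, reusing parse_event_labels from the first sketch (the log path is a placeholder):

    def latest_health(log_path: str) -> dict:
        """Map container_name -> most recently reported health_status."""
        status = {}
        with open(log_path, encoding='utf-8', errors='replace') as fh:
            for line in fh:
                if ' container health_status ' not in line:
                    continue
                labels = parse_event_labels(line)
                status[labels.get('container_name', '?')] = \
                    labels.get('health_status', '?')
        return status

For the window above this maps ovn_controller and ovn_metadata_agent to 'unhealthy' and collectd, iscsid, nova_compute, and the rest to 'healthy'.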
Dec 5 03:56:43 localhost podman[97818]: 2025-12-05 08:56:43.457845624 +0000 UTC m=+0.337130954 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 5 03:56:43 localhost podman[97818]: 2025-12-05 08:56:43.465144937 +0000 UTC m=+0.344430237 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 5 03:56:43 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:56:44 localhost systemd[1]: tmp-crun.NQyfka.mount: Deactivated successfully. Dec 5 03:56:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:56:49 localhost systemd[1]: tmp-crun.B4uikg.mount: Deactivated successfully. 
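Note: the config_data annotation is a Python-style dict literal (quoted strings, True/False, nested lists), so details such as the iscsid bind mounts or a container's healthcheck command can be recovered structurally rather than by eye. A sketch assuming balanced braces, as holds for these lines; iscsid_line is a hypothetical variable holding one such record:

    import ast

    def extract_config_data(line: str) -> dict:
        """Pull the config_data={...} dict out of a podman event line."""
        start = line.index('config_data=') + len('config_data=')
        depth = 0
        for end, ch in enumerate(line[start:], start):
            depth += ch == '{'   # track nesting to find the matching brace
            depth -= ch == '}'
            if depth == 0:
                return ast.literal_eval(line[start:end + 1])
        raise ValueError('unbalanced config_data braces')

    # extract_config_data(iscsid_line)['healthcheck']['test']
    # -> '/openstack/healthcheck'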
Dec 5 03:56:49 localhost podman[97895]: 2025-12-05 08:56:49.201862494 +0000 UTC m=+0.089720648 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) 
Dec 5 03:56:49 localhost podman[97895]: 2025-12-05 08:56:49.403576767 +0000 UTC m=+0.291434861 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:56:49 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:57:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
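Note: the timestamps also expose the probe cadence: metrics_qdr ran at 03:56:18 and again at 03:56:49, nova_compute at 03:56:35 and (below) at 03:57:06, consistent with podman's default 30-second healthcheck interval. A self-contained sketch that measures the gaps per container; second-level precision is enough here, and both regexes are heuristics tuned to these specific lines:

    import re
    from datetime import datetime

    _TS = re.compile(r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}')  # podman's embedded UTC time
    _NAME = re.compile(r'container_name=([\w-]+)')

    def check_intervals(log_path: str) -> dict:
        """Map container_name -> seconds between successive health checks."""
        last, gaps = {}, {}
        with open(log_path, encoding='utf-8', errors='replace') as fh:
            for line in fh:
                if ' container health_status ' not in line:
                    continue
                ts, name = _TS.search(line), _NAME.search(line)
                if not (ts and name):
                    continue
                when = datetime.strptime(ts.group(), '%Y-%m-%d %H:%M:%S')
                key = name.group(1)
                if key in last:
                    gaps.setdefault(key, []).append(
                        (when - last[key]).total_seconds())
                last[key] = when
        return gaps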
Dec 5 03:57:06 localhost podman[97925]: 2025-12-05 08:57:06.180521094 +0000 UTC m=+0.064576431 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 5 03:57:06 localhost podman[97925]: 2025-12-05 08:57:06.208677432 +0000 UTC m=+0.092732750 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, io.openshift.expose-services=, container_name=nova_compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1761123044, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team)
Dec 5 03:57:06 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully.
Dec 5 03:57:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.
Dec 5 03:57:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 03:57:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
Dec 5 03:57:08 localhost systemd[1]: tmp-crun.gPaCV0.mount: Deactivated successfully.
Dec 5 03:57:08 localhost podman[97951]: 2025-12-05 08:57:08.202224652 +0000 UTC m=+0.093176793 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 5 03:57:08 localhost podman[97952]: 2025-12-05 08:57:08.256457925 +0000 UTC m=+0.144238150 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Dec 5 03:57:08 localhost podman[97952]: 2025-12-05 08:57:08.268510134 +0000 UTC m=+0.156290269 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 03:57:08 localhost podman[97953]: 2025-12-05 08:57:08.224438109 +0000 UTC m=+0.107664575 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 5 03:57:08 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully.
Dec 5 03:57:08 localhost podman[97953]: 2025-12-05 08:57:08.309936337 +0000 UTC m=+0.193162813 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, release=1761123044, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 5 03:57:08 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully.
Dec 5 03:57:08 localhost podman[97951]: 2025-12-05 08:57:08.327453832 +0000 UTC m=+0.218406023 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public)
Dec 5 03:57:08 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully.
Dec 5 03:57:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.
Dec 5 03:57:09 localhost systemd[1]: tmp-crun.ms882u.mount: Deactivated successfully.
Dec 5 03:57:09 localhost systemd[1]: tmp-crun.2Bqw3E.mount: Deactivated successfully.
Dec 5 03:57:09 localhost podman[98022]: 2025-12-05 08:57:09.203695379 +0000 UTC m=+0.088640675 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc.) 
Dec 5 03:57:09 localhost podman[98022]: 2025-12-05 08:57:09.595850361 +0000 UTC m=+0.480795727 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target)
Dec 5 03:57:09 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully.
Dec 5 03:57:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.
Dec 5 03:57:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.
Dec 5 03:57:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.
Dec 5 03:57:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.
Dec 5 03:57:14 localhost podman[98049]: 2025-12-05 08:57:14.205876841 +0000 UTC m=+0.084257401 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 5 03:57:14 localhost podman[98049]: 2025-12-05 08:57:14.220396384 +0000 UTC m=+0.098776954 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.expose-services=, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team)
Dec 5 03:57:14 localhost podman[98049]: unhealthy
Dec 5 03:57:14 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:57:14 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'.
Dec 5 03:57:14 localhost systemd[1]: tmp-crun.bWJ0qL.mount: Deactivated successfully.
Dec 5 03:57:14 localhost podman[98047]: 2025-12-05 08:57:14.278160836 +0000 UTC m=+0.157154205 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, distribution-scope=public)
Dec 5 03:57:14 localhost podman[98046]: 2025-12-05 08:57:14.307003635 +0000 UTC m=+0.189614424 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 5 03:57:14 localhost podman[98046]: 2025-12-05 08:57:14.346012985 +0000 UTC m=+0.228623814 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true)
Dec 5 03:57:14 localhost podman[98046]: unhealthy
Dec 5 03:57:14 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:57:14 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'.
Dec 5 03:57:14 localhost podman[98047]: 2025-12-05 08:57:14.364815859 +0000 UTC m=+0.243809258 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, config_id=tripleo_step3, vendor=Red Hat, Inc., container_name=iscsid)
Dec 5 03:57:14 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully.
Dec 5 03:57:14 localhost podman[98048]: 2025-12-05 08:57:14.422940092 +0000 UTC m=+0.300828647 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible)
Dec 5 03:57:14 localhost podman[98048]: 2025-12-05 08:57:14.435538286 +0000 UTC m=+0.313426901 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1)
Dec 5 03:57:14 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully.
Dec 5 03:57:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 5 03:57:15 localhost recover_tripleo_nova_virtqemud[98125]: 61294
Dec 5 03:57:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 5 03:57:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 5 03:57:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 03:57:20 localhost podman[98127]: 2025-12-05 08:57:20.209840821 +0000 UTC m=+0.087305644 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, config_id=tripleo_step1, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 5 03:57:20 localhost podman[98127]: 2025-12-05 08:57:20.487612693 +0000 UTC m=+0.365077426 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, architecture=x86_64, io.openshift.expose-services=)
Dec 5 03:57:20 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 03:57:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.
Dec 5 03:57:37 localhost podman[98233]: 2025-12-05 08:57:37.202773004 +0000 UTC m=+0.084780596 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 5 03:57:37 localhost podman[98233]: 2025-12-05 08:57:37.231614114 +0000 UTC m=+0.113621636 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step5, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 5 03:57:37 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully.
Dec 5 03:57:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.
Dec 5 03:57:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 03:57:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
Dec 5 03:57:39 localhost podman[98261]: 2025-12-05 08:57:39.20933026 +0000 UTC m=+0.090176081 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 5 03:57:39 localhost podman[98261]: 2025-12-05 08:57:39.247655209 +0000 UTC m=+0.128501040 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 5 03:57:39 localhost podman[98262]: 2025-12-05 08:57:39.262831103 +0000 UTC m=+0.139324431 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, architecture=x86_64, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, tcib_managed=true) Dec 5 03:57:39 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:57:39 localhost podman[98260]: 2025-12-05 08:57:39.32209361 +0000 UTC m=+0.205529260 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute) Dec 5 03:57:39 localhost podman[98262]: 2025-12-05 08:57:39.374075725 +0000 UTC m=+0.250569094 container exec_died 
bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git) Dec 5 03:57:39 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 03:57:39 localhost podman[98260]: 2025-12-05 08:57:39.432280241 +0000 UTC m=+0.315715831 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible) Dec 5 03:57:39 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:57:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:57:40 localhost podman[98333]: 2025-12-05 08:57:40.196667477 +0000 UTC m=+0.079880308 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-type=git, build-date=2025-11-19T00:36:58Z, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4) Dec 5 03:57:40 localhost podman[98333]: 2025-12-05 08:57:40.572772909 +0000 UTC m=+0.455985680 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, version=17.1.12, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true) Dec 5 03:57:40 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:57:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:57:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:57:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:57:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:57:45 localhost podman[98357]: 2025-12-05 08:57:45.223265073 +0000 UTC m=+0.097175254 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z) Dec 5 03:57:45 localhost podman[98357]: 2025-12-05 08:57:45.237508017 +0000 UTC m=+0.111418148 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, distribution-scope=public) Dec 5 03:57:45 localhost podman[98357]: unhealthy Dec 5 03:57:45 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:57:45 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 03:57:45 localhost systemd[1]: tmp-crun.nwpGAK.mount: Deactivated successfully. 
Dec 5 03:57:45 localhost podman[98359]: 2025-12-05 08:57:45.323907573 +0000 UTC m=+0.189873763 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team) Dec 5 03:57:45 localhost podman[98365]: 2025-12-05 08:57:45.384408219 +0000 UTC m=+0.247030156 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, vcs-type=git, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 03:57:45 localhost podman[98359]: 2025-12-05 08:57:45.408170183 +0000 UTC m=+0.274136343 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-collectd, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z) Dec 5 03:57:45 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:57:45 localhost podman[98365]: 2025-12-05 08:57:45.420637234 +0000 UTC m=+0.283259161 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team) Dec 5 03:57:45 localhost podman[98365]: unhealthy Dec 5 03:57:45 localhost podman[98358]: 2025-12-05 08:57:45.427866404 +0000 UTC m=+0.299807486 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-iscsid, tcib_managed=true, 
konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=iscsid, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:57:45 localhost podman[98358]: 2025-12-05 08:57:45.434080163 +0000 UTC m=+0.306021265 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20251118.1, container_name=iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4) Dec 5 03:57:45 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:57:45 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 03:57:45 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:57:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:57:51 localhost podman[98435]: 2025-12-05 08:57:51.203969263 +0000 UTC m=+0.089946084 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, config_id=tripleo_step1) Dec 5 03:57:51 localhost podman[98435]: 2025-12-05 08:57:51.403553921 +0000 UTC m=+0.289530692 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step1, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:57:51 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
Dec 5 03:58:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:58:08 localhost podman[98465]: 2025-12-05 08:58:08.203438638 +0000 UTC m=+0.085578020 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:58:08 localhost podman[98465]: 2025-12-05 08:58:08.26479937 +0000 UTC m=+0.146938692 container exec_died 
34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=) Dec 5 03:58:08 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:58:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:58:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. 
Dec 5 03:58:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:58:10 localhost systemd[1]: tmp-crun.TAm01m.mount: Deactivated successfully. Dec 5 03:58:10 localhost podman[98491]: 2025-12-05 08:58:10.211225582 +0000 UTC m=+0.094580426 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4) Dec 5 03:58:10 localhost systemd[1]: tmp-crun.WZzFCF.mount: Deactivated successfully. 
Dec 5 03:58:10 localhost podman[98491]: 2025-12-05 08:58:10.259455703 +0000 UTC m=+0.142810577 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, distribution-scope=public, version=17.1.12, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, release=1761123044, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 5 03:58:10 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:58:10 localhost podman[98492]: 2025-12-05 08:58:10.260112384 +0000 UTC m=+0.138332721 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-cron-container, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-cron) Dec 5 03:58:10 localhost podman[98493]: 2025-12-05 08:58:10.320084303 +0000 UTC m=+0.192293087 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:58:10 localhost podman[98492]: 2025-12-05 08:58:10.343230279 +0000 UTC m=+0.221450616 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, name=rhosp17/openstack-cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container) Dec 5 03:58:10 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:58:10 localhost podman[98493]: 2025-12-05 08:58:10.402317171 +0000 UTC m=+0.274525945 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:58:10 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
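The config_data label embedded in every event above is the TripleO-rendered container definition, serialized as a Python dict literal (single-quoted strings, True/False) rather than JSON, so ast.literal_eval recovers it safely. A sketch, assuming the label string has already been captured as in the previous snippet (the brace-matching helper is illustrative):

import ast

def extract_config_data(labels: str) -> dict:
    """Pull the config_data={...} payload out of a podman event label string.

    The value is a Python dict literal (note the single quotes and True),
    so ast.literal_eval parses it without executing anything.
    """
    start = labels.index("config_data=") + len("config_data=")
    depth, end = 0, start
    for i, ch in enumerate(labels[start:], start):  # match braces to find the end
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                end = i + 1
                break
    return ast.literal_eval(labels[start:end])

# For the iscsid event above this yields, among other keys:
#   cfg['healthcheck']['test'] == '/openstack/healthcheck'
#   cfg['privileged'] is True, and cfg['volumes'] lists its bind mounts.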
Dec 5 03:58:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:58:11 localhost podman[98565]: 2025-12-05 08:58:11.206416209 +0000 UTC m=+0.086068886 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12) Dec 5 03:58:11 localhost podman[98565]: 2025-12-05 08:58:11.613796666 +0000 UTC m=+0.493449363 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_migration_target, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-type=git) Dec 5 03:58:11 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:58:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:58:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:58:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:58:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:58:16 localhost podman[98591]: 2025-12-05 08:58:16.231100398 +0000 UTC m=+0.102526008 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, vcs-type=git) Dec 5 03:58:16 localhost podman[98588]: 2025-12-05 08:58:16.202054833 +0000 UTC m=+0.084566842 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12) Dec 5 03:58:16 localhost podman[98589]: 2025-12-05 08:58:16.270708676 +0000 UTC m=+0.148558202 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:58:16 localhost podman[98589]: 2025-12-05 08:58:16.28264478 +0000 UTC m=+0.160494316 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid) Dec 5 03:58:16 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:58:16 localhost podman[98591]: 2025-12-05 08:58:16.30098708 +0000 UTC m=+0.172412720 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, distribution-scope=public, container_name=ovn_controller) Dec 5 03:58:16 localhost podman[98591]: unhealthy Dec 5 03:58:16 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:58:16 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. 
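The ovn_controller sequence just above shows the failure contract: its configured check ('/openstack/healthcheck 6642' per config_data) reported unhealthy, podman printed "unhealthy" and exited non-zero, and systemd recorded the transient unit as failed with result 'exit-code'. The exit status of podman healthcheck run is the whole signal, so a re-check wrapper can lean on it directly; a minimal sketch (the container name, retry count, and delay are arbitrary choices, not from the log):

import subprocess
import time

def recheck(container: str, attempts: int = 3, delay: float = 5.0) -> bool:
    """Re-run a container's configured healthcheck a few times.

    'podman healthcheck run' exits 0 when the check passes and non-zero
    when it fails -- exactly what systemd reacted to in the log above.
    """
    for i in range(attempts):
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return True
        print(f"attempt {i + 1}: {container} unhealthy: {result.stdout.strip()}")
        time.sleep(delay)
    return False

# recheck("ovn_controller") -- helps distinguish a transient blip from a
# persistently failing check before anyone restarts the container.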
Dec 5 03:58:16 localhost podman[98588]: 2025-12-05 08:58:16.333350387 +0000 UTC m=+0.215862396 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:58:16 localhost podman[98588]: unhealthy Dec 5 03:58:16 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:58:16 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
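Worth flagging: both OVN-side checks (ovn_metadata_agent at 08:58:16.202 and ovn_controller at 08:58:16.231) went unhealthy in the same sweep in which iscsid and collectd passed, which suggests a shared OVN/Open vSwitch dependency rather than a node-wide fault; a later pass through the log would need to confirm that. A sketch that groups unhealthy events falling within a small window to surface such correlated failures (the two-second window is an arbitrary choice):

from datetime import datetime

def cofailures(events, window=2.0):
    """Group 'unhealthy' events whose timestamps lie within `window` seconds
    of one another, to spot correlated failures like the two OVN checks above."""
    fails = []
    for ts, _cid, name, status in events:
        if status != "unhealthy":
            continue
        head, frac = ts.split(".")
        stamp = datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")
        fails.append((stamp, name))
    fails.sort()
    groups, current = [], []
    for stamp, name in fails:
        if current and (stamp - current[-1][0]).total_seconds() > window:
            groups.append([n for _, n in current])
            current = []
        current.append((stamp, name))
    if current:
        groups.append([n for _, n in current])
    return groups

# For this extract: [['ovn_metadata_agent', 'ovn_controller']]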
Dec 5 03:58:16 localhost podman[98590]: 2025-12-05 08:58:16.421354072 +0000 UTC m=+0.295737173 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 5 03:58:16 localhost podman[98590]: 2025-12-05 08:58:16.434587466 +0000 UTC m=+0.308970597 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 
17.1 collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, vcs-type=git, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-collectd) Dec 5 03:58:16 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:58:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 03:58:22 localhost systemd[1]: tmp-crun.I2rTc0.mount: Deactivated successfully. 
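Timing note: each container's check recurs on its own timer; in this extract nova_compute fires at 03:58:08 and again at 03:58:39, and the ceilometer/cron/ipmi batch at 03:58:10 and then 03:58:41, i.e. roughly every 31 s. Given events parsed as in the earlier sketch, the per-container cadence falls out directly (the fractional-second trim is needed because datetime.strptime accepts at most six digits):

from collections import defaultdict
from datetime import datetime

def check_intervals(events):
    """Group parsed health events by container and report gaps between checks."""
    seen = defaultdict(list)
    for ts, _cid, name, _status in events:
        # podman event timestamps look like '2025-12-05 08:58:08.203438638';
        # trim to microseconds for datetime.strptime.
        head, frac = ts.split(".")
        stamp = datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f")
        seen[name].append(stamp)
    return {
        name: [round((b - a).total_seconds(), 1)
               for a, b in zip(stamps, stamps[1:])]
        for name, stamps in seen.items()
    }

# For this extract: {'nova_compute': [31.0], 'logrotate_crond': [30.9], ...}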
Dec 5 03:58:22 localhost podman[98680]: 2025-12-05 08:58:22.125332531 +0000 UTC m=+0.103764696 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 5 03:58:22 localhost podman[98680]: 2025-12-05 08:58:22.323709722 +0000 UTC m=+0.302141897 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:49:46Z, architecture=x86_64, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr) Dec 5 03:58:22 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:58:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:58:39 localhost podman[98772]: 2025-12-05 08:58:39.206787506 +0000 UTC m=+0.084850469 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1) Dec 5 03:58:39 localhost podman[98772]: 2025-12-05 08:58:39.240557667 +0000 UTC m=+0.118620660 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1761123044, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:58:39 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:58:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:58:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:58:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
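systemd names each healthcheck unit after the 64-hex container ID alone (e.g. 808856e6....service), while only the podman events carry the readable name= label; joining the two makes the Deactivated/Failed lines legible. A small sketch over events parsed as above (annotate() and the filename are illustrative):

def id_to_name(events):
    """Map full container IDs to their name= label,
    e.g. '34a5cf22...a6c' -> 'nova_compute'."""
    return {cid: name for _ts, cid, name, _status in events}

def annotate(line, mapping):
    """Append the container name to any line that mentions its ID."""
    for cid, name in mapping.items():
        if cid in line:
            return f"{line.rstrip()}  # {name}"
    return line.rstrip()

# mapping = id_to_name(parse_health_events(open("messages")))
# for line in open("messages"):
#     print(annotate(line, mapping))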
Dec 5 03:58:41 localhost podman[98799]: 2025-12-05 08:58:41.196437248 +0000 UTC m=+0.082154918 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1) Dec 5 03:58:41 localhost podman[98799]: 2025-12-05 08:58:41.2089916 +0000 UTC m=+0.094709260 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team) Dec 5 03:58:41 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 03:58:41 localhost systemd[1]: tmp-crun.4Xqke1.mount: Deactivated successfully. Dec 5 03:58:41 localhost podman[98800]: 2025-12-05 08:58:41.308903558 +0000 UTC m=+0.190580624 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 03:58:41 localhost podman[98800]: 2025-12-05 08:58:41.342015268 +0000 UTC m=+0.223692324 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:58:41 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
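Annotation: the config_data label in each record carries the exact check command these runs execute: logrotate_crond and ceilometer_agent_ipmi use a bare healthcheck script, while nova_compute passes '/openstack/healthcheck 5672' (5672 is the standard AMQP port, so the argument presumably probes the RabbitMQ connection; that reading is an assumption, the log only shows the argument). A sketch for re-running a check by hand when a result looks wrong:

    # Invoke the configured check directly and show its exit status
    podman exec nova_compute /openstack/healthcheck 5672; echo "exit=$?"
    # Or have podman run the container's defined healthcheck once
    podman healthcheck run nova_compute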
Dec 5 03:58:41 localhost podman[98798]: 2025-12-05 08:58:41.360707318 +0000 UTC m=+0.246843021 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, container_name=ceilometer_agent_compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:58:41 localhost podman[98798]: 2025-12-05 08:58:41.414768137 +0000 UTC m=+0.300903870 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 03:58:41 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:58:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:58:42 localhost systemd[1]: tmp-crun.nuCrOP.mount: Deactivated successfully. 
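Annotation: the long parenthesized blocks repeated in every event are the container's image labels plus the config_data label injected by TripleO (managed_by=tripleo_ansible); together they describe the full Kolla runtime definition (mounts, healthcheck, restart policy, start_order). The same label can be read back locally without waiting for an event; a sketch using podman's Go-template syntax:

    podman inspect ceilometer_agent_compute --format '{{ index .Config.Labels "config_data" }}'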
Dec 5 03:58:42 localhost podman[98874]: 2025-12-05 08:58:42.201740223 +0000 UTC m=+0.086095438 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, tcib_managed=true, container_name=nova_migration_target, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute) Dec 5 03:58:42 localhost podman[98874]: 2025-12-05 08:58:42.547309073 +0000 UTC m=+0.431664308 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 03:58:42 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:58:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:58:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:58:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:58:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 03:58:47 localhost systemd[1]: tmp-crun.CmdLkJ.mount: Deactivated successfully. Dec 5 03:58:47 localhost systemd[1]: tmp-crun.eTPGz9.mount: Deactivated successfully. 
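Annotation: a health_status event followed by an exec_died event is one complete check cycle: the check runs as an exec session inside the container, the session's exit is logged as exec_died, and systemd then deactivates the transient unit. The aggregated result can also be queried from container state; the JSON field name varies by podman version (.State.Healthcheck on older 3.x builds, .State.Health on newer ones), so the format string below is an assumption to adjust:

    podman inspect nova_migration_target --format '{{ .State.Healthcheck.Status }}'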
Dec 5 03:58:47 localhost podman[98900]: 2025-12-05 08:58:47.287882584 +0000 UTC m=+0.149244064 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=ovn_controller, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, build-date=2025-11-18T23:34:05Z) Dec 5 03:58:47 localhost podman[98899]: 2025-12-05 08:58:47.245203482 +0000 UTC m=+0.107006165 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, maintainer=OpenStack TripleO Team, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1) Dec 5 03:58:47 localhost podman[98900]: 2025-12-05 08:58:47.330474873 +0000 UTC m=+0.191836343 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 5 03:58:47 localhost podman[98900]: unhealthy
Dec 5 03:58:47 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:58:47 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'.
Dec 5 03:58:47 localhost podman[98898]: 2025-12-05 08:58:47.377030183 +0000 UTC m=+0.243030154 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044)
Dec 5 03:58:47 localhost podman[98898]: 2025-12-05 08:58:47.387850653 +0000 UTC m=+0.253850634 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI,
release=1761123044, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:58:47 localhost podman[98897]: 2025-12-05 08:58:47.335989411 +0000 UTC m=+0.206283213 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 5 03:58:47 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
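Annotation: this batch runs four checks in parallel (podman PIDs 98897-98900), so records interleave and a container's health_status can land many lines away from its exec_died. A rough sketch for pulling just the per-container outcomes out of the log (the /var/log/messages path and the name/health_status field order are assumptions based on the records above):

    grep -oE 'container health_status [0-9a-f]{64} \(image=[^,]+, name=[^,]+, health_status=[a-z]+' /var/log/messages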
Dec 5 03:58:47 localhost podman[98899]: 2025-12-05 08:58:47.424619745 +0000 UTC m=+0.286422418 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Dec 5 03:58:47 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 03:58:47 localhost podman[98897]: 2025-12-05 08:58:47.469552425 +0000 UTC m=+0.339846217 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 5 03:58:47 localhost podman[98897]: unhealthy
Dec 5 03:58:47 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 03:58:47 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'.
Dec 5 03:58:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
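Annotation: both unhealthy results above (ovn_controller, then ovn_metadata_agent) mean the check command exited non-zero, so podman prints "unhealthy" and the transient unit fails with status=1/FAILURE; the containers themselves keep running, since a failed healthcheck alone does not stop or restart them. A plausible first triage pass using the check commands recorded in config_data (6642 is the OVN southbound DB port, so the argument is presumably a connectivity probe):

    podman healthcheck run ovn_controller; echo "exit=$?"
    podman exec ovn_controller /openstack/healthcheck 6642
    # Recent history of the failing transient unit
    journalctl -u f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service -n 20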
Dec 5 03:58:53 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 5 03:58:53 localhost recover_tripleo_nova_virtqemud[98982]: 61294
Dec 5 03:58:53 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 5 03:58:53 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 5 03:58:53 localhost systemd[1]: tmp-crun.QXIQdl.mount: Deactivated successfully.
Dec 5 03:58:53 localhost podman[98975]: 2025-12-05 08:58:53.222993692 +0000 UTC m=+0.107862811 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z)
Dec 5 03:58:53 localhost podman[98975]: 2025-12-05 08:58:53.437787233 +0000 UTC m=+0.322656322 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12,
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:58:53 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:58:56 localhost systemd[1]: session-28.scope: Deactivated successfully. Dec 5 03:58:56 localhost systemd[1]: session-28.scope: Consumed 6min 49.510s CPU time. Dec 5 03:58:56 localhost systemd-logind[760]: Session 28 logged out. Waiting for processes to exit. Dec 5 03:58:56 localhost systemd-logind[760]: Removed session 28. Dec 5 03:59:07 localhost systemd[1]: Stopping User Manager for UID 1003... Dec 5 03:59:07 localhost systemd[35774]: Activating special unit Exit the Session... Dec 5 03:59:07 localhost systemd[35774]: Removed slice User Background Tasks Slice. Dec 5 03:59:07 localhost systemd[35774]: Stopped target Main User Target. Dec 5 03:59:07 localhost systemd[35774]: Stopped target Basic System. Dec 5 03:59:07 localhost systemd[35774]: Stopped target Paths. Dec 5 03:59:07 localhost systemd[35774]: Stopped target Sockets. Dec 5 03:59:07 localhost systemd[35774]: Stopped target Timers. Dec 5 03:59:07 localhost systemd[35774]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 5 03:59:07 localhost systemd[35774]: Stopped Daily Cleanup of User's Temporary Directories. Dec 5 03:59:07 localhost systemd[35774]: Closed D-Bus User Message Bus Socket. 
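Annotation: the run of systemd[35774] messages here is the per-user systemd instance for UID 1003 tearing itself down after session 28 closed; the user@1003.service and user-1003.slice accounting lines that follow confirm the cleanup. If a teardown like this appears to stall, loginctl shows what still holds the session or slice:

    loginctl list-sessions
    loginctl user-status 1003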
Dec 5 03:59:07 localhost systemd[35774]: Stopped Create User's Volatile Files and Directories. Dec 5 03:59:07 localhost systemd[35774]: Removed slice User Application Slice. Dec 5 03:59:07 localhost systemd[35774]: Reached target Shutdown. Dec 5 03:59:07 localhost systemd[35774]: Finished Exit the Session. Dec 5 03:59:07 localhost systemd[35774]: Reached target Exit the Session. Dec 5 03:59:07 localhost systemd[1]: user@1003.service: Deactivated successfully. Dec 5 03:59:07 localhost systemd[1]: Stopped User Manager for UID 1003. Dec 5 03:59:07 localhost systemd[1]: user@1003.service: Consumed 4.865s CPU time, read 0B from disk, written 7.0K to disk. Dec 5 03:59:07 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Dec 5 03:59:07 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Dec 5 03:59:07 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Dec 5 03:59:07 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Dec 5 03:59:07 localhost systemd[1]: Removed slice User Slice of UID 1003. Dec 5 03:59:07 localhost systemd[1]: user-1003.slice: Consumed 6min 54.407s CPU time. Dec 5 03:59:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 03:59:10 localhost podman[99008]: 2025-12-05 08:59:10.208038378 +0000 UTC m=+0.090251434 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public) Dec 5 03:59:10 localhost podman[99008]: 2025-12-05 08:59:10.243752658 +0000 UTC m=+0.125965744 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_compute, 
description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, distribution-scope=public) Dec 5 03:59:10 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:59:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:59:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:59:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 03:59:12 localhost podman[99033]: 2025-12-05 08:59:12.2115135 +0000 UTC m=+0.094785723 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container) Dec 5 03:59:12 localhost podman[99033]: 2025-12-05 08:59:12.245316471 +0000 UTC m=+0.128588734 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 03:59:12 localhost podman[99034]: 2025-12-05 08:59:12.255389728 +0000 UTC m=+0.136284698 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, 
config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Dec 5 03:59:12 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 03:59:12 localhost podman[99034]: 2025-12-05 08:59:12.268809917 +0000 UTC m=+0.149704897 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 5 03:59:12 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
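Annotation (added, not part of the captured journal): each block of lines above is one healthcheck cycle — systemd starts a transient unit wrapping `/usr/bin/podman healthcheck run <container-id>`, podman logs a `health_status` event carrying the container's full label set, then an `exec_died` event, and the unit deactivates. A minimal sketch for pulling the name/status pair out of such a `health_status` line, assuming the field order shown in this capture (image=, then name=, then health_status=); `parse_health_event` is a name introduced here, not anything from podman:

import re

# Matches the health_status events in this capture: a 64-hex container id,
# then "(image=..., name=..., health_status=...".
HEALTH_RE = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
    r"health_status=(?P<status>[^,)]+)"
)

def parse_health_event(line):
    """Return (container_name, status) for a health_status line, else None."""
    m = HEALTH_RE.search(line)
    return (m.group("name"), m.group("status")) if m else None

Run against the nova_compute line above, this yields ('nova_compute', 'healthy'); exec_died lines carry no health_status field and are skipped.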
Dec 5 03:59:12 localhost podman[99035]: 2025-12-05 08:59:12.306488807 +0000 UTC m=+0.184296803 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team) Dec 5 03:59:12 localhost podman[99035]: 2025-12-05 08:59:12.362594388 +0000 UTC m=+0.240402384 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-19T00:12:45Z, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:59:12 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:59:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 03:59:13 localhost podman[99104]: 2025-12-05 08:59:13.194162793 +0000 UTC m=+0.081461945 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, container_name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container) Dec 5 03:59:13 localhost podman[99104]: 2025-12-05 08:59:13.595823645 +0000 UTC m=+0.483122767 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com) Dec 5 03:59:13 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:59:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:59:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:59:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:59:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:59:18 localhost podman[99126]: 2025-12-05 08:59:18.213080744 +0000 UTC m=+0.087237951 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vcs-type=git, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step3, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, name=rhosp17/openstack-iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:59:18 localhost podman[99126]: 2025-12-05 08:59:18.249236827 +0000 UTC m=+0.123394044 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, container_name=iscsid, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 03:59:18 localhost systemd[1]: tmp-crun.WY77Wb.mount: Deactivated successfully. Dec 5 03:59:18 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 03:59:18 localhost podman[99127]: 2025-12-05 08:59:18.269666291 +0000 UTC m=+0.142342273 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, vcs-type=git) Dec 5 03:59:18 localhost podman[99127]: 2025-12-05 08:59:18.305128252 +0000 UTC m=+0.177804244 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, vcs-type=git) Dec 5 03:59:18 localhost podman[99128]: 2025-12-05 08:59:18.321411129 +0000 UTC m=+0.190258294 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 
'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com) Dec 5 03:59:18 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:59:18 localhost podman[99125]: 2025-12-05 08:59:18.372192388 +0000 UTC m=+0.246254933 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.12, vcs-type=git, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 5 03:59:18 localhost podman[99128]: 2025-12-05 08:59:18.391555949 +0000 UTC m=+0.260403174 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public) Dec 5 03:59:18 localhost podman[99128]: unhealthy Dec 5 03:59:18 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:59:18 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. 
Dec 5 03:59:18 localhost podman[99125]: 2025-12-05 08:59:18.417702856 +0000 UTC m=+0.291765471 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com) Dec 5 03:59:18 localhost podman[99125]: unhealthy Dec 5 03:59:18 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:59:18 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 03:59:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
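Annotation (added, not part of the captured journal): a podman healthcheck that reports `unhealthy` prints "unhealthy" and exits non-zero, so systemd marks the transient <container-id>.service as failed — the "status=1/FAILURE" lines above for ovn_controller and ovn_metadata_agent are that mechanism, not a crash of the containers themselves. A minimal stdlib sketch that scans a saved journal like this one and prints each container's most recent health result; the input path is an assumption for illustration:

import re

HEALTH_RE = re.compile(
    r"container health_status [0-9a-f]{64} "
    r"\(image=[^,]+, name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)"
)

def latest_health(path):
    """Map container name -> last health_status seen in the file."""
    state = {}
    with open(path, errors="replace") as fh:
        for line in fh:
            m = HEALTH_RE.search(line)
            if m:
                state[m.group("name")] = m.group("status")
    return state

if __name__ == "__main__":
    # Path is illustrative; point it at wherever this capture is stored.
    for name, status in sorted(latest_health("/var/log/messages").items()):
        print(("!! " if status != "healthy" else "   ") + name + ": " + status)

On this capture it would flag ovn_controller and ovn_metadata_agent as unhealthy and list the remaining containers as healthy.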
Dec 5 03:59:23 localhost podman[99218]: 2025-12-05 08:59:23.734270956 +0000 UTC m=+0.100146615 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:59:23 localhost podman[99218]: 2025-12-05 08:59:23.93478067 +0000 UTC m=+0.300656299 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:49:46Z) Dec 5 03:59:23 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 03:59:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 03:59:41 localhost podman[99310]: 2025-12-05 08:59:41.197069427 +0000 UTC m=+0.082758965 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute) Dec 5 03:59:41 localhost podman[99310]: 2025-12-05 08:59:41.23129695 +0000 UTC m=+0.116986458 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, 
com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z) Dec 5 03:59:41 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 03:59:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 03:59:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 03:59:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
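Annotation (added, not part of the captured journal): the same containers recur at roughly 30-second spacing — nova_compute at 08:59:10 and 08:59:41, ceilometer_agent_compute at 08:59:12 and 08:59:43 — which is consistent with a 30-second healthcheck interval; treat that reading as an inference from this capture, not a quoted configuration value. A sketch that measures the per-container gap between successive health_status events, using the UTC timestamp podman prints at the start of each event:

import re
from datetime import datetime

# Captures podman's event timestamp (to whole seconds) and the container name.
EVENT_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+ \+0000 UTC "
    r"m=\+\S+ container health_status \S+ \(image=[^,]+, name=(?P<name>[^,]+)"
)

def check_intervals(lines):
    """Yield (container, seconds_since_previous_check) per health event."""
    last = {}
    for line in lines:
        m = EVENT_RE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S")
        name = m.group("name")
        if name in last:
            yield name, (ts - last[name]).total_seconds()
        last[name] = ts

Feeding this capture through check_intervals would report gaps of about 31 s for nova_compute and ceilometer_agent_compute between the cycles shown above.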
Dec 5 03:59:43 localhost systemd[1]: tmp-crun.S0ov9m.mount: Deactivated successfully. Dec 5 03:59:43 localhost podman[99336]: 2025-12-05 08:59:43.204853657 +0000 UTC m=+0.091890483 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, container_name=ceilometer_agent_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z) Dec 5 03:59:43 localhost podman[99338]: 2025-12-05 08:59:43.269808858 +0000 UTC m=+0.151324805 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, tcib_managed=true, release=1761123044, container_name=ceilometer_agent_ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 03:59:43 localhost podman[99336]: 2025-12-05 08:59:43.259790772 +0000 UTC m=+0.146827658 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 03:59:43 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 03:59:43 localhost podman[99337]: 2025-12-05 08:59:43.40895793 +0000 UTC m=+0.292630913 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond) Dec 5 03:59:43 localhost podman[99337]: 2025-12-05 08:59:43.422295977 +0000 UTC m=+0.305968940 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 03:59:43 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 03:59:43 localhost podman[99338]: 2025-12-05 08:59:43.476755548 +0000 UTC m=+0.358271465 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:59:43 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 03:59:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 03:59:44 localhost systemd[1]: tmp-crun.Mt2EDg.mount: Deactivated successfully. 
Dec 5 03:59:44 localhost podman[99406]: 2025-12-05 08:59:44.208298913 +0000 UTC m=+0.089437868 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044) Dec 5 03:59:44 localhost podman[99406]: 2025-12-05 08:59:44.531831068 +0000 UTC m=+0.412970033 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 03:59:44 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 03:59:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 03:59:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 03:59:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 03:59:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 03:59:49 localhost podman[99429]: 2025-12-05 08:59:49.205389261 +0000 UTC m=+0.089054976 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, architecture=x86_64, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4) Dec 5 03:59:49 localhost podman[99429]: 2025-12-05 08:59:49.215694255 +0000 UTC m=+0.099359900 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container) Dec 5 03:59:49 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 03:59:49 localhost systemd[1]: tmp-crun.hTgkko.mount: Deactivated successfully. 
Dec 5 03:59:49 localhost podman[99428]: 2025-12-05 08:59:49.310908788 +0000 UTC m=+0.194949685 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., url=https://www.redhat.com) Dec 5 03:59:49 localhost podman[99431]: 2025-12-05 08:59:49.27848029 +0000 UTC m=+0.154269975 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=ovn_controller, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, config_id=tripleo_step4) Dec 5 03:59:49 localhost podman[99428]: 2025-12-05 08:59:49.350431753 +0000 UTC m=+0.234472650 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4) Dec 5 03:59:49 localhost podman[99428]: unhealthy Dec 5 03:59:49 localhost podman[99430]: 2025-12-05 08:59:49.358215691 +0000 UTC m=+0.238913666 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step3, container_name=collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Dec 5 03:59:49 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:59:49 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 03:59:49 localhost podman[99430]: 2025-12-05 08:59:49.369421012 +0000 UTC m=+0.250118967 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-collectd, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 
collectd) Dec 5 03:59:49 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 03:59:49 localhost podman[99431]: 2025-12-05 08:59:49.411427463 +0000 UTC m=+0.287217198 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Dec 5 03:59:49 localhost podman[99431]: unhealthy Dec 5 03:59:49 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 03:59:49 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 03:59:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 03:59:54 localhost podman[99505]: 2025-12-05 08:59:54.215391192 +0000 UTC m=+0.096643558 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 03:59:54 localhost podman[99505]: 2025-12-05 08:59:54.460841127 +0000 UTC m=+0.342093443 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 03:59:54 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:00:09 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 04:00:09 localhost recover_tripleo_nova_virtqemud[99543]: 61294 Dec 5 04:00:09 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 04:00:09 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 04:00:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 04:00:12 localhost systemd[1]: tmp-crun.oMzDSG.mount: Deactivated successfully. 
Dec 5 04:00:12 localhost podman[99545]: 2025-12-05 09:00:12.182986184 +0000 UTC m=+0.069346585 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step5, release=1761123044, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:00:12 localhost podman[99545]: 2025-12-05 09:00:12.233737492 +0000 UTC m=+0.120097873 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, 
konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, release=1761123044, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 04:00:12 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:00:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:00:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:00:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 04:00:14 localhost systemd[1]: tmp-crun.GNqFiQ.mount: Deactivated successfully. Dec 5 04:00:14 localhost podman[99572]: 2025-12-05 09:00:14.217028334 +0000 UTC m=+0.094708029 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64) Dec 5 04:00:14 localhost podman[99572]: 2025-12-05 09:00:14.253650221 +0000 UTC m=+0.131329916 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git) Dec 5 04:00:14 localhost podman[99573]: 2025-12-05 09:00:14.268987048 +0000 UTC m=+0.140616918 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 04:00:14 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:00:14 localhost podman[99573]: 2025-12-05 09:00:14.296164977 +0000 UTC m=+0.167794817 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, container_name=ceilometer_agent_ipmi, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 04:00:14 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 04:00:14 localhost podman[99571]: 2025-12-05 09:00:14.317760346 +0000 UTC m=+0.196915566 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 04:00:14 localhost podman[99571]: 2025-12-05 09:00:14.35167214 +0000 UTC m=+0.230827400 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4) Dec 5 04:00:14 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 04:00:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 04:00:15 localhost podman[99644]: 2025-12-05 09:00:15.211999292 +0000 UTC m=+0.095226565 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public) Dec 5 04:00:15 localhost podman[99644]: 2025-12-05 09:00:15.608845803 +0000 UTC m=+0.492073056 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container) Dec 5 04:00:15 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:00:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:00:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:00:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:00:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:00:20 localhost systemd[1]: tmp-crun.hgka6E.mount: Deactivated successfully. 
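Each of these event records embeds the container's config_data as a Python-style dict literal (single quotes, True/False), including the actual check command under the 'healthcheck' key: '/openstack/healthcheck 5672' for nova_compute, the bare '/openstack/healthcheck' for the ceilometer agents and collectd. Because the dict itself contains commas, splitting the label list on ', ' cannot isolate it; balancing braces and handing the literal to ast.literal_eval does. A sketch under that assumption (the helper name is hypothetical):

    import ast

    def extract_config_data(line):
        """Parse the config_data={...} label embedded in a podman event record.
        The value is a Python-style dict literal, so ast.literal_eval can load it;
        braces are balanced by hand because the dict itself contains commas."""
        start = line.find("config_data={")
        if start == -1:
            return None
        i = start + len("config_data=")
        depth = 0
        for j in range(i, len(line)):
            if line[j] == "{":
                depth += 1
            elif line[j] == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[i:j + 1])
        return None  # unbalanced brace: truncated record

For the nova_compute records, extract_config_data(record)['healthcheck']['test'] returns '/openstack/healthcheck 5672'.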
Dec 5 04:00:20 localhost podman[99670]: 2025-12-05 09:00:20.220720092 +0000 UTC m=+0.103306061 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3) Dec 5 04:00:20 localhost podman[99670]: 2025-12-05 09:00:20.255189333 +0000 UTC m=+0.137775362 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64) Dec 5 04:00:20 localhost podman[99671]: 2025-12-05 09:00:20.268640353 +0000 UTC m=+0.148292722 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, release=1761123044) Dec 5 04:00:20 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:00:20 localhost podman[99671]: 2025-12-05 09:00:20.301184856 +0000 UTC m=+0.180837255 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-collectd-container, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, architecture=x86_64, container_name=collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3) Dec 5 04:00:20 localhost podman[99669]: 2025-12-05 09:00:20.315231234 +0000 UTC m=+0.201848005 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4) Dec 5 04:00:20 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 04:00:20 localhost podman[99669]: 2025-12-05 09:00:20.329667844 +0000 UTC m=+0.216284695 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 5 04:00:20 localhost podman[99669]: unhealthy Dec 5 04:00:20 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:00:20 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
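Here the pattern diverges from the healthy cycles above: the check for ovn_metadata_agent reports health_status=unhealthy, podman prints 'unhealthy' and exits 1, and systemd marks the transient unit failed (status=1/FAILURE, result 'exit-code'); the same three-step sequence repeats for ovn_controller just below. A sketch of correlating the two records into an alert, reusing parse_health_event from the earlier sketch and assuming lines arrive in journal order:

    def unhealthy_alerts(lines):
        """Pair each health_status=unhealthy record with the systemd failure that
        follows it (unit <container-id>.service), as seen for ovn_metadata_agent
        and ovn_controller in this log. parse_health_event is the helper from
        the first sketch."""
        pending = {}  # container id -> container name, awaiting the failure record
        for line in lines:
            ev = parse_health_event(line)
            if ev and ev[2] == "unhealthy":
                pending[ev[0]] = ev[1]
                continue
            for cid, name in list(pending.items()):
                if cid + ".service: Failed with result" in line:
                    yield name, line.strip()
                    del pending[cid]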
Dec 5 04:00:20 localhost podman[99672]: 2025-12-05 09:00:20.384663591 +0000 UTC m=+0.258811182 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, container_name=ovn_controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.12) Dec 5 04:00:20 localhost podman[99672]: 2025-12-05 09:00:20.422840185 +0000 UTC m=+0.296987786 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, release=1761123044, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, vcs-type=git) Dec 5 04:00:20 localhost podman[99672]: unhealthy Dec 5 04:00:20 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:00:20 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:00:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:00:25 localhost podman[99749]: 2025-12-05 09:00:25.203495474 +0000 UTC m=+0.089766409 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step1, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T22:49:46Z, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public) Dec 5 04:00:25 localhost podman[99749]: 2025-12-05 09:00:25.394736434 +0000 UTC m=+0.281007319 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Dec 5 04:00:25 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:00:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 04:00:43 localhost podman[99855]: 2025-12-05 09:00:43.215135418 +0000 UTC m=+0.090686025 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12) Dec 5 04:00:43 localhost podman[99855]: 2025-12-05 09:00:43.250742604 +0000 UTC m=+0.126293201 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, 
build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc.) Dec 5 04:00:43 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:00:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:00:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:00:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 04:00:45 localhost podman[99880]: 2025-12-05 09:00:45.201982041 +0000 UTC m=+0.089693976 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:00:45 localhost podman[99882]: 2025-12-05 09:00:45.251608424 +0000 UTC m=+0.132729498 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc.) 
Dec 5 04:00:45 localhost podman[99880]: 2025-12-05 09:00:45.282573108 +0000 UTC m=+0.170285073 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, build-date=2025-11-19T00:11:48Z, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 5 04:00:45 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 04:00:45 localhost podman[99882]: 2025-12-05 09:00:45.308661593 +0000 UTC m=+0.189782637 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc.) Dec 5 04:00:45 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 04:00:45 localhost podman[99881]: 2025-12-05 09:00:45.357376609 +0000 UTC m=+0.241239996 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vcs-type=git) Dec 5 04:00:45 localhost podman[99881]: 2025-12-05 09:00:45.393706527 +0000 UTC m=+0.277570024 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-cron, config_id=tripleo_step4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4) Dec 5 04:00:45 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:00:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 04:00:46 localhost podman[99954]: 2025-12-05 09:00:46.237881857 +0000 UTC m=+0.119534646 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:00:46 localhost podman[99954]: 2025-12-05 09:00:46.625975361 +0000 UTC m=+0.507628060 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com) Dec 5 04:00:46 
localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:00:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:00:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:00:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:00:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:00:51 localhost podman[99985]: 2025-12-05 09:00:51.196608805 +0000 UTC m=+0.068941643 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1761123044, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 5 04:00:51 localhost podman[99985]: 2025-12-05 09:00:51.21151118 +0000 UTC m=+0.083844038 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, tcib_managed=true, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 04:00:51 localhost podman[99985]: unhealthy Dec 5 04:00:51 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:00:51 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:00:51 localhost systemd[1]: tmp-crun.onG5Vv.mount: Deactivated successfully. 
Dec 5 04:00:51 localhost podman[99979]: 2025-12-05 09:00:51.316639295 +0000 UTC m=+0.190933163 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, version=17.1.12, container_name=collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=) Dec 5 04:00:51 localhost podman[99977]: 2025-12-05 09:00:51.27154791 +0000 UTC m=+0.154662296 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible) Dec 5 04:00:51 localhost podman[99979]: 2025-12-05 09:00:51.35091219 +0000 UTC m=+0.225206038 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, tcib_managed=true, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 5 04:00:51 localhost podman[99978]: 2025-12-05 09:00:51.359155641 +0000 UTC m=+0.238018038 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
container_name=iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, release=1761123044, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 04:00:51 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 04:00:51 localhost podman[99978]: 2025-12-05 09:00:51.398783189 +0000 UTC m=+0.277645536 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, distribution-scope=public, url=https://www.redhat.com, container_name=iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 5 04:00:51 localhost podman[99977]: 2025-12-05 09:00:51.407419222 +0000 UTC m=+0.290533628 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 5 04:00:51 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:00:51 localhost podman[99977]: unhealthy Dec 5 04:00:51 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:00:51 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:00:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:00:56 localhost systemd[1]: tmp-crun.2EDE0i.mount: Deactivated successfully. 
Dec 5 04:00:56 localhost podman[100057]: 2025-12-05 09:00:56.189955248 +0000 UTC m=+0.079606108 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, config_id=tripleo_step1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, release=1761123044, url=https://www.redhat.com) Dec 5 04:00:56 localhost podman[100057]: 2025-12-05 09:00:56.395168336 +0000 UTC m=+0.284819236 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 5 04:00:56 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:01:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 04:01:14 localhost podman[100112]: 2025-12-05 09:01:14.186733093 +0000 UTC m=+0.075243585 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1) Dec 5 04:01:14 localhost podman[100112]: 2025-12-05 09:01:14.217590104 +0000 UTC m=+0.106100586 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, 
tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team) Dec 5 04:01:14 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:01:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:01:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:01:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 04:01:16 localhost podman[100138]: 2025-12-05 09:01:16.204792035 +0000 UTC m=+0.088672174 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 04:01:16 localhost podman[100138]: 2025-12-05 09:01:16.236811572 +0000 UTC m=+0.120691931 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 04:01:16 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 04:01:16 localhost podman[100139]: 2025-12-05 09:01:16.320969968 +0000 UTC m=+0.199350749 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 04:01:16 localhost podman[100139]: 2025-12-05 09:01:16.334841841 +0000 UTC m=+0.213222662 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z) Dec 5 04:01:16 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:01:16 localhost podman[100140]: 2025-12-05 09:01:16.413240762 +0000 UTC m=+0.289965153 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, release=1761123044) Dec 5 04:01:16 localhost podman[100140]: 2025-12-05 09:01:16.471701255 +0000 UTC m=+0.348425636 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 04:01:16 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 04:01:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 04:01:17 localhost podman[100211]: 2025-12-05 09:01:17.203701945 +0000 UTC m=+0.087096647 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, container_name=nova_migration_target, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible) Dec 5 04:01:17 localhost podman[100211]: 2025-12-05 09:01:17.599671008 +0000 UTC m=+0.483065690 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, 
io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, release=1761123044, batch=17.1_20251118.1, container_name=nova_migration_target, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12) Dec 5 04:01:17 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:01:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:01:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:01:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:01:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
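The transient units that systemd starts and stops throughout this window are named after the full 64-character container ID (for example 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service just above), which makes the "Deactivated successfully" and "Failed with result" lines hard to attribute on their own. A small sketch that resolves such an ID back to its container name; it assumes podman is on PATH and the container still exists ({{.Name}} is the container-name field of podman inspect):

    import subprocess

    def container_name(container_id: str) -> str:
        """Resolve a full or abbreviated container ID to its name."""
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.Name}}", container_id],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    # With the ID from the unit lines above, this would print "nova_migration_target":
    # print(container_name("94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2"))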
Dec 5 04:01:22 localhost podman[100233]: 2025-12-05 09:01:22.200565086 +0000 UTC m=+0.085219470 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 5 04:01:22 localhost podman[100234]: 2025-12-05 09:01:22.257583574 +0000 UTC m=+0.138623518 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vendor=Red 
Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, build-date=2025-11-18T23:44:13Z, architecture=x86_64, container_name=iscsid, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com) Dec 5 04:01:22 localhost podman[100233]: 2025-12-05 09:01:22.270335423 +0000 UTC m=+0.154989837 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 5 04:01:22 localhost podman[100233]: unhealthy Dec 5 04:01:22 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:01:22 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
Dec 5 04:01:22 localhost podman[100238]: 2025-12-05 09:01:22.321439612 +0000 UTC m=+0.191566893 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 04:01:22 localhost podman[100235]: 2025-12-05 09:01:22.369363563 +0000 UTC m=+0.244582710 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12) Dec 5 04:01:22 localhost podman[100238]: 2025-12-05 09:01:22.393119577 +0000 UTC m=+0.263246808 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Dec 5 04:01:22 localhost podman[100238]: unhealthy Dec 5 04:01:22 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:01:22 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:01:22 localhost podman[100235]: 2025-12-05 09:01:22.407746592 +0000 UTC m=+0.282965729 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd) Dec 5 04:01:22 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
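Two probes in this window come back unhealthy: ovn_metadata_agent (0a8784d7...) and ovn_controller (f295b469...). In both cases podman prints the bare word "unhealthy" and exits non-zero, which is why systemd records each transient unit as "Main process exited, code=exited, status=1/FAILURE". A re-probe sketch using the same "podman healthcheck run" command the units invoke above; its exit code (0 on success, non-zero otherwise) is what the status=1/FAILURE lines reflect:

    import subprocess

    def probe(container: str) -> bool:
        """Re-run a container's configured healthcheck; True if it passes.

        podman healthcheck run executes the check command defined for the
        container (here the /openstack/healthcheck scripts shown in the
        config_data labels) and exits 0 on success, non-zero on failure.
        """
        res = subprocess.run(["podman", "healthcheck", "run", container],
                             capture_output=True, text=True)
        return res.returncode == 0

    for name in ("ovn_metadata_agent", "ovn_controller"):
        print(name, "healthy" if probe(name) else "unhealthy")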
Dec 5 04:01:22 localhost podman[100234]: 2025-12-05 09:01:22.448181006 +0000 UTC m=+0.329220990 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-iscsid-container, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, distribution-scope=public) Dec 5 04:01:22 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:01:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:01:26 localhost systemd[1]: tmp-crun.k3W6j9.mount: Deactivated successfully. 
Dec 5 04:01:26 localhost podman[100326]: 2025-12-05 09:01:26.825686541 +0000 UTC m=+0.087923692 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.openshift.expose-services=) Dec 5 04:01:27 localhost podman[100326]: 2025-12-05 09:01:27.020507041 +0000 UTC m=+0.282744212 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 5 04:01:27 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:01:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
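The config_data= label that podman attaches to every event above is a Python dict literal (single-quoted strings, True/False booleans), so once sliced out of a line it can be parsed with ast.literal_eval instead of ad-hoc string splitting. A sketch under the assumption that the braces are balanced and no '{' or '}' occurs inside the quoted strings, which holds for the events in this log:

    import ast

    def extract_config_data(line: str) -> dict:
        """Pull the config_data={...} label out of a podman event line."""
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:  # first brace-balanced span
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unbalanced config_data label")

    # e.g. extract_config_data(event_line)["healthcheck"]["test"]
    # -> "/openstack/healthcheck" for the metrics_qdr events above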
Dec 5 04:01:45 localhost podman[100416]: 2025-12-05 09:01:45.210158961 +0000 UTC m=+0.094258584 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, config_id=tripleo_step5, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Dec 5 04:01:45 localhost podman[100416]: 2025-12-05 09:01:45.243727405 +0000 UTC m=+0.127827028 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step5, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Dec 5 04:01:45 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 04:01:47 localhost systemd[1]: tmp-crun.I11EFB.mount: Deactivated successfully. Dec 5 04:01:47 localhost podman[100444]: 2025-12-05 09:01:47.213465965 +0000 UTC m=+0.090247873 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, release=1761123044, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 04:01:47 localhost podman[100444]: 2025-12-05 09:01:47.244596224 +0000 UTC m=+0.121378132 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:01:47 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 04:01:47 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 04:01:47 localhost podman[100443]: 2025-12-05 09:01:47.269464912 +0000 UTC m=+0.149436217 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vcs-type=git, release=1761123044, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:01:47 localhost recover_tripleo_nova_virtqemud[100498]: 61294 Dec 5 04:01:47 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 04:01:47 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
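The "Check and recover tripleo_nova_virtqemud" unit that just finished appears to be TripleO's watchdog for the modular libvirt daemon: a periodic oneshot service that probes virtqemud and restarts its container when the probe fails. Here it exits cleanly, and the number it logs (61294) reads like the PID of the process it found alive; that interpretation is an inference from the log, not something the service states. A hypothetical follow-up check, using the PID exactly as logged:

    # Confirm the process reported by recover_tripleo_nova_virtqemud is
    # still alive and see what it is (Linux-only; PID taken from the log).
    pid = 61294
    try:
        with open(f"/proc/{pid}/comm") as fh:
            print(pid, "->", fh.read().strip())
    except FileNotFoundError:
        print(pid, "is no longer running")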
Dec 5 04:01:47 localhost podman[100443]: 2025-12-05 09:01:47.306587975 +0000 UTC m=+0.186559330 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, container_name=logrotate_crond, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64) Dec 5 04:01:47 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 04:01:47 localhost podman[100442]: 2025-12-05 09:01:47.326352017 +0000 UTC m=+0.208646713 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 04:01:47 localhost podman[100442]: 2025-12-05 09:01:47.358679453 +0000 UTC m=+0.240974129 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Dec 5 04:01:47 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 04:01:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
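Every podman event above embeds the container's TripleO definition in a config_data=... label. Those blobs are Python dict literals (single-quoted strings, True/False), not JSON, so json.loads() rejects them while ast.literal_eval() parses them safely. A short sample trimmed from the nova_compute entry, with keys and values copied from the log:

    import ast

    # Abridged config_data blob; the full blob carries the same structure.
    sample = ("{'depends_on': ['tripleo_nova_libvirt.target'], "
              "'healthcheck': {'test': '/openstack/healthcheck 5672'}, "
              "'privileged': True, 'user': 'nova'}")

    cfg = ast.literal_eval(sample)
    print(cfg["healthcheck"]["test"])   # /openstack/healthcheck 5672
    print(cfg["privileged"])            # True

Extracting the full blob from a raw line takes a balanced-brace scan rather than a simple regex, since the volume lists inside it contain commas and nested quotes.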
Dec 5 04:01:48 localhost podman[100514]: 2025-12-05 09:01:48.192215638 +0000 UTC m=+0.078827894 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044) Dec 5 04:01:48 localhost podman[100514]: 2025-12-05 09:01:48.570915245 +0000 UTC m=+0.457527541 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z) Dec 5 04:01:48 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:01:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:01:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:01:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:01:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:01:53 localhost systemd[1]: tmp-crun.bUlATN.mount: Deactivated successfully. Dec 5 04:01:53 localhost systemd[1]: tmp-crun.DMevNr.mount: Deactivated successfully. 
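The timestamps also show the probe cadence: nova_compute, for example, is checked at 09:01:45.210 UTC above and again at 09:02:16.196 further down, about 31 seconds apart, which is consistent with podman's default 30-second healthcheck interval plus scheduling jitter. Computing the gap from the two timestamps as they appear in the log (fractional seconds truncated to microseconds so fromisoformat accepts them):

    from datetime import datetime

    t1 = datetime.fromisoformat("2025-12-05 09:01:45.210158")
    t2 = datetime.fromisoformat("2025-12-05 09:02:16.196727")
    print((t2 - t1).total_seconds())   # ~31.0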
Dec 5 04:01:53 localhost podman[100538]: 2025-12-05 09:01:53.183145376 +0000 UTC m=+0.063220158 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-18T23:44:13Z, container_name=iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team) Dec 5 04:01:53 localhost podman[100540]: 2025-12-05 09:01:53.245691004 +0000 UTC m=+0.118252877 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, tcib_managed=true) Dec 5 04:01:53 localhost podman[100540]: 2025-12-05 09:01:53.259585138 +0000 UTC m=+0.132146951 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 5 04:01:53 localhost podman[100540]: unhealthy Dec 5 04:01:53 localhost podman[100538]: 2025-12-05 09:01:53.267571831 +0000 UTC m=+0.147646603 container 
exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:01:53 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:01:53 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:01:53 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
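Note the failure path for ovn_controller just above: the probe reports health_status=unhealthy, podman prints the verdict "unhealthy" and exits nonzero, and systemd records the transient unit as status=1/FAILURE with result 'exit-code'. Only the healthcheck unit fails; the container itself keeps running under its own tripleo service, with its health state flipped to unhealthy. A sketch that lists the containers whose probes failed, again assuming the journal is saved as logfile.txt (hypothetical filename):

    import re

    # The container name and the verdict live in the same event line,
    # e.g. "... name=ovn_controller, health_status=unhealthy, ...".
    pat = re.compile(r"name=([^,)]+).*health_status=unhealthy")

    with open("logfile.txt") as fh:
        for line in fh:
            m = pat.search(line)
            if m:
                print("unhealthy:", m.group(1))

Run against this section, that prints ovn_controller and, a few seconds later, ovn_metadata_agent.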
Dec 5 04:01:53 localhost podman[100539]: 2025-12-05 09:01:53.309409606 +0000 UTC m=+0.181606568 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 04:01:53 localhost podman[100539]: 2025-12-05 09:01:53.32264516 +0000 UTC m=+0.194842152 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat 
OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, release=1761123044, config_id=tripleo_step3, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 5 04:01:53 localhost podman[100537]: 2025-12-05 09:01:53.221917479 +0000 UTC m=+0.098068561 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 5 04:01:53 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 04:01:53 localhost podman[100537]: 2025-12-05 09:01:53.40168442 +0000 UTC m=+0.277835432 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 04:01:53 localhost podman[100537]: unhealthy Dec 5 04:01:53 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:01:53 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:01:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:01:57 localhost podman[100611]: 2025-12-05 09:01:57.208333659 +0000 UTC m=+0.094610026 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd) Dec 5 04:01:57 localhost podman[100611]: 2025-12-05 09:01:57.413222566 +0000 UTC m=+0.299498943 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public) Dec 5 04:01:57 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
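In the ceph-osd entries that follow, the rocksdb statistics report is a multi-line block that rsyslog has folded into a single line by escaping each embedded newline as #012 (octal 012 = '\n'). Undoing that escape restores the readable report; the sample string below is abridged from the first dump:

    # "#012" is rsyslog's escape for an embedded newline; replace it to
    # recover the original multi-line rocksdb stats block.
    line = ("rocksdb: #012** DB Stats **#012Uptime(secs): 4200.1 total, "
            "600.0 interval#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent")
    print(line.replace("#012", "\n"))

Read that way, the two dumps say the OSDs have been up roughly 4200 seconds, have absorbed only about 0.02 GB of writes each, and have never stalled (0.0 percent), i.e. these rocksdb instances are essentially idle.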
Dec 5 04:02:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 04:02:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 4751 writes, 21K keys, 4751 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4751 writes, 573 syncs, 8.29 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 5 04:02:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 04:02:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.2 total, 600.0 interval#012Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5843 writes, 832 syncs, 7.02 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 5 04:02:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 04:02:16 localhost podman[100640]: 2025-12-05 09:02:16.196727389 +0000 UTC m=+0.083959911 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 5 04:02:16 localhost podman[100640]: 2025-12-05 09:02:16.248714964 +0000 UTC m=+0.135947496 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute) Dec 5 04:02:16 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:02:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:02:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:02:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 04:02:18 localhost podman[100666]: 2025-12-05 09:02:18.213016188 +0000 UTC m=+0.095170463 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4) Dec 5 04:02:18 localhost systemd[1]: tmp-crun.0PH35e.mount: Deactivated successfully. Dec 5 04:02:18 localhost podman[100667]: 2025-12-05 09:02:18.27474338 +0000 UTC m=+0.150157560 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-cron-container, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron) Dec 5 04:02:18 localhost podman[100668]: 2025-12-05 
09:02:18.32098639 +0000 UTC m=+0.194342507 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 04:02:18 localhost podman[100667]: 2025-12-05 09:02:18.340191495 +0000 UTC m=+0.215605695 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, distribution-scope=public, version=17.1.12, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible) Dec 5 04:02:18 localhost podman[100666]: 2025-12-05 09:02:18.349557901 +0000 UTC m=+0.231712186 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp17/openstack-ceilometer-compute, release=1761123044, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 04:02:18 localhost podman[100668]: 2025-12-05 09:02:18.348011974 +0000 UTC m=+0.221368061 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, release=1761123044, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 04:02:18 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
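The two ceph-osd RocksDB "DUMPING STATS" records at the top of this excerpt were flattened by rsyslog, which escapes control characters as '#' plus three octal digits (so '#012' is a newline); the multi-line DB Stats block therefore travels as a single record (expanded above for readability, though the raw /var/log/messages still carries the escapes). A minimal Python sketch for decoding such records and pulling out the cumulative write counters; the helper names and the sample string are illustrative, not part of any Ceph tooling:

import re

# rsyslog escapes control characters as '#' + three octal digits,
# e.g. '#012' for '\n', so a multi-line RocksDB stats dump arrives
# as one flattened record.
def unescape_syslog(msg: str) -> str:
    return re.sub(r'#([0-7]{3})', lambda m: chr(int(m.group(1), 8)), msg)

# Grab the cumulative write/WAL counters out of one "DB Stats" dump.
STAT = re.compile(r'^Cumulative (writes|WAL): (\d+) writes', re.M)

def cumulative_writes(record: str) -> dict:
    return {kind: int(n) for kind, n in STAT.findall(unescape_syslog(record))}

sample = ('** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval'
          '#012Cumulative writes: 4751 writes, 21K keys, 4751 commit groups,'
          ' 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s'
          '#012Cumulative WAL: 4751 writes, 573 syncs, 8.29 writes per sync,'
          ' written: 0.02 GB, 0.00 MB/s')
print(cumulative_writes(sample))   # -> {'writes': 4751, 'WAL': 4751}

Both OSD instances report zero interval writes and zero stall time for the 600-second window, so the OSD key-value stores were effectively idle at this point.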
Dec 5 04:02:18 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 04:02:18 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 04:02:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 04:02:19 localhost podman[100738]: 2025-12-05 09:02:19.194800773 +0000 UTC m=+0.083359002 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, distribution-scope=public, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 04:02:19 localhost podman[100738]: 2025-12-05 09:02:19.562616909 +0000 UTC m=+0.451175158 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 5 04:02:19 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:02:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:02:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:02:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:02:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:02:24 localhost systemd[1]: tmp-crun.2btC5I.mount: Deactivated successfully. 
Dec 5 04:02:24 localhost podman[100764]: 2025-12-05 09:02:24.21842049 +0000 UTC m=+0.095665117 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 5 04:02:24 localhost podman[100764]: 2025-12-05 09:02:24.228205358 +0000 UTC m=+0.105449995 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, 
architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, container_name=collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 5 04:02:24 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
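A note on the verbose podman records above: every health_status and exec_died event embeds the container's full TripleO definition in its label dump, and the config_data value is a Python-style dict literal (image, volumes, healthcheck test, net/pid/ipc namespaces, start_order). Because it is a plain literal, it can be recovered mechanically; a sketch under the assumption that no brace characters occur inside the quoted values, which holds for every record in this log (extract_config_data is an illustrative helper, not an existing tool):

import ast

# config_data=... is a Python-style dict literal spliced into the
# comma-separated label dump. Find its balanced braces and
# literal_eval the slice. Parses the first occurrence on the line;
# assumes no '{' or '}' inside quoted values (true for these records).
def extract_config_data(record: str) -> dict:
    start = record.index('config_data=') + len('config_data=')
    depth = 0
    for i, c in enumerate(record[start:], start):
        if c == '{':
            depth += 1
        elif c == '}':
            depth -= 1
            if depth == 0:
                return ast.literal_eval(record[start:i + 1])
    raise ValueError('unbalanced config_data literal')

# e.g. extract_config_data(line)['healthcheck']
#   -> {'test': '/openstack/healthcheck'}

The config_id labels also record when each container was created during deployment: metrics_qdr carries tripleo_step1, collectd and iscsid tripleo_step3, the ceilometer agents, OVN containers and logrotate_crond tripleo_step4, and nova_compute tripleo_step5.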
Dec 5 04:02:24 localhost podman[100765]: 2025-12-05 09:02:24.313775428 +0000 UTC m=+0.187182299 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vcs-type=git, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:02:24 localhost podman[100762]: 2025-12-05 09:02:24.356074628 +0000 UTC m=+0.239060010 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, tcib_managed=true) Dec 5 04:02:24 localhost podman[100765]: 2025-12-05 09:02:24.383543605 +0000 UTC m=+0.256950496 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, tcib_managed=true, build-date=2025-11-18T23:34:05Z) Dec 5 04:02:24 localhost podman[100765]: unhealthy Dec 5 04:02:24 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:02:24 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:02:24 localhost podman[100762]: 2025-12-05 09:02:24.400806241 +0000 UTC m=+0.283791623 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 5 04:02:24 localhost podman[100762]: unhealthy Dec 5 04:02:24 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:02:24 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:02:24 localhost podman[100763]: 2025-12-05 09:02:24.471368483 +0000 UTC m=+0.351800268 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, maintainer=OpenStack TripleO Team) Dec 5 04:02:24 localhost podman[100763]: 2025-12-05 09:02:24.48572181 +0000 UTC m=+0.366153635 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 04:02:24 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:02:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
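The ovn_controller and ovn_metadata_agent events a few records back show the failure path of this healthcheck scheme, in contrast to the healthy runs around them: systemd starts a transient "podman healthcheck run <id>" unit, podman logs health_status=unhealthy and prints "unhealthy", the unit's main process exits with status 1, and systemd records "Failed with result 'exit-code'"; passing checks instead end with "Deactivated successfully". A sketch that tallies health states per container from a saved log, assuming one syslog record per line as in the raw file (the path and helper name are illustrative):

import re
from collections import Counter

# health_status records carry "name=<container>, health_status=<state>"
# in the label dump, directly after the image= field in these logs.
HEALTH = re.compile(
    r'container health_status [0-9a-f]{64} '
    r'\(image=[^,]+, name=([\w-]+), health_status=(\w+)')

def health_counts(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding='utf-8', errors='replace') as f:
        for line in f:              # one syslog record per line assumed
            m = HEALTH.search(line)
            if m:
                counts[m.group(1), m.group(2)] += 1
    return counts

# Over this window it would report e.g. ('nova_compute', 'healthy'): 2,
# ('ovn_controller', 'unhealthy'): 1, ('ovn_metadata_agent', 'unhealthy'): 1.
print(health_counts('/var/log/messages'))   # path is illustrative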
Dec 5 04:02:28 localhost podman[100835]: 2025-12-05 09:02:28.208360678 +0000 UTC m=+0.087547580 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 04:02:28 localhost podman[100835]: 2025-12-05 09:02:28.41273025 +0000 UTC m=+0.291917192 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z) Dec 5 04:02:28 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:02:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 04:02:47 localhost systemd[1]: tmp-crun.CUfj9r.mount: Deactivated successfully. 
Dec 5 04:02:47 localhost podman[100941]: 2025-12-05 09:02:47.25696123 +0000 UTC m=+0.093952866 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step5, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute) Dec 5 04:02:47 localhost podman[100941]: 2025-12-05 09:02:47.317719872 +0000 UTC m=+0.154711438 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step5, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:02:47 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:02:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:02:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:02:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 04:02:49 localhost systemd[1]: tmp-crun.NF1IHw.mount: Deactivated successfully. Dec 5 04:02:49 localhost podman[100971]: 2025-12-05 09:02:49.224272775 +0000 UTC m=+0.103629631 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, architecture=x86_64, url=https://www.redhat.com) Dec 5 04:02:49 localhost podman[100971]: 2025-12-05 09:02:49.257599911 +0000 UTC m=+0.136956757 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4) Dec 5 04:02:49 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
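[Editor's note] The config_data=... blob repeated in every event is a Python dict literal, so it can be recovered programmatically rather than read by eye. A sketch under that assumption: cut the balanced {...} span after "config_data=" and hand it to ast.literal_eval (the blobs above contain no stray braces inside quoted strings, so brace counting is safe). Only the first blob per line is parsed.

import ast, sys

def extract_config_data(record):
    # Scan for the balanced {...} span following "config_data=".
    start = record.index('config_data=') + len('config_data=')
    depth = 0
    for i in range(start, len(record)):
        if record[i] == '{':
            depth += 1
        elif record[i] == '}':
            depth -= 1
            if depth == 0:
                return ast.literal_eval(record[start:i + 1])
    raise ValueError('unbalanced config_data blob')

if __name__ == '__main__':
    for line in sys.stdin:
        if 'config_data=' in line:
            cfg = extract_config_data(line)
            print(cfg['image'], '-', len(cfg.get('volumes', [])), 'volumes')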
Dec 5 04:02:49 localhost podman[100969]: 2025-12-05 09:02:49.276423656 +0000 UTC m=+0.156757561 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, container_name=ceilometer_agent_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12) Dec 5 04:02:49 localhost podman[100969]: 2025-12-05 09:02:49.306366508 +0000 UTC m=+0.186700403 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, release=1761123044, 
com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4) Dec 5 04:02:49 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 04:02:49 localhost podman[100970]: 2025-12-05 09:02:49.319029735 +0000 UTC m=+0.197233576 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, container_name=logrotate_crond) Dec 5 04:02:49 localhost podman[100970]: 2025-12-05 09:02:49.33265833 +0000 UTC m=+0.210862181 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Dec 5 04:02:49 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:02:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
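[Editor's note] The check cadence is visible in the timestamps: nova_compute fires at 04:02:47 and again at 04:03:18, the ceilometer agents at 04:02:49 and 04:03:20, roughly 31 s apart, which is consistent with podman's default 30-second healthcheck interval. A small sketch to compute that spacing from parsed events; the year is taken from the UTC timestamps in the records (2025), since syslog timestamps omit it.

from datetime import datetime

def spacing(events):
    """events: iterable of (syslog_ts, name, status), e.g. from health_events()."""
    last = {}
    for ts, name, _ in events:
        t = datetime.strptime('2025 ' + ts, '%Y %b %d %H:%M:%S')
        if name in last:
            yield name, (t - last[name]).total_seconds()
        last[name] = t

sample = [('Dec 5 04:02:47', 'nova_compute', 'healthy'),
          ('Dec 5 04:03:18', 'nova_compute', 'healthy')]
print(list(spacing(sample)))  # [('nova_compute', 31.0)]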
Dec 5 04:02:50 localhost podman[101040]: 2025-12-05 09:02:50.192448196 +0000 UTC m=+0.078951829 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, tcib_managed=true) Dec 5 04:02:50 localhost systemd[1]: tmp-crun.ivx09E.mount: Deactivated successfully. 
Dec 5 04:02:50 localhost podman[101040]: 2025-12-05 09:02:50.594832525 +0000 UTC m=+0.481336118 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute) Dec 5 04:02:50 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:02:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:02:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:02:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:02:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
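[Editor's note] The "m=+0.481336118" fields are podman's monotonic offset since the healthcheck process started, so the gap between a health_status event and its matching exec_died event approximates the check's runtime: for nova_migration_target above, 0.481336118 - 0.078951829, about 0.40 s. A sketch that pairs the two events by podman PID, under the assumption that each healthcheck process logs exactly one pair:

import re

OFFSET = re.compile(
    r'podman\[(\d+)\]: \S+ [\d:.]+ \+0000 UTC m=\+([\d.]+) '
    r'container (health_status|exec_died)'
)

def check_runtimes(lines):
    started = {}
    for line in lines:
        for m in OFFSET.finditer(line):
            pid, off, event = m.group(1), float(m.group(2)), m.group(3)
            if event == 'health_status':
                started[pid] = off
            elif pid in started:
                yield pid, off - started.pop(pid)

demo = [
    'podman[101040]: 2025-12-05 09:02:50.192448196 +0000 UTC m=+0.078951829 container health_status ...',
    'podman[101040]: 2025-12-05 09:02:50.594832525 +0000 UTC m=+0.481336118 container exec_died ...',
]
print(list(check_runtimes(demo)))  # -> [('101040', ~0.402)]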
Dec 5 04:02:55 localhost podman[101066]: 2025-12-05 09:02:55.236119243 +0000 UTC m=+0.119316139 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 5 04:02:55 localhost podman[101072]: 2025-12-05 09:02:55.212914366 +0000 UTC m=+0.085517589 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible) Dec 5 04:02:55 localhost podman[101065]: 2025-12-05 09:02:55.332558383 +0000 UTC m=+0.213532521 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, 
name=rhosp17/openstack-iscsid, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.openshift.expose-services=) Dec 5 04:02:55 localhost podman[101066]: 2025-12-05 09:02:55.341899539 +0000 UTC m=+0.225096465 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:02:55 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 04:02:55 localhost podman[101064]: 2025-12-05 09:02:55.192623497 +0000 UTC m=+0.082169747 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12) Dec 5 04:02:55 localhost podman[101064]: 2025-12-05 09:02:55.427036864 +0000 UTC m=+0.316583144 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044) Dec 5 04:02:55 localhost podman[101064]: unhealthy Dec 5 04:02:55 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:02:55 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
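[Editor's note] The failure path is spelled out above for ovn_metadata_agent: the check script fails, podman prints "unhealthy" and exits non-zero, and systemd records the transient unit as "status=1/FAILURE ... Failed with result 'exit-code'". A minimal sketch of the same invocation from Python; "podman healthcheck run" is the real CLI these units call, while the container name here is just an example from this journal.

import subprocess

def run_check(container):
    proc = subprocess.run(
        ['podman', 'healthcheck', 'run', container],
        capture_output=True, text=True)
    # rc 0 -> healthy; non-zero -> unhealthy, mirroring the
    # "status=1/FAILURE" systemd records above.
    return 'healthy' if proc.returncode == 0 else 'unhealthy'

if __name__ == '__main__':
    print(run_check('ovn_metadata_agent'))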
Dec 5 04:02:55 localhost podman[101065]: 2025-12-05 09:02:55.445753815 +0000 UTC m=+0.326727963 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1) Dec 5 04:02:55 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 04:02:55 localhost podman[101072]: 2025-12-05 09:02:55.498755472 +0000 UTC m=+0.371358755 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, build-date=2025-11-18T23:34:05Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 04:02:55 localhost podman[101072]: unhealthy Dec 5 04:02:55 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:02:55 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:02:56 localhost systemd[1]: tmp-crun.6qnpUz.mount: Deactivated successfully. Dec 5 04:02:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:02:59 localhost systemd[1]: tmp-crun.deFEn5.mount: Deactivated successfully. 
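[Editor's note] Rather than grepping the journal, the current verdict can also be read back from podman itself. A sketch, hedged because the .State JSON layout has varied across podman versions (Health vs Healthcheck), so it probes both keys instead of hard-coding one; the container names are the two that went unhealthy above.

import json, subprocess

def health_of(container):
    out = subprocess.check_output(
        ['podman', 'inspect', '--format', '{{json .State}}', container])
    state = json.loads(out)
    health = state.get('Health') or state.get('Healthcheck') or {}
    return health.get('Status', 'unknown')

if __name__ == '__main__':
    for name in ('ovn_controller', 'ovn_metadata_agent'):
        print(name, health_of(name))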
Dec 5 04:02:59 localhost podman[101140]: 2025-12-05 09:02:59.215988083 +0000 UTC m=+0.101133514 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc.) 
Dec 5 04:02:59 localhost podman[101140]: 2025-12-05 09:02:59.426902784 +0000 UTC m=+0.312048195 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true) Dec 5 04:02:59 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:03:09 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 04:03:09 localhost recover_tripleo_nova_virtqemud[101171]: 61294 Dec 5 04:03:09 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 04:03:09 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 04:03:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
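[Editor's note] Each run can also be graded purely from the systemd records: every "Started /usr/bin/podman healthcheck run <id>" is later closed by either "<id>.service: Deactivated successfully" (pass) or "<id>.service: Failed with result 'exit-code'" (fail), as the ovn records above show. A sketch pairing the two, assuming one record per line:

import re

START = re.compile(r'Started /usr/bin/podman healthcheck run ([0-9a-f]{64})')
END = re.compile(
    r"([0-9a-f]{64})\.service: "
    r"(Deactivated successfully|Failed with result 'exit-code')")

def run_outcomes(lines):
    pending = set()
    for line in lines:
        s = START.search(line)
        if s:
            pending.add(s.group(1))
        e = END.search(line)
        if e and e.group(1) in pending:
            pending.discard(e.group(1))
            yield e.group(1)[:12], ('pass' if e.group(2).startswith('Deactivated')
                                    else 'fail')

demo = ['systemd[1]: Started /usr/bin/podman healthcheck run ' + 'a' * 64 + '.',
        'systemd[1]: ' + 'a' * 64 + ".service: Failed with result 'exit-code'."]
print(list(run_outcomes(demo)))  # [('aaaaaaaaaaaa', 'fail')]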
Dec 5 04:03:18 localhost podman[101172]: 2025-12-05 09:03:18.204178925 +0000 UTC m=+0.089346245 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, container_name=nova_compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:03:18 localhost podman[101172]: 2025-12-05 09:03:18.237836551 +0000 UTC m=+0.123003911 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, 
config_id=tripleo_step5, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, batch=17.1_20251118.1) Dec 5 04:03:18 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:03:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:03:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:03:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 04:03:20 localhost podman[101200]: 2025-12-05 09:03:20.198872765 +0000 UTC m=+0.073119030 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 5 04:03:20 localhost podman[101200]: 2025-12-05 09:03:20.25349147 +0000 UTC m=+0.127737495 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4) Dec 5 04:03:20 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
Dec 5 04:03:20 localhost podman[101199]: 2025-12-05 09:03:20.259348129 +0000 UTC m=+0.136248545 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, config_id=tripleo_step4, tcib_managed=true, vcs-type=git) Dec 5 04:03:20 localhost systemd[1]: tmp-crun.lNPQ0L.mount: Deactivated successfully. 
Dec 5 04:03:20 localhost podman[101198]: 2025-12-05 09:03:20.332434407 +0000 UTC m=+0.212254912 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team) Dec 5 04:03:20 localhost podman[101199]: 2025-12-05 09:03:20.339277676 +0000 UTC m=+0.216178072 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, container_name=logrotate_crond, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron) Dec 5 04:03:20 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 04:03:20 localhost podman[101198]: 2025-12-05 09:03:20.364756283 +0000 UTC m=+0.244576868 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, release=1761123044, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1) Dec 5 04:03:20 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 04:03:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 04:03:21 localhost podman[101272]: 2025-12-05 09:03:21.196888906 +0000 UTC m=+0.082710243 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 5 04:03:21 localhost podman[101272]: 2025-12-05 09:03:21.58666672 +0000 UTC m=+0.472488127 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, url=https://www.redhat.com, 
konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Dec 5 04:03:21 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:03:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:03:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:03:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:03:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 04:03:26 localhost podman[101294]: 2025-12-05 09:03:26.20829335 +0000 UTC m=+0.094994838 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 04:03:26 localhost podman[101295]: 2025-12-05 09:03:26.26472243 +0000 UTC m=+0.144340421 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T23:44:13Z, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044) Dec 5 04:03:26 localhost podman[101295]: 2025-12-05 09:03:26.301616275 +0000 UTC m=+0.181234206 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, url=https://www.redhat.com, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64) Dec 5 04:03:26 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:03:26 localhost podman[101296]: 2025-12-05 09:03:26.320193582 +0000 UTC m=+0.196296406 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, name=rhosp17/openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vendor=Red Hat, Inc.) Dec 5 04:03:26 localhost podman[101296]: 2025-12-05 09:03:26.329046542 +0000 UTC m=+0.205149436 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, container_name=collectd, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 04:03:26 localhost podman[101294]: 2025-12-05 09:03:26.336069156 +0000 UTC m=+0.222770604 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com) Dec 5 04:03:26 localhost podman[101294]: unhealthy Dec 5 04:03:26 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 04:03:26 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:03:26 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
Dec 5 04:03:26 localhost podman[101302]: 2025-12-05 09:03:26.231658572 +0000 UTC m=+0.100994810 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=ovn_controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team) Dec 5 04:03:26 localhost podman[101302]: 2025-12-05 09:03:26.415629272 +0000 UTC m=+0.284965540 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:03:26 localhost podman[101302]: unhealthy Dec 5 04:03:26 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:03:26 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:03:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:03:30 localhost podman[101374]: 2025-12-05 09:03:30.212970406 +0000 UTC m=+0.087610392 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1) Dec 5 04:03:30 localhost podman[101374]: 2025-12-05 09:03:30.406752705 +0000 UTC m=+0.281392711 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, config_id=tripleo_step1, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr) Dec 5 04:03:30 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:03:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 04:03:49 localhost podman[101477]: 2025-12-05 09:03:49.203409167 +0000 UTC m=+0.086766007 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:03:49 localhost podman[101477]: 2025-12-05 09:03:49.260071165 +0000 UTC m=+0.143428035 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat 
OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, container_name=nova_compute, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container) Dec 5 04:03:49 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:03:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:03:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:03:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 04:03:51 localhost podman[101504]: 2025-12-05 09:03:51.210348471 +0000 UTC m=+0.088556572 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, architecture=x86_64, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute) Dec 5 04:03:51 localhost podman[101505]: 2025-12-05 09:03:51.261707047 +0000 UTC m=+0.137147003 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-cron, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat 
OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, release=1761123044) Dec 5 04:03:51 localhost podman[101504]: 2025-12-05 09:03:51.273852987 +0000 UTC m=+0.152061078 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-19T00:11:48Z, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_compute, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 04:03:51 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 04:03:51 localhost podman[101505]: 2025-12-05 09:03:51.301689786 +0000 UTC m=+0.177129732 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4) Dec 5 04:03:51 localhost systemd[1]: 
808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:03:51 localhost podman[101506]: 2025-12-05 09:03:51.37658978 +0000 UTC m=+0.247180498 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, vcs-type=git, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public) Dec 5 04:03:51 localhost podman[101506]: 2025-12-05 09:03:51.406003367 +0000 UTC m=+0.276594005 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 04:03:51 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 04:03:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
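
Each event embeds the container's full TripleO definition in its config_data label. The payload is rendered as a Python-literal dict (single quotes, True/False), so it can be recovered with ast.literal_eval once the brace-balanced span is isolated; the healthcheck test entry is the command the transient unit executes inside the container (e.g. /openstack/healthcheck for ceilometer_agent_ipmi, /openstack/healthcheck 5672 for nova_compute). A sketch under that assumption:

    import ast

    def parse_config_data(event_text):
        """Extract and parse the config_data={...} payload from one event line.

        The payload is valid Python-literal syntax, so ast.literal_eval can
        parse it directly once the balanced-brace span is isolated.
        """
        start = event_text.index("config_data=") + len("config_data=")
        depth = 0
        for i in range(start, len(event_text)):
            if event_text[i] == "{":
                depth += 1
            elif event_text[i] == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(event_text[start:i + 1])
        raise ValueError("unbalanced config_data payload")

    # e.g. parse_config_data(line)['healthcheck']['test'] -> '/openstack/healthcheck'
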
Dec 5 04:03:52 localhost podman[101577]: 2025-12-05 09:03:52.198521442 +0000 UTC m=+0.084143847 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:03:52 localhost podman[101577]: 2025-12-05 09:03:52.648855112 +0000 UTC m=+0.534477527 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:03:52 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:03:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:03:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:03:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:03:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
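
The config_id and start_order labels record where each container sits in the TripleO deployment sequence: metrics_qdr at tripleo_step1, iscsid and collectd at tripleo_step3, the agents at tripleo_step4, and nova_compute at tripleo_step5. A short sketch that groups the containers seen in a log export by step (regexes again keyed to the label format above):

    import re
    from collections import defaultdict

    STEP_RE = re.compile(r"config_id=(tripleo_step\d+)")
    NAME_RE = re.compile(r"container_name=([^,)]+)")

    def containers_by_step(lines):
        """Map each tripleo_step label to the container names seen with it."""
        steps = defaultdict(set)
        for line in lines:
            step, name = STEP_RE.search(line), NAME_RE.search(line)
            if step and name:
                steps[step.group(1)].add(name.group(1))
        return dict(steps)
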
Dec 5 04:03:57 localhost podman[101602]: 2025-12-05 09:03:57.202197669 +0000 UTC m=+0.079040071 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd) Dec 5 04:03:57 localhost podman[101600]: 2025-12-05 09:03:57.262502988 +0000 UTC m=+0.142384673 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:03:57 localhost systemd[1]: tmp-crun.vvdMKw.mount: Deactivated successfully. 
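
The volume lists above use the usual podman bind-mount suffixes: :ro for read-only, :z for a shared SELinux relabel, and shared for bind propagation (e.g. /run/openvswitch:/run/openvswitch:shared,z); ovn_metadata_agent also bind-mounts its haproxy wrapper script over /usr/local/bin/haproxy. A small helper for picking these specs apart (format taken from the entries above):

    def split_mount(spec):
        """Split a 'src:dst[:flags]' volume spec into (src, dst, [flags])."""
        src, dst, *rest = spec.split(":")
        flags = rest[0].split(",") if rest else []
        return src, dst, flags

    # split_mount('/run/openvswitch:/run/openvswitch:shared,z')
    #   -> ('/run/openvswitch', '/run/openvswitch', ['shared', 'z'])
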
Dec 5 04:03:57 localhost podman[101601]: 2025-12-05 09:03:57.328225612 +0000 UTC m=+0.205251180 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, release=1761123044, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 04:03:57 localhost podman[101600]: 2025-12-05 09:03:57.350227123 +0000 UTC m=+0.230108808 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 5 04:03:57 localhost podman[101600]: unhealthy Dec 5 04:03:57 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:03:57 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
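
This is the failure path of the same mechanism: the check inside ovn_metadata_agent failed, so podman recorded health_status=unhealthy, printed 'unhealthy', and exited non-zero, and systemd marks the transient unit as status=1/FAILURE. Note that only the healthcheck unit fails; the container keeps running, and the restart=always policy in config_data applies to the container process, not to failed probes. A sketch of the same invocation the unit performs, assuming podman is available at the logged path:

    import subprocess

    def run_healthcheck(container_id):
        """Run the container's healthcheck exactly as the systemd unit does.

        A passing check returns 0; a failing one prints 'unhealthy' and
        exits non-zero, which is what systemd reports as 1/FAILURE above.
        """
        proc = subprocess.run(
            ["/usr/bin/podman", "healthcheck", "run", container_id],
            capture_output=True, text=True,
        )
        return proc.returncode == 0
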
Dec 5 04:03:57 localhost podman[101601]: 2025-12-05 09:03:57.416350279 +0000 UTC m=+0.293375857 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, version=17.1.12) Dec 5 04:03:57 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
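
Every event also repeats the image's provenance labels: vcs-ref and org.opencontainers.image.revision carry the build's git commit, baseimage pins the RHEL 9.2 ELS base by digest, and version/release identify the 17.1.12 build. The same label set can be read back from a running container; a sketch assuming podman's inspect Go-template support:

    import json
    import subprocess

    def container_labels(name):
        """Fetch a running container's label set, as embedded in the events above."""
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{ json .Config.Labels }}", name],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    # e.g. container_labels('iscsid')['vcs-ref']
    #   -> '5714445d3136fb8f8cd5e0726e4e3e709c68ad0d'
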
Dec 5 04:03:57 localhost podman[101602]: 2025-12-05 09:03:57.440175616 +0000 UTC m=+0.317018018 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vcs-type=git, managed_by=tripleo_ansible) Dec 5 04:03:57 localhost podman[101603]: 2025-12-05 09:03:57.421554358 +0000 UTC m=+0.294583373 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64) Dec 5 04:03:57 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 04:03:57 localhost podman[101603]: 2025-12-05 09:03:57.507674864 +0000 UTC m=+0.380703809 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, batch=17.1_20251118.1) Dec 5 04:03:57 localhost podman[101603]: unhealthy Dec 5 04:03:57 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:03:57 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:04:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:04:01 localhost podman[101680]: 2025-12-05 09:04:01.20438371 +0000 UTC m=+0.090069577 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, url=https://www.redhat.com, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 04:04:01 localhost podman[101680]: 2025-12-05 09:04:01.427552095 +0000 UTC m=+0.313237932 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, 
name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, config_id=tripleo_step1) Dec 5 04:04:01 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:04:19 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 04:04:19 localhost recover_tripleo_nova_virtqemud[101711]: 61294 Dec 5 04:04:19 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 04:04:19 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 04:04:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
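
Between healthcheck passes, the periodic "Check and recover tripleo_nova_virtqemud" unit fires and deactivates successfully, i.e. no recovery action was needed (the logged 61294 appears to be the PID the check found for the running virtqemud; that reading is an inference, not stated in the log). Whether a oneshot unit like this last succeeded can also be read from systemd directly; a sketch:

    import subprocess

    def unit_result(unit):
        """Return systemd's Result property: 'success', or e.g. 'exit-code'."""
        return subprocess.run(
            ["systemctl", "show", "-p", "Result", "--value", unit],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    # e.g. unit_result('tripleo_nova_virtqemud_recover.service') -> 'success'
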
Dec 5 04:04:20 localhost podman[101712]: 2025-12-05 09:04:20.180941788 +0000 UTC m=+0.069262422 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1761123044, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:04:20 localhost podman[101712]: 2025-12-05 09:04:20.210626073 +0000 UTC m=+0.098946777 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12) Dec 5 04:04:20 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:04:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:04:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:04:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
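
The nova_compute check that ran at 09:03:49 UTC above runs again here at 09:04:20 UTC, roughly 31 seconds later, consistent with a ~30-second healthcheck interval plus scheduling jitter (the interval itself is not recorded in this log, so that is an inference). The gap, computed from the two event timestamps:

    from datetime import datetime

    # Timestamps copied from the two nova_compute health_status events,
    # truncated from nanoseconds to the microseconds datetime accepts.
    t1 = datetime.fromisoformat("2025-12-05 09:03:49.203409")
    t2 = datetime.fromisoformat("2025-12-05 09:04:20.180941")
    print((t2 - t1).total_seconds())  # ~30.98 s between consecutive checks
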
Dec 5 04:04:22 localhost systemd[1]: tmp-crun.IQglQq.mount: Deactivated successfully.
Dec 5 04:04:22 localhost systemd[1]: tmp-crun.Qx2MW7.mount: Deactivated successfully.
Dec 5 04:04:22 localhost podman[101740]: 2025-12-05 09:04:22.260658582 +0000 UTC m=+0.136380300 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1)
Dec 5 04:04:22 localhost podman[101738]: 2025-12-05 09:04:22.277358881 +0000 UTC m=+0.156754851 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, architecture=x86_64, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Dec 5 04:04:22 localhost podman[101739]: 2025-12-05 09:04:22.237544066 +0000 UTC m=+0.113320425 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z)
Dec 5 04:04:22 localhost podman[101738]: 2025-12-05 09:04:22.304691494 +0000 UTC m=+0.184087504 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_compute, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 5 04:04:22 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully.
Dec 5 04:04:22 localhost podman[101739]: 2025-12-05 09:04:22.319439644 +0000 UTC m=+0.195215953 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T22:49:32Z)
Dec 5 04:04:22 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully.
Dec 5 04:04:22 localhost podman[101740]: 2025-12-05 09:04:22.340664101 +0000 UTC m=+0.216385779 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, version=17.1.12, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible)
Dec 5 04:04:22 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully.
Dec 5 04:04:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.
Dec 5 04:04:23 localhost podman[101811]: 2025-12-05 09:04:23.192055671 +0000 UTC m=+0.078185386 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, container_name=nova_migration_target, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 04:04:23 localhost podman[101811]: 2025-12-05 09:04:23.576906706 +0000 UTC m=+0.463036431 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, tcib_managed=true)
Dec 5 04:04:23 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully.
Dec 5 04:04:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.
Dec 5 04:04:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.
Dec 5 04:04:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.
Dec 5 04:04:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.
Dec 5 04:04:28 localhost podman[101834]: 2025-12-05 09:04:28.20888456 +0000 UTC m=+0.083518527 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, distribution-scope=public, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, build-date=2025-11-18T23:44:13Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64)
Dec 5 04:04:28 localhost podman[101834]: 2025-12-05 09:04:28.218655478 +0000 UTC m=+0.093289505 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, architecture=x86_64, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 5 04:04:28 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully.
Dec 5 04:04:28 localhost podman[101833]: 2025-12-05 09:04:28.260353019 +0000 UTC m=+0.137653578 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 5 04:04:28 localhost podman[101833]: 2025-12-05 09:04:28.277054389 +0000 UTC m=+0.154354968 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, batch=17.1_20251118.1)
Dec 5 04:04:28 localhost podman[101833]: unhealthy
Dec 5 04:04:28 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:04:28 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'.
Dec 5 04:04:28 localhost podman[101835]: 2025-12-05 09:04:28.31841187 +0000 UTC m=+0.189072586 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 04:04:28 localhost podman[101835]: 2025-12-05 09:04:28.330663903 +0000 UTC m=+0.201324659 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 5 04:04:28 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully.
Dec 5 04:04:28 localhost systemd[1]: tmp-crun.QNJ5Ev.mount: Deactivated successfully.
Dec 5 04:04:28 localhost podman[101836]: 2025-12-05 09:04:28.385558937 +0000 UTC m=+0.253752278 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z)
Dec 5 04:04:28 localhost podman[101836]: 2025-12-05 09:04:28.426556298 +0000 UTC m=+0.294749699 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 5 04:04:28 localhost podman[101836]: unhealthy
Dec 5 04:04:28 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:04:28 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'.
Dec 5 04:04:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 04:04:32 localhost podman[101912]: 2025-12-05 09:04:32.188587216 +0000 UTC m=+0.080256479 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12)
Dec 5 04:04:32 localhost podman[101912]: 2025-12-05 09:04:32.398648171 +0000 UTC m=+0.290317374 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.)
Dec 5 04:04:32 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 04:04:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.
Dec 5 04:04:51 localhost podman[102067]: 2025-12-05 09:04:51.208705053 +0000 UTC m=+0.095984598 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, version=17.1.12, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 5 04:04:51 localhost podman[102067]: 2025-12-05 09:04:51.240586475 +0000 UTC m=+0.127866020 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true)
Dec 5 04:04:51 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully.
Dec 5 04:04:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.
Dec 5 04:04:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 04:04:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
Dec 5 04:04:53 localhost podman[102092]: 2025-12-05 09:04:53.199468993 +0000 UTC m=+0.084297721 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=logrotate_crond, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64, version=17.1.12)
Dec 5 04:04:53 localhost podman[102092]: 2025-12-05 09:04:53.206701443 +0000 UTC m=+0.091530181 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 04:04:53 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully.
Dec 5 04:04:53 localhost podman[102091]: 2025-12-05 09:04:53.263022271 +0000 UTC m=+0.148625243 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, release=1761123044, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net':
'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 5 04:04:53 localhost podman[102093]: 2025-12-05 09:04:53.215881934 +0000 UTC m=+0.093352098 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com) Dec 5 04:04:53 localhost podman[102091]: 2025-12-05 09:04:53.289496778 +0000 UTC m=+0.175099730 container exec_died 
21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64) Dec 5 04:04:53 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 04:04:53 localhost podman[102093]: 2025-12-05 09:04:53.34497429 +0000 UTC m=+0.222444464 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 04:04:53 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 04:04:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 04:04:54 localhost podman[102160]: 2025-12-05 09:04:54.163735655 +0000 UTC m=+0.057996650 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, release=1761123044, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:04:54 localhost systemd[1]: tmp-crun.GsvuJE.mount: Deactivated successfully. 
Dec 5 04:04:54 localhost podman[102160]: 2025-12-05 09:04:54.546165705 +0000 UTC m=+0.440426710 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 5 04:04:54 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:04:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:04:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:04:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:04:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:04:59 localhost systemd[1]: tmp-crun.W5ty1d.mount: Deactivated successfully. 
Dec 5 04:04:59 localhost podman[102188]: 2025-12-05 09:04:59.20703639 +0000 UTC m=+0.074879474 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, container_name=ovn_controller, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:04:59 localhost podman[102185]: 2025-12-05 09:04:59.254573369 +0000 UTC m=+0.131818010 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T22:51:28Z, container_name=collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 04:04:59 localhost podman[102185]: 2025-12-05 09:04:59.267741611 +0000 UTC m=+0.144986242 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-18T22:51:28Z, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Dec 5 04:04:59 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 04:04:59 localhost podman[102183]: 2025-12-05 09:04:59.317673714 +0000 UTC m=+0.197899336 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, architecture=x86_64, container_name=ovn_metadata_agent, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, vcs-type=git) Dec 5 04:04:59 localhost podman[102188]: 2025-12-05 09:04:59.322986376 +0000 UTC m=+0.190829530 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 04:04:59 localhost podman[102188]: unhealthy Dec 5 04:04:59 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:04:59 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. 
Dec 5 04:04:59 localhost podman[102184]: 2025-12-05 09:04:59.372239817 +0000 UTC m=+0.249477728 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, container_name=iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, config_id=tripleo_step3) Dec 5 04:04:59 localhost podman[102184]: 2025-12-05 09:04:59.382516721 +0000 UTC m=+0.259754662 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, vcs-type=git, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step3, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:04:59 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:04:59 localhost podman[102183]: 2025-12-05 09:04:59.435767235 +0000 UTC m=+0.315992837 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) Dec 5 04:04:59 localhost podman[102183]: unhealthy Dec 5 04:04:59 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:04:59 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:05:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:05:03 localhost podman[102264]: 2025-12-05 09:05:03.211694108 +0000 UTC m=+0.097594497 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, vcs-type=git, container_name=metrics_qdr, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Dec 5 04:05:03 localhost podman[102264]: 2025-12-05 09:05:03.388688954 +0000 UTC m=+0.274589323 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:05:03 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
Dec 5 04:05:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 04:05:22 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 04:05:22 localhost recover_tripleo_nova_virtqemud[102301]: 61294 Dec 5 04:05:22 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 04:05:22 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 04:05:22 localhost systemd[1]: tmp-crun.2TLE2N.mount: Deactivated successfully. Dec 5 04:05:22 localhost podman[102293]: 2025-12-05 09:05:22.204229742 +0000 UTC m=+0.090681456 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, release=1761123044, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 
nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, architecture=x86_64) Dec 5 04:05:22 localhost podman[102293]: 2025-12-05 09:05:22.258424925 +0000 UTC m=+0.144876649 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., container_name=nova_compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:05:22 localhost 
systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:05:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:05:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:05:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 04:05:24 localhost systemd[1]: tmp-crun.mkUqSP.mount: Deactivated successfully. Dec 5 04:05:24 localhost podman[102323]: 2025-12-05 09:05:24.217168508 +0000 UTC m=+0.092229583 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Dec 5 04:05:24 localhost podman[102321]: 2025-12-05 09:05:24.271960079 +0000 UTC m=+0.155866034 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, 
tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:05:24 localhost podman[102322]: 2025-12-05 09:05:24.19100292 +0000 UTC m=+0.074963137 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container) Dec 5 04:05:24 localhost podman[102321]: 2025-12-05 09:05:24.302174059 +0000 UTC m=+0.186080014 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:05:24 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. Dec 5 04:05:24 localhost podman[102322]: 2025-12-05 09:05:24.323934473 +0000 UTC m=+0.207894710 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:05:24 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
Dec 5 04:05:24 localhost podman[102323]: 2025-12-05 09:05:24.375801385 +0000 UTC m=+0.250862460 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 5 04:05:24 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 04:05:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 04:05:25 localhost systemd[1]: tmp-crun.7hoR9i.mount: Deactivated successfully. 
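The healthcheck entries above all follow the same shape: a transient systemd unit runs /usr/bin/podman healthcheck run <container-id>, podman logs a container health_status event whose parenthesized fields begin with image=, name= and health_status=, then an exec_died event, and systemd deactivates the unit. A minimal sketch of tallying those health_status events per container, assuming each journal entry occupies a single line in a local file (the path "messages" is a placeholder, not taken from the log):

    import re
    from collections import Counter

    # Matches the start of podman's health_status label dump, e.g.
    # "container health_status 34a5cf22... (image=...:17.1, name=nova_compute, health_status=healthy, ..."
    EVENT = re.compile(
        r"container health_status \w+ "
        r"\(image=[^,]+, name=(?P<name>[^,]+), health_status=(?P<status>\w+)"
    )

    counts = Counter()
    with open("messages") as log:          # placeholder path
        for line in log:
            m = EVENT.search(line)
            if m:
                counts[(m["name"], m["status"])] += 1

    for (name, status), n in sorted(counts.items()):
        print(f"{name:30s} {status:10s} {n:3d}")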
Dec 5 04:05:25 localhost podman[102398]: 2025-12-05 09:05:25.218421627 +0000 UTC m=+0.102310700 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 04:05:25 localhost podman[102398]: 2025-12-05 09:05:25.643801327 +0000 UTC m=+0.527690400 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044) Dec 5 04:05:25 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:05:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:05:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:05:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:05:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
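systemd names each of these transient units after the full 64-hex-digit container ID, so the "Started ... healthcheck run" lines above do not say which service is being checked. Assuming access to the host's podman, the IDs can be resolved to the container names that appear later in the label dumps; a sketch:

    import subprocess

    # Two of the container IDs from the "Started ... healthcheck run" entries above.
    ids = [
        "0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e",
        "f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd",
    ]
    for cid in ids:
        name = subprocess.run(
            ["podman", "inspect", "--format", "{{.Name}}", cid],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(cid[:12], "->", name)    # e.g. 0a8784d7ead4 -> ovn_metadata_agent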
Dec 5 04:05:30 localhost podman[102423]: 2025-12-05 09:05:30.201431286 +0000 UTC m=+0.083875549 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Dec 5 04:05:30 localhost podman[102421]: 2025-12-05 09:05:30.263208179 +0000 UTC m=+0.148431787 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 04:05:30 localhost podman[102421]: 2025-12-05 09:05:30.274752291 +0000 UTC m=+0.159975959 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ovn_metadata_agent, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 5 04:05:30 localhost podman[102421]: unhealthy Dec 5 04:05:30 localhost podman[102423]: 2025-12-05 09:05:30.285499959 +0000 UTC m=+0.167944252 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:05:30 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:05:30 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:05:30 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 04:05:30 localhost podman[102422]: 2025-12-05 09:05:30.382102105 +0000 UTC m=+0.264473236 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-iscsid, container_name=iscsid, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container) Dec 5 04:05:30 localhost podman[102422]: 2025-12-05 09:05:30.393029158 +0000 UTC m=+0.275400319 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=) Dec 5 04:05:30 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
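The ovn_metadata_agent check above came back unhealthy and its transient unit exited status=1/FAILURE; per the config_data in the same entry, the check itself is just /openstack/healthcheck inside the container. podman healthcheck run, the exact command the transient units above wrap, exits 0 for healthy and non-zero otherwise, so the failing check can be retried by hand; a sketch, assuming root on this host:

    import subprocess

    # Container ID of ovn_metadata_agent, taken from the failing unit above.
    cid = "0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e"
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    print("healthy" if rc == 0 else f"unhealthy (exit status {rc})")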
Dec 5 04:05:30 localhost podman[102424]: 2025-12-05 09:05:30.47117606 +0000 UTC m=+0.346651380 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, container_name=ovn_controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12) Dec 5 04:05:30 localhost podman[102424]: 2025-12-05 09:05:30.51576333 +0000 UTC m=+0.391238600 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=ovn_controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1) Dec 5 04:05:30 localhost podman[102424]: unhealthy Dec 5 04:05:30 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:05:30 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:05:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:05:34 localhost podman[102496]: 2025-12-05 09:05:34.195850851 +0000 UTC m=+0.083776225 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 04:05:34 localhost podman[102496]: 2025-12-05 09:05:34.405731981 +0000 UTC m=+0.293657375 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible) Dec 5 04:05:34 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:05:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
Dec 5 04:05:53 localhost podman[102603]: 2025-12-05 09:05:53.186625911 +0000 UTC m=+0.074937146 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, build-date=2025-11-19T00:36:58Z, container_name=nova_compute) Dec 5 04:05:53 localhost podman[102603]: 2025-12-05 09:05:53.22496594 +0000 UTC m=+0.113277175 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, 
version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 5 04:05:53 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:05:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:05:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:05:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
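Each check appears as its own 64-hex-digit .service unit, and the units come and go as podman's healthcheck timers fire. Listing whichever of them systemd currently knows about is one way to watch this from the host side; a sketch, assuming systemctl is available:

    import re
    import subprocess

    out = subprocess.run(
        ["systemctl", "list-units", "--type=service", "--all", "--no-legend", "--plain"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if not line.strip():
            continue
        unit = line.split()[0]
        # Transient healthcheck units are named "<64-hex-digit container ID>.service".
        if re.fullmatch(r"[0-9a-f]{64}\.service", unit):
            print(line)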
Dec 5 04:05:55 localhost podman[102629]: 2025-12-05 09:05:55.19539072 +0000 UTC m=+0.080840976 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team) Dec 5 04:05:55 localhost podman[102629]: 2025-12-05 09:05:55.232700187 +0000 UTC m=+0.118150373 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute) Dec 5 04:05:55 localhost systemd[1]: tmp-crun.Y5jZFS.mount: Deactivated successfully. Dec 5 04:05:55 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 04:05:55 localhost podman[102630]: 2025-12-05 09:05:55.247012525 +0000 UTC m=+0.129745848 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:05:55 localhost systemd[1]: tmp-crun.nAoxw7.mount: Deactivated successfully. 
Dec 5 04:05:55 localhost podman[102631]: 2025-12-05 09:05:55.307487168 +0000 UTC m=+0.184199457 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:05:55 localhost podman[102631]: 2025-12-05 09:05:55.330631813 +0000 UTC m=+0.207344082 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 04:05:55 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 04:05:55 localhost podman[102630]: 2025-12-05 09:05:55.380879355 +0000 UTC m=+0.263612688 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20251118.1, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:05:55 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:05:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 04:05:56 localhost podman[102701]: 2025-12-05 09:05:56.23174732 +0000 UTC m=+0.113877254 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, 
io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, version=17.1.12) Dec 5 04:05:56 localhost podman[102701]: 2025-12-05 09:05:56.598795051 +0000 UTC m=+0.480924995 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1761123044) Dec 5 04:05:56 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:06:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:06:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:06:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:06:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:06:01 localhost systemd[1]: tmp-crun.osIXFd.mount: Deactivated successfully. 
Dec 5 04:06:01 localhost podman[102724]: 2025-12-05 09:06:01.267755643 +0000 UTC m=+0.153054959 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:14:25Z, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public) Dec 5 04:06:01 localhost podman[102726]: 2025-12-05 09:06:01.334773816 +0000 UTC m=+0.208460138 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 
collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, release=1761123044, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:06:01 localhost podman[102726]: 2025-12-05 09:06:01.371710552 +0000 UTC m=+0.245396924 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 04:06:01 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 04:06:01 localhost podman[102732]: 2025-12-05 09:06:01.384693038 +0000 UTC m=+0.256374968 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, 
io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container) Dec 5 04:06:01 localhost podman[102724]: 2025-12-05 09:06:01.401591574 +0000 UTC m=+0.286890880 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible) Dec 5 04:06:01 localhost podman[102724]: unhealthy Dec 5 04:06:01 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, 
status=1/FAILURE Dec 5 04:06:01 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:06:01 localhost podman[102732]: 2025-12-05 09:06:01.425196463 +0000 UTC m=+0.296878313 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:06:01 localhost podman[102732]: unhealthy Dec 5 04:06:01 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:06:01 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. 
Dec 5 04:06:01 localhost podman[102725]: 2025-12-05 09:06:01.307178585 +0000 UTC m=+0.189365686 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, version=17.1.12, container_name=iscsid, managed_by=tripleo_ansible) Dec 5 04:06:01 localhost podman[102725]: 2025-12-05 09:06:01.542657865 +0000 UTC m=+0.424845026 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, tcib_managed=true) Dec 5 04:06:01 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:06:02 localhost systemd[1]: tmp-crun.EnsT8F.mount: Deactivated successfully. Dec 5 04:06:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 04:06:05 localhost podman[102802]: 2025-12-05 09:06:05.19305253 +0000 UTC m=+0.080478165 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 04:06:05 localhost podman[102802]: 2025-12-05 09:06:05.437376389 +0000 UTC m=+0.324801974 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, container_name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, version=17.1.12, config_id=tripleo_step1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 5 04:06:05 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:06:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 04:06:24 localhost systemd[1]: tmp-crun.UdAPu0.mount: Deactivated successfully. 
Dec 5 04:06:24 localhost podman[102831]: 2025-12-05 09:06:24.213710333 +0000 UTC m=+0.098814215 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, container_name=nova_compute, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, url=https://www.redhat.com) Dec 5 04:06:24 localhost podman[102831]: 2025-12-05 09:06:24.248828253 +0000 UTC m=+0.133932175 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 5 04:06:24 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:06:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:06:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:06:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
Dec 5 04:06:26 localhost podman[102857]: 2025-12-05 09:06:26.210343362 +0000 UTC m=+0.098109653 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 5 04:06:26 localhost podman[102857]: 2025-12-05 09:06:26.263499502 +0000 UTC m=+0.151265743 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, 
vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64) Dec 5 04:06:26 localhost systemd[1]: tmp-crun.ukhMB1.mount: Deactivated successfully. Dec 5 04:06:26 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
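
The config_data=... blob repeated in every event is a Python-literal dict (single quotes, True/False), so once it is cut out of the line it can be parsed without a bespoke parser. A sketch using ast.literal_eval on an abbreviated copy of the ceilometer_agent_compute entry above:

    import ast

    # Abbreviated from the config_data label logged above.
    config_data = ("{'depends_on': ['tripleo_nova_libvirt.target'], "
                   "'healthcheck': {'test': '/openstack/healthcheck'}, "
                   "'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', "
                   "'net': 'host', 'privileged': False, 'restart': 'always'}")

    cfg = ast.literal_eval(config_data)  # parses literals only; never executes code
    print(cfg["healthcheck"]["test"])    # -> /openstack/healthcheck
    print(cfg["privileged"])             # -> False
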
Dec 5 04:06:26 localhost podman[102858]: 2025-12-05 09:06:26.319151589 +0000 UTC m=+0.203439203 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 04:06:26 localhost podman[102859]: 2025-12-05 09:06:26.275941342 +0000 UTC m=+0.157313917 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, 
vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, version=17.1.12) Dec 5 04:06:26 localhost podman[102858]: 2025-12-05 09:06:26.355488497 +0000 UTC m=+0.239776141 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=logrotate_crond, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron) Dec 5 04:06:26 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:06:26 localhost podman[102859]: 2025-12-05 09:06:26.411231597 +0000 UTC m=+0.292604162 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc.) Dec 5 04:06:26 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. 
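
Note the healthcheck tests that carry a port argument: '/openstack/healthcheck 5672' for nova_compute (5672 is the RabbitMQ/AMQP port) and, further down, '/openstack/healthcheck 6642' for ovn_controller (the OVN southbound DB port). In that mode the TripleO health scripts essentially assert that the service holds an established TCP connection to the given port. A rough Python analogue of that check, scanning /proc/net/tcp (IPv4 only; the real scripts are shell and differ in detail):

    def has_established_conn(port, proc_tcp="/proc/net/tcp"):
        """True if any IPv4 socket has an ESTABLISHED connection to remote `port`."""
        with open(proc_tcp) as f:
            next(f)                                          # skip the header row
            for row in f:
                fields = row.split()
                rem_port = int(fields[2].split(":")[1], 16)  # rem_address is hexip:hexport
                if rem_port == port and fields[3] == "01":   # st 01 == TCP_ESTABLISHED
                    return True
        return False

    if __name__ == "__main__":
        print(has_established_conn(5672))   # is the AMQP connection up?
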
Dec 5 04:06:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 04:06:27 localhost podman[102929]: 2025-12-05 09:06:27.193170159 +0000 UTC m=+0.077539485 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:06:27 localhost podman[102929]: 2025-12-05 09:06:27.534831967 +0000 UTC m=+0.419201343 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64) Dec 5 04:06:27 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:06:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:06:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:06:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:06:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
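
Podman's event timestamps ("2025-12-05 09:06:32.405730565 +0000 UTC m=+0.291999364") carry nanosecond precision, a numeric offset plus a redundant "UTC" zone name, and a Go monotonic-clock reading, none of which datetime.strptime accepts as-is. A normalization sketch, used here to confirm the roughly 30-second spacing between successive checks of the same container (30 s is also podman's default healthcheck interval):

    from datetime import datetime

    def parse_podman_ts(raw):
        """Parse '2025-12-05 09:06:24.213710333 +0000 UTC' into an aware datetime."""
        stamp = raw.rsplit(" UTC", 1)[0]          # drop the redundant zone name
        date, clock, off = stamp.split()
        secs, ns = clock.split(".")
        return datetime.strptime(f"{date} {secs}.{ns[:6]} {off}",  # ns -> microseconds
                                 "%Y-%m-%d %H:%M:%S.%f %z")

    t1 = parse_podman_ts("2025-12-05 09:06:24.213710333 +0000 UTC")  # nova_compute check
    t2 = parse_podman_ts("2025-12-05 09:06:55.205045613 +0000 UTC")  # next nova_compute check
    print((t2 - t1).total_seconds())              # ~31.0 s between health checks
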
Dec 5 04:06:32 localhost podman[102951]: 2025-12-05 09:06:32.405730565 +0000 UTC m=+0.291999364 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, release=1761123044, container_name=ovn_metadata_agent, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 04:06:32 localhost systemd[1]: tmp-crun.ZDrio6.mount: Deactivated successfully. 
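
health_status=unhealthy for ovn_metadata_agent above is the first failing check in this window; a few entries later podman prints the bare word "unhealthy" and exits non-zero, which systemd records as "Main process exited, code=exited, status=1/FAILURE" for the transient unit. Failures are therefore easy to pull out of the stream. A plain-text scanner (field ordering as it appears in these events, where name= precedes health_status=):

    import re
    from collections import Counter

    UNHEALTHY_RE = re.compile(
        r"container health_status [0-9a-f]{64} "
        r"\(image=[^,]+, name=(?P<name>[^,]+), health_status=unhealthy")

    def unhealthy_counts(log_path):
        """Count unhealthy healthcheck results per container name in a syslog file."""
        counts = Counter()
        with open(log_path, errors="replace") as f:
            for line in f:
                m = UNHEALTHY_RE.search(line)
                if m:
                    counts[m["name"]] += 1
        return counts

    # e.g. unhealthy_counts("/var/log/messages")
    # -> Counter({'ovn_metadata_agent': 1, 'ovn_controller': 1}) for this window
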
Dec 5 04:06:32 localhost podman[102952]: 2025-12-05 09:06:32.424102796 +0000 UTC m=+0.302746033 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step3, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12) Dec 5 04:06:32 localhost podman[102951]: 2025-12-05 09:06:32.449782289 +0000 UTC m=+0.336051158 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, container_name=ovn_metadata_agent) Dec 5 04:06:32 localhost podman[102951]: unhealthy Dec 5 04:06:32 localhost podman[102952]: 2025-12-05 09:06:32.463660512 +0000 UTC m=+0.342303819 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:06:32 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:06:32 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:06:32 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:06:32 localhost podman[102953]: 2025-12-05 09:06:32.468393156 +0000 UTC m=+0.345101233 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, container_name=collectd, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 04:06:32 localhost podman[102959]: 2025-12-05 09:06:32.522039111 +0000 UTC m=+0.396043516 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, build-date=2025-11-18T23:34:05Z, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 04:06:32 localhost podman[102959]: 2025-12-05 09:06:32.541599978 +0000 UTC m=+0.415604363 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, release=1761123044) Dec 5 04:06:32 localhost podman[102959]: unhealthy Dec 5 04:06:32 localhost podman[102953]: 2025-12-05 09:06:32.551595263 +0000 UTC m=+0.428303300 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, container_name=collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3) Dec 5 04:06:32 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:06:32 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:06:32 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 04:06:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:06:36 localhost podman[103031]: 2025-12-05 09:06:36.180819064 +0000 UTC m=+0.072246094 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public) Dec 5 04:06:36 localhost podman[103031]: 2025-12-05 09:06:36.441938796 +0000 UTC m=+0.333365836 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, config_id=tripleo_step1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 5 04:06:36 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
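
The config_id labels place each container in a TripleO deployment step (with start_order sequencing containers inside a step), which explains the mix of tripleo_step1 through tripleo_step5 above. Collected from the events in this window:

    # config_id observed per container in this log window.
    STEP_OF = {
        "metrics_qdr": "tripleo_step1",
        "collectd": "tripleo_step3",
        "iscsid": "tripleo_step3",
        "ceilometer_agent_compute": "tripleo_step4",
        "ceilometer_agent_ipmi": "tripleo_step4",
        "logrotate_crond": "tripleo_step4",
        "nova_migration_target": "tripleo_step4",
        "ovn_controller": "tripleo_step4",
        "ovn_metadata_agent": "tripleo_step4",
        "nova_compute": "tripleo_step5",
    }

    for name, step in sorted(STEP_OF.items(), key=lambda kv: (kv[1], kv[0])):
        print(f"{step}: {name}")
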
Dec 5 04:06:38 localhost podman[103160]: 2025-12-05 09:06:38.127540121 +0000 UTC m=+0.084816407 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Guillaume Abrioux , ceph=True, com.redhat.component=rhceph-container) Dec 5 04:06:38 localhost podman[103160]: 2025-12-05 09:06:38.259863236 +0000 UTC m=+0.217139612 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, RELEASE=main, vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:06:49 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 04:06:49 localhost recover_tripleo_nova_virtqemud[103300]: 61294 Dec 5 04:06:49 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 04:06:49 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 04:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 04:06:55 localhost systemd[1]: tmp-crun.408GQU.mount: Deactivated successfully. 
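
The tripleo_nova_virtqemud_recover.service run just logged is a periodic guard for the virtqemud daemon; the bare "61294" it prints is most plausibly the PID its check found. The script itself is not reproduced in this log, so the following is a hypothetical Python reconstruction of that check-and-recover shape, assuming recovery means restarting the managing systemd unit:

    import subprocess

    def check_and_recover(process="virtqemud",
                          unit="tripleo_nova_virtqemud.service"):
        """Print the PID of `process` if it is running; otherwise restart `unit`.

        Hypothetical sketch: only the script's output ('61294') appears in the
        log above, not its contents.
        """
        pgrep = subprocess.run(["pgrep", "-f", process],
                               capture_output=True, text=True)
        if pgrep.returncode == 0:
            print(pgrep.stdout.split()[0])      # e.g. 61294
        else:
            subprocess.run(["systemctl", "restart", unit], check=False)

    if __name__ == "__main__":
        check_and_recover()
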
Dec 5 04:06:55 localhost podman[103301]: 2025-12-05 09:06:55.205045613 +0000 UTC m=+0.082866568 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:06:55 localhost podman[103301]: 2025-12-05 09:06:55.261776143 +0000 UTC m=+0.139597128 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 5 04:06:55 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Deactivated successfully. Dec 5 04:06:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:06:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:06:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. 
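
The journal is one way to read these results; podman also keeps the latest healthcheck outcome in the container's inspect data, which is handier when triaging a node interactively. A sketch via the podman CLI (the State field is spelled Healthcheck on older podman releases and Health on newer, Docker-compatible ones, so both spellings are tried):

    import json
    import subprocess

    def health_of(container):
        """Return the last recorded health status for `container` via podman inspect."""
        out = subprocess.run(["podman", "inspect", container],
                             capture_output=True, text=True, check=True)
        state = json.loads(out.stdout)[0]["State"]
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    # e.g. health_of("nova_compute") -> 'healthy'
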
Dec 5 04:06:57 localhost systemd[1]: tmp-crun.LFXHKQ.mount: Deactivated successfully. Dec 5 04:06:57 localhost podman[103327]: 2025-12-05 09:06:57.204699625 +0000 UTC m=+0.085238010 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64, version=17.1.12) Dec 5 04:06:57 localhost podman[103327]: 2025-12-05 09:06:57.214768292 +0000 UTC m=+0.095306617 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Dec 5 04:06:57 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:06:57 localhost podman[103326]: 2025-12-05 09:06:57.305462707 +0000 UTC m=+0.185623520 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=ceilometer_agent_compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:06:57 localhost podman[103328]: 2025-12-05 09:06:57.354594946 +0000 UTC m=+0.234903984 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 
04:06:57 localhost podman[103326]: 2025-12-05 09:06:57.385642333 +0000 UTC m=+0.265803116 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public) Dec 5 04:06:57 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Deactivated successfully. 
Dec 5 04:06:57 localhost podman[103328]: 2025-12-05 09:06:57.410831981 +0000 UTC m=+0.291140979 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 5 04:06:57 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Deactivated successfully. Dec 5 04:06:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 04:06:58 localhost podman[103398]: 2025-12-05 09:06:58.198046824 +0000 UTC m=+0.079486084 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4) Dec 5 04:06:58 localhost podman[103398]: 2025-12-05 09:06:58.55474474 +0000 UTC m=+0.436184090 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 5 04:06:58 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:07:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:07:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:07:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:07:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 04:07:03 localhost podman[103423]: 2025-12-05 09:07:03.190109519 +0000 UTC m=+0.068097717 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, container_name=collectd, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible) Dec 5 04:07:03 localhost podman[103421]: 2025-12-05 09:07:03.201360422 +0000 UTC m=+0.078063282 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git) Dec 5 04:07:03 localhost podman[103421]: 2025-12-05 09:07:03.214557384 +0000 UTC m=+0.091260224 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 5 04:07:03 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:07:03 localhost podman[103423]: 2025-12-05 09:07:03.229031065 +0000 UTC m=+0.107019263 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, com.redhat.component=openstack-collectd-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 5 04:07:03 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 04:07:03 localhost systemd[1]: tmp-crun.n0srHg.mount: Deactivated successfully. Dec 5 04:07:03 localhost podman[103428]: 2025-12-05 09:07:03.313388287 +0000 UTC m=+0.182180445 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team) Dec 5 04:07:03 localhost podman[103428]: 2025-12-05 09:07:03.347198428 +0000 UTC m=+0.215990526 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Dec 5 04:07:03 localhost podman[103420]: 2025-12-05 09:07:03.292324135 +0000 UTC m=+0.178867244 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12) Dec 5 04:07:03 localhost podman[103428]: unhealthy Dec 5 04:07:03 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:07:03 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:07:03 localhost podman[103420]: 2025-12-05 09:07:03.430764596 +0000 UTC m=+0.317307695 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Dec 5 04:07:03 localhost podman[103420]: unhealthy Dec 5 04:07:03 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:07:03 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:07:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:07:07 localhost podman[103497]: 2025-12-05 09:07:07.205985976 +0000 UTC m=+0.095314877 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, name=rhosp17/openstack-qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, 
konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 5 04:07:07 localhost podman[103497]: 2025-12-05 09:07:07.403627533 +0000 UTC m=+0.292956354 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Dec 5 04:07:07 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:07:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
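The 04:07:03 ovn_controller and ovn_metadata_agent checks above show the failure path: podman prints "unhealthy" and exits non-zero, so the wrapping transient unit ends with status=1/FAILURE. The same contract can be exercised by hand; a sketch, assuming podman is on PATH and relying on its documented exit codes (0 when the check passes, 1 when it fails, other values when podman itself errs):

    import subprocess

    def run_healthcheck(container: str) -> str:
        # Runs the container's own configured test (for ovn_controller that is
        # '/openstack/healthcheck 6642' per the config_data above), which is
        # exactly what the transient units in this log invoke.
        result = subprocess.run(["podman", "healthcheck", "run", container])
        if result.returncode == 0:
            return "healthy"
        if result.returncode == 1:
            return "unhealthy"
        return f"error (exit {result.returncode})"

    print(run_healthcheck("ovn_controller"))  # container name taken from the log above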
Dec 5 04:07:26 localhost podman[103526]: 2025-12-05 09:07:26.208604569 +0000 UTC m=+0.092114610 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, architecture=x86_64) Dec 5 04:07:26 localhost podman[103526]: 2025-12-05 09:07:26.228697882 +0000 UTC m=+0.112207913 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4) Dec 5 04:07:26 localhost podman[103526]: unhealthy Dec 5 04:07:26 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:07:26 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'. Dec 5 04:07:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. 
Dec 5 04:07:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:07:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 04:07:28 localhost systemd[1]: tmp-crun.y7jvat.mount: Deactivated successfully. Dec 5 04:07:28 localhost podman[103549]: 2025-12-05 09:07:28.220037179 +0000 UTC m=+0.094859093 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, distribution-scope=public, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-cron-container) Dec 5 04:07:28 localhost podman[103549]: 2025-12-05 09:07:28.230368394 +0000 UTC m=+0.105190318 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-cron-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4) Dec 5 04:07:28 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. 
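
A clean cycle, by contrast, ends with systemd logging "Deactivated successfully" (exit 0), as logrotate_crond does here. For a fleet-wide view rather than reading one cycle at a time, a sketch using standard podman flags (the health= filter may be missing on very old podman releases):

    # STATUS includes "(healthy)"/"(unhealthy)" for containers that define a check
    podman ps --all --format '{{.Names}}\t{{.Status}}'
    # Only the currently unhealthy ones
    podman ps --filter health=unhealthy --format '{{.Names}}'
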
Dec 5 04:07:28 localhost podman[103550]: 2025-12-05 09:07:28.319992566 +0000 UTC m=+0.192079867 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 04:07:28 localhost podman[103550]: 2025-12-05 09:07:28.361906704 +0000 UTC m=+0.233993986 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, version=17.1.12, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 04:07:28 localhost podman[103550]: unhealthy Dec 5 04:07:28 localhost podman[103548]: 2025-12-05 09:07:28.377590093 +0000 UTC m=+0.254726858 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, architecture=x86_64, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:07:28 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:07:28 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Failed with result 'exit-code'. Dec 5 04:07:28 localhost podman[103548]: 2025-12-05 09:07:28.394255491 +0000 UTC m=+0.271392336 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=ceilometer_agent_compute, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1761123044) Dec 5 04:07:28 localhost podman[103548]: unhealthy Dec 5 04:07:28 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:07:28 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Failed with result 'exit-code'. Dec 5 04:07:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 04:07:29 localhost podman[103603]: 2025-12-05 09:07:29.183103124 +0000 UTC m=+0.071349856 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 5 04:07:29 localhost 
podman[103603]: 2025-12-05 09:07:29.550622961 +0000 UTC m=+0.438869643 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, name=rhosp17/openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com) Dec 5 04:07:29 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:07:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:07:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:07:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:07:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
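
The cadence is visible in the timestamps: nova_compute fires at 04:07:26 and again at 04:07:57, the ceilometer/cron group at 04:07:28 and again at 04:07:59 — roughly podman's default 30-second healthcheck interval plus scheduling jitter. To pull the event times per container out of the log, a rough awk sketch (the /var/log/messages path, and the leftmost name= label being the container name, are assumptions about this particular log format):

    # Print "Mon D HH:MM:SS name=<container>" for each health_status event
    awk '/container health_status/ {
        match($0, /name=[a-z_]+/)
        print $1, $2, $3, substr($0, RSTART, RLENGTH)
    }' /var/log/messages
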
Dec 5 04:07:34 localhost podman[103635]: 2025-12-05 09:07:34.233325592 +0000 UTC m=+0.100492316 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git) Dec 5 04:07:34 localhost podman[103628]: 2025-12-05 09:07:34.198850481 +0000 UTC m=+0.074717920 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, batch=17.1_20251118.1) Dec 5 04:07:34 localhost podman[103635]: 2025-12-05 09:07:34.250879708 +0000 UTC m=+0.118046442 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 04:07:34 localhost podman[103635]: unhealthy Dec 5 04:07:34 localhost systemd[1]: 
f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:07:34 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:07:34 localhost podman[103627]: 2025-12-05 09:07:34.304820942 +0000 UTC m=+0.188620883 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z) Dec 5 04:07:34 localhost podman[103628]: 2025-12-05 09:07:34.38381344 +0000 UTC m=+0.259680939 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, 
name=iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.openshift.expose-services=, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-iscsid-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 5 04:07:34 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
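
Because every check runs as a transient service named after the full container ID, outcomes can be tallied straight from systemd's lines: "Deactivated successfully" is a pass, "Main process exited ... status=1/FAILURE" a fail. A quick count of failures per unit (field $6 assumes the exact syslog field layout shown above):

    # Healthcheck failures per transient unit, most frequent first
    grep "Failed with result 'exit-code'" /var/log/messages \
      | awk '{print $6}' | sort | uniq -c | sort -rn
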
Dec 5 04:07:34 localhost podman[103632]: 2025-12-05 09:07:34.356718594 +0000 UTC m=+0.228161428 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, version=17.1.12, config_id=tripleo_step3, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 5 04:07:34 localhost podman[103632]: 2025-12-05 09:07:34.437335362 +0000 UTC m=+0.308778156 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, name=rhosp17/openstack-collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64) Dec 5 04:07:34 localhost podman[103627]: 2025-12-05 09:07:34.440323433 +0000 UTC m=+0.324123334 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true) Dec 5 04:07:34 localhost podman[103627]: unhealthy Dec 5 04:07:34 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:07:34 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:07:34 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 04:07:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
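
The ID-named units are awkward to read back; mapping one to its container is a single inspect call (the ID below is taken from the failing ovn_metadata_agent entries above):

    # Which container does the failing unit belong to?
    podman inspect --format '{{.Name}}' \
      0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e
    # Or dump the full ID-to-name table once
    podman ps --all --no-trunc --format '{{.ID}} {{.Names}}'
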
Dec 5 04:07:38 localhost podman[103706]: 2025-12-05 09:07:38.201023152 +0000 UTC m=+0.093283416 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, release=1761123044, container_name=metrics_qdr, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z) Dec 5 04:07:38 localhost podman[103706]: 2025-12-05 09:07:38.410808298 +0000 UTC m=+0.303068602 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, release=1761123044, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 5 04:07:38 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:07:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 04:07:57 localhost systemd[1]: tmp-crun.LTlBGW.mount: Deactivated successfully. 
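
Also worth noting: the 04:07:26 failure of 34a5cf22....service did not stop the schedule; the same unit is started again at 04:07:57, because the recurrence comes from a companion transient timer rather than from the service itself (the <container-id>.timer naming is podman's convention, assumed here rather than shown in the log). The tmp-crun.*.mount deactivations appear to be crun's short-lived scratch mounts being cleaned up. To look at the timer and the transient service directly:

    # The recurring schedule lives in a matching transient timer
    systemctl list-timers --all | grep 34a5cf22
    # Transient units have no file on disk, so use 'show' rather than 'cat'
    systemctl show -p ExecStart,Result \
      34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service
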
Dec 5 04:07:57 localhost podman[103814]: 2025-12-05 09:07:57.250382011 +0000 UTC m=+0.097965818 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step5, vcs-type=git, com.redhat.component=openstack-nova-compute-container, version=17.1.12, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc.) 
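
This 04:07:57 nova_compute event repeats the 04:07:26 pattern exactly — healthy in the event labels while the run that follows (below) again prints "unhealthy" and exits 1 — so the failure is persistent, not a one-off. Per config_data, this container's test is '/openstack/healthcheck 5672'; 5672 is the AMQP (RabbitMQ) port, which suggests the script validates the agent's message-bus connection. A way to watch the state live while debugging (same Health vs Healthcheck field-name caveat as noted earlier):

    # Poll the recorded health state every 5s while the ~30s timer keeps firing
    watch -n 5 "podman inspect --format '{{.State.Health.Status}}' nova_compute"
    # Show the log of recent check runs (start/end time, exit code, output)
    podman inspect --format '{{json .State.Health.Log}}' nova_compute
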
Dec 5 04:07:57 localhost podman[103814]: 2025-12-05 09:07:57.297686293 +0000 UTC m=+0.145270060 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 04:07:57 localhost podman[103814]: unhealthy Dec 5 04:07:57 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:07:57 localhost systemd[1]: 
34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'. Dec 5 04:07:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:07:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:07:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 04:07:59 localhost systemd[1]: tmp-crun.uEG63U.mount: Deactivated successfully. Dec 5 04:07:59 localhost systemd[1]: tmp-crun.p1I6D1.mount: Deactivated successfully. Dec 5 04:07:59 localhost podman[103839]: 2025-12-05 09:07:59.256313714 +0000 UTC m=+0.134602596 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1761123044) Dec 5 04:07:59 localhost podman[103839]: 2025-12-05 09:07:59.271480936 +0000 UTC m=+0.149769818 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:07:59 localhost podman[103838]: 2025-12-05 09:07:59.309704912 +0000 UTC m=+0.191397588 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container) Dec 5 04:07:59 localhost podman[103838]: 2025-12-05 09:07:59.323559254 +0000 UTC m=+0.205251900 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com) Dec 5 04:07:59 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:07:59 localhost podman[103839]: unhealthy Dec 5 04:07:59 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:07:59 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Failed with result 'exit-code'. Dec 5 04:07:59 localhost podman[103837]: 2025-12-05 09:07:59.225194244 +0000 UTC m=+0.109572102 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z) Dec 5 04:07:59 localhost podman[103837]: 2025-12-05 09:07:59.458535269 +0000 UTC m=+0.342913097 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 04:07:59 localhost podman[103837]: unhealthy Dec 5 04:07:59 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:07:59 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Failed with result 'exit-code'. Dec 5 04:08:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
Dec 5 04:08:00 localhost podman[103892]: 2025-12-05 09:08:00.202367259 +0000 UTC m=+0.088809018 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 04:08:00 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 04:08:00 localhost recover_tripleo_nova_virtqemud[103915]: 61294 Dec 5 04:08:00 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 04:08:00 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 5 04:08:00 localhost podman[103892]: 2025-12-05 09:08:00.596855138 +0000 UTC m=+0.483296867 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target) Dec 5 04:08:00 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:08:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:08:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:08:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:08:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 04:08:05 localhost podman[103917]: 2025-12-05 09:08:05.2165815 +0000 UTC m=+0.098591447 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Dec 5 04:08:05 localhost podman[103917]: 2025-12-05 09:08:05.233543877 +0000 UTC m=+0.115553844 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 5 04:08:05 localhost podman[103917]: unhealthy Dec 5 04:08:05 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:08:05 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
Dec 5 04:08:05 localhost podman[103919]: 2025-12-05 09:08:05.280154668 +0000 UTC m=+0.149996505 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z) Dec 5 04:08:05 localhost podman[103919]: 2025-12-05 09:08:05.293348191 +0000 UTC m=+0.163189968 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, 
managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 5 04:08:05 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 04:08:05 localhost podman[103918]: 2025-12-05 09:08:05.38909697 +0000 UTC m=+0.263901967 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.12, name=rhosp17/openstack-iscsid) Dec 5 04:08:05 localhost podman[103925]: 2025-12-05 09:08:05.422321524 +0000 UTC m=+0.286603541 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller) Dec 5 04:08:05 localhost podman[103918]: 2025-12-05 09:08:05.449390618 +0000 UTC m=+0.324195605 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, name=rhosp17/openstack-iscsid) Dec 5 04:08:05 localhost podman[103925]: 2025-12-05 09:08:05.461008162 +0000 UTC m=+0.325290179 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_controller) Dec 5 04:08:05 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:08:05 localhost podman[103925]: unhealthy Dec 5 04:08:05 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:08:05 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:08:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
Dec 5 04:08:09 localhost podman[103995]: 2025-12-05 09:08:09.192888433 +0000 UTC m=+0.081208078 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=metrics_qdr, architecture=x86_64, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:08:09 localhost podman[103995]: 2025-12-05 09:08:09.38171393 +0000 UTC m=+0.270033625 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true) Dec 5 04:08:09 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
Dec 5 04:08:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28637 DF PROTO=TCP SPT=36226 DPT=9102 SEQ=1559064704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35B8E60000000001030307) Dec 5 04:08:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28638 DF PROTO=TCP SPT=36226 DPT=9102 SEQ=1559064704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35BD050000000001030307) Dec 5 04:08:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48735 DF PROTO=TCP SPT=37310 DPT=9100 SEQ=4289559065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35C4C60000000001030307) Dec 5 04:08:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28639 DF PROTO=TCP SPT=36226 DPT=9102 SEQ=1559064704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35C5050000000001030307) Dec 5 04:08:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48736 DF PROTO=TCP SPT=37310 DPT=9100 SEQ=4289559065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35C8C60000000001030307) Dec 5 04:08:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48737 DF PROTO=TCP SPT=37310 DPT=9100 SEQ=4289559065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35D0C50000000001030307) Dec 5 04:08:18 localhost sshd[104024]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:08:18 localhost systemd-logind[760]: New session 36 of user zuul. Dec 5 04:08:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28640 DF PROTO=TCP SPT=36226 DPT=9102 SEQ=1559064704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35D4C60000000001030307) Dec 5 04:08:18 localhost systemd[1]: Started Session 36 of User zuul. 
Dec 5 04:08:19 localhost python3.9[104119]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:08:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35935 DF PROTO=TCP SPT=57586 DPT=9101 SEQ=708683241 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35D8B20000000001030307) Dec 5 04:08:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21489 DF PROTO=TCP SPT=60572 DPT=9882 SEQ=639172972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35DAAB0000000001030307) Dec 5 04:08:20 localhost python3.9[104213]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:08:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35936 DF PROTO=TCP SPT=57586 DPT=9101 SEQ=708683241 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35DCC50000000001030307) Dec 5 04:08:20 localhost python3.9[104306]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:08:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21490 DF PROTO=TCP SPT=60572 DPT=9882 SEQ=639172972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35DEC50000000001030307) Dec 5 04:08:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48738 DF PROTO=TCP SPT=37310 DPT=9100 SEQ=4289559065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35E0850000000001030307) Dec 5 04:08:21 localhost python3.9[104400]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:08:22 localhost python3.9[104493]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:08:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35937 DF PROTO=TCP SPT=57586 DPT=9101 SEQ=708683241 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35E4C50000000001030307) Dec 5 04:08:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21491 DF PROTO=TCP SPT=60572 DPT=9882 SEQ=639172972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35E6C50000000001030307) Dec 5 04:08:23 localhost python3.9[104584]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Dec 5 04:08:24 localhost python3.9[104674]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:08:25 localhost python3.9[104766]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Dec 5 04:08:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28641 DF PROTO=TCP SPT=36226 DPT=9102 SEQ=1559064704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35F4470000000001030307) Dec 5 04:08:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35938 DF PROTO=TCP SPT=57586 DPT=9101 SEQ=708683241 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35F4850000000001030307) Dec 5 04:08:26 localhost python3.9[104856]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 5 04:08:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21492 DF PROTO=TCP SPT=60572 DPT=9882 SEQ=639172972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35F6850000000001030307) Dec 5 04:08:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58050 DF PROTO=TCP SPT=38854 DPT=9105 SEQ=3373099246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35F7310000000001030307) Dec 5 04:08:27 localhost python3.9[104904]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:08:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
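The two ansible-ansible.legacy.command entries above pull the `host` value that puppet generated into nova.conf and neutron.conf, and the play then compares it against `/usr/bin/hostname -f`. Below is a minimal standalone sketch of that read, reconstructed from the logged one-liners (the paths are the ones in the log; using socket.getfqdn() for the FQDN side is an assumed stand-in for the separate hostname task):

    import configparser
    import socket

    # Paths as logged by the ansible tasks above.
    CONF_PATHS = {
        "nova": "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf",
        "neutron": "/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf",
    }

    def configured_host(path):
        # strict=False mirrors the logged invocation; it tolerates the
        # duplicate options that puppet-generated files can contain.
        parser = configparser.ConfigParser(strict=False)
        parser.read(path)
        return parser["DEFAULT"]["host"]

    if __name__ == "__main__":
        fqdn = socket.getfqdn()  # assumed stand-in for `/usr/bin/hostname -f`
        for service, path in CONF_PATHS.items():
            print(f"{service}: host={configured_host(path)} fqdn={fqdn}")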
Dec 5 04:08:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58051 DF PROTO=TCP SPT=38854 DPT=9105 SEQ=3373099246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC35FB450000000001030307) Dec 5 04:08:28 localhost podman[104920]: 2025-12-05 09:08:28.213193904 +0000 UTC m=+0.091471800 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, container_name=nova_compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red 
Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:08:28 localhost podman[104920]: 2025-12-05 09:08:28.235110872 +0000 UTC m=+0.113388798 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, version=17.1.12, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:08:28 localhost podman[104920]: unhealthy Dec 5 04:08:28 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:08:28 localhost systemd[1]: 
34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'. Dec 5 04:08:28 localhost systemd[1]: session-36.scope: Deactivated successfully. Dec 5 04:08:28 localhost systemd[1]: session-36.scope: Consumed 5.051s CPU time. Dec 5 04:08:28 localhost systemd-logind[760]: Session 36 logged out. Waiting for processes to exit. Dec 5 04:08:28 localhost systemd-logind[760]: Removed session 36. Dec 5 04:08:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48739 DF PROTO=TCP SPT=37310 DPT=9100 SEQ=4289559065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3600450000000001030307) Dec 5 04:08:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:08:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:08:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 04:08:30 localhost systemd[1]: tmp-crun.MybRnR.mount: Deactivated successfully. Dec 5 04:08:30 localhost podman[104945]: 2025-12-05 09:08:30.199425335 +0000 UTC m=+0.081789305 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1) Dec 5 04:08:30 localhost podman[104943]: 2025-12-05 09:08:30.219904999 +0000 UTC m=+0.103238158 container health_status 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 5 04:08:30 localhost podman[104943]: 2025-12-05 09:08:30.260154507 +0000 UTC m=+0.143487646 container exec_died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step4) Dec 5 04:08:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58052 DF PROTO=TCP SPT=38854 DPT=9105 SEQ=3373099246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3603460000000001030307) Dec 5 04:08:30 localhost podman[104943]: unhealthy Dec 5 04:08:30 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:08:30 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Failed with result 'exit-code'. 
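The kernel DROPPING: entries recurring through this window are netfilter LOG output for SYN retries from 192.168.122.10 to the monitoring ports (9100-9105, 9882) on br-ex. A small sketch that tallies them per flow, assuming the standard netfilter LOG field layout seen above; the /var/log/messages input path is hypothetical:

    import re
    from collections import Counter

    # Field layout matches the entries above:
    # DROPPING: IN=br-ex ... SRC=a.b.c.d DST=a.b.c.d ... PROTO=TCP SPT=n DPT=m
    PATTERN = re.compile(r"\bSRC=(\S+) DST=(\S+).*?\bPROTO=(\S+) SPT=(\d+) DPT=(\d+)")

    def summarize(lines):
        counts = Counter()
        for line in lines:
            if "DROPPING:" not in line:
                continue
            match = PATTERN.search(line)
            if match:
                src, dst, proto, _spt, dpt = match.groups()
                # Key by flow target; the ephemeral source port varies per retry.
                counts[(src, dst, proto, dpt)] += 1
        return counts

    if __name__ == "__main__":
        with open("/var/log/messages") as log:  # hypothetical input path
            for (src, dst, proto, dpt), n in summarize(log).most_common():
                print(f"{src} -> {dst} {proto}/{dpt}: {n} dropped")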
Dec 5 04:08:30 localhost podman[104945]: 2025-12-05 09:08:30.294805243 +0000 UTC m=+0.177169153 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, url=https://www.redhat.com) Dec 5 04:08:30 localhost podman[104945]: unhealthy Dec 5 04:08:30 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:08:30 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Failed with result 'exit-code'. 
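Each "Started /usr/bin/podman healthcheck run <id>" line above is a transient systemd unit wrapping a single healthcheck run; when the check exits non-zero, podman prints "unhealthy" and systemd records status=1/FAILURE, as it just did for ceilometer_agent_compute and ceilometer_agent_ipmi. A triage sketch that re-runs the same check by container name (names are taken from the container labels above; this is an illustration, not part of the play):

    import subprocess

    # Container names from the labels logged above.
    CONTAINERS = ["nova_compute", "ceilometer_agent_compute", "ceilometer_agent_ipmi"]

    for name in CONTAINERS:
        # Same command the transient unit wraps; exit 0 means healthy, and a
        # non-zero exit is what systemd surfaces as status=1/FAILURE.
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True, text=True)
        state = "healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})"
        print(f"{name}: {state}")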
Dec 5 04:08:30 localhost podman[104944]: 2025-12-05 09:08:30.269550353 +0000 UTC m=+0.151008445 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, container_name=logrotate_crond, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4) Dec 5 04:08:30 localhost podman[104944]: 2025-12-05 09:08:30.352842613 +0000 UTC m=+0.234300665 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-cron) Dec 5 04:08:30 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:08:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 04:08:31 localhost podman[105003]: 2025-12-05 09:08:31.172334681 +0000 UTC m=+0.065074405 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team) Dec 5 04:08:31 localhost systemd[1]: tmp-crun.psQjIH.mount: Deactivated successfully. Dec 5 04:08:31 localhost podman[105003]: 2025-12-05 09:08:31.559700692 +0000 UTC m=+0.452440436 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, io.buildah.version=1.41.4, 
distribution-scope=public) Dec 5 04:08:31 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:08:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58053 DF PROTO=TCP SPT=38854 DPT=9105 SEQ=3373099246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3613060000000001030307) Dec 5 04:08:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:08:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:08:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:08:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:08:36 localhost systemd[1]: tmp-crun.9V6JHb.mount: Deactivated successfully. Dec 5 04:08:36 localhost podman[105035]: 2025-12-05 09:08:36.242235667 +0000 UTC m=+0.094620156 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12) Dec 5 04:08:36 localhost podman[105027]: 2025-12-05 09:08:36.210788749 +0000 UTC m=+0.077664090 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:08:36 localhost podman[105027]: 2025-12-05 09:08:36.293701717 +0000 UTC m=+0.160577058 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, 
vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Dec 5 04:08:36 localhost podman[105027]: unhealthy Dec 5 04:08:36 localhost podman[105035]: 2025-12-05 09:08:36.300850905 +0000 UTC m=+0.153235404 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.12, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, 
distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:08:36 localhost podman[105035]: unhealthy Dec 5 04:08:36 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:08:36 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:08:36 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:08:36 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:08:36 localhost podman[105029]: 2025-12-05 09:08:36.333569973 +0000 UTC m=+0.190154580 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, container_name=collectd, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:08:36 localhost podman[105029]: 2025-12-05 09:08:36.346494726 +0000 UTC m=+0.203079333 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vcs-type=git, architecture=x86_64, tcib_managed=true, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, container_name=collectd) Dec 5 04:08:36 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. Dec 5 04:08:36 localhost podman[105028]: 2025-12-05 09:08:36.389332833 +0000 UTC m=+0.251016335 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, distribution-scope=public, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 5 04:08:36 localhost podman[105028]: 2025-12-05 09:08:36.399707019 +0000 UTC m=+0.261390511 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public) Dec 5 04:08:36 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:08:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:08:40 localhost systemd[1]: tmp-crun.x2s1As.mount: Deactivated successfully. 
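The config_data blob embedded in each container's labels is the tripleo_ansible/kolla deployment spec for that container (image, net, privileged, restart policy, bind mounts, healthcheck). The sketch below shows how such a dict could map onto podman run flags, abridged from the iscsid entry above; the mapping function itself is an assumption for illustration, not the code tripleo_ansible actually runs:

    # Abridged from the iscsid config_data logged above.
    config_data = {
        "image": "registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1",
        "net": "host",
        "privileged": True,
        "restart": "always",
        "volumes": [
            "/etc/hosts:/etc/hosts:ro",
            "/dev:/dev",
            "/var/lib/iscsi:/var/lib/iscsi:z",
        ],
    }

    def podman_args(cfg):
        # Hypothetical translation of the spec into CLI flags.
        args = ["podman", "run", "--detach",
                f"--net={cfg['net']}", f"--restart={cfg['restart']}"]
        if cfg.get("privileged"):
            args.append("--privileged")
        for volume in cfg.get("volumes", []):
            args += ["-v", volume]
        args.append(cfg["image"])
        return args

    print(" ".join(podman_args(config_data)))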
Dec 5 04:08:40 localhost podman[105104]: 2025-12-05 09:08:40.215653634 +0000 UTC m=+0.102460266 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, architecture=x86_64) Dec 5 04:08:40 localhost podman[105104]: 2025-12-05 09:08:40.495635541 +0000 UTC m=+0.382442203 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, container_name=metrics_qdr, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 5 04:08:40 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 04:08:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20364 DF PROTO=TCP SPT=43420 DPT=9102 SEQ=2057909069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC362E170000000001030307)
Dec 5 04:08:42 localhost sshd[105176]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:08:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20365 DF PROTO=TCP SPT=43420 DPT=9102 SEQ=2057909069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3632050000000001030307)
Dec 5 04:08:42 localhost systemd-logind[760]: New session 37 of user zuul.
Dec 5 04:08:42 localhost systemd[1]: Started Session 37 of User zuul.
Dec 5 04:08:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58054 DF PROTO=TCP SPT=38854 DPT=9105 SEQ=3373099246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3634450000000001030307)
Dec 5 04:08:43 localhost python3.9[105303]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 5 04:08:43 localhost systemd[1]: Reloading.
Dec 5 04:08:43 localhost systemd-rc-local-generator[105328]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:08:43 localhost systemd-sysv-generator[105333]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:08:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:08:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47387 DF PROTO=TCP SPT=55684 DPT=9100 SEQ=3069143409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3639F60000000001030307)
Dec 5 04:08:44 localhost python3.9[105429]: ansible-ansible.builtin.service_facts Invoked
Dec 5 04:08:44 localhost network[105446]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 5 04:08:44 localhost network[105447]: 'network-scripts' will be removed from distribution in near future.
Dec 5 04:08:44 localhost network[105448]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 5 04:08:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:08:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47389 DF PROTO=TCP SPT=55684 DPT=9100 SEQ=3069143409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3646050000000001030307)
Dec 5 04:08:49 localhost python3.9[105645]: ansible-ansible.builtin.service_facts Invoked
Dec 5 04:08:50 localhost network[105662]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 5 04:08:50 localhost network[105663]: 'network-scripts' will be removed from distribution in near future.
Dec 5 04:08:50 localhost network[105664]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 5 04:08:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=157 DF PROTO=TCP SPT=39752 DPT=9101 SEQ=4182714626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3652050000000001030307)
Dec 5 04:08:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:08:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=159 DF PROTO=TCP SPT=39752 DPT=9101 SEQ=4182714626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3669C50000000001030307)
Dec 5 04:08:56 localhost python3.9[105863]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:08:56 localhost systemd[1]: Reloading.
Dec 5 04:08:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20368 DF PROTO=TCP SPT=43420 DPT=9102 SEQ=2057909069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC366A450000000001030307)
Dec 5 04:08:56 localhost systemd-rc-local-generator[105888]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:08:56 localhost systemd-sysv-generator[105892]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:08:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:08:56 localhost systemd[1]: Stopping ceilometer_agent_compute container...
Dec 5 04:08:57 localhost systemd[1]: tmp-crun.xsXHpF.mount: Deactivated successfully.
Dec 5 04:08:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.
Dec 5 04:08:59 localhost podman[105917]: 2025-12-05 09:08:59.178452822 +0000 UTC m=+0.063859738 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat
OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step5, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:08:59 localhost podman[105917]: 2025-12-05 09:08:59.2016872 +0000 UTC m=+0.087094106 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Dec 5 04:08:59 localhost podman[105917]: unhealthy
Dec 5 04:08:59 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:08:59 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'.
Dec 5 04:08:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47391 DF PROTO=TCP SPT=55684 DPT=9100 SEQ=3069143409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3676460000000001030307)
Dec 5 04:09:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.
Dec 5 04:09:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 04:09:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
Dec 5 04:09:01 localhost podman[105942]: Error: container 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d is not running
Dec 5 04:09:01 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Main process exited, code=exited, status=125/n/a
Dec 5 04:09:01 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Failed with result 'exit-code'.
Dec 5 04:09:01 localhost podman[105943]: 2025-12-05 09:09:01.201648411 +0000 UTC m=+0.081581998 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, release=1761123044, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 5 04:09:01 localhost podman[105943]: 2025-12-05 09:09:01.211829962 +0000 UTC m=+0.091763589 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, architecture=x86_64, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 5 04:09:01 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:09:01 localhost systemd[1]: tmp-crun.7MpmGT.mount: Deactivated successfully. Dec 5 04:09:01 localhost podman[105949]: 2025-12-05 09:09:01.264391304 +0000 UTC m=+0.138997059 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64) Dec 5 04:09:01 localhost podman[105949]: 2025-12-05 09:09:01.284082174 +0000 UTC m=+0.158687939 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 5 04:09:01 localhost podman[105949]: unhealthy Dec 5 04:09:01 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Main 
process exited, code=exited, status=1/FAILURE Dec 5 04:09:01 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Failed with result 'exit-code'. Dec 5 04:09:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 04:09:02 localhost systemd[1]: tmp-crun.E9Is21.mount: Deactivated successfully. Dec 5 04:09:02 localhost podman[105994]: 2025-12-05 09:09:02.194143123 +0000 UTC m=+0.080457605 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:09:02 localhost podman[105994]: 2025-12-05 09:09:02.582852055 +0000 UTC m=+0.469166517 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, container_name=nova_migration_target, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Dec 5 04:09:02 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully.
Dec 5 04:09:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23282 DF PROTO=TCP SPT=45742 DPT=9105 SEQ=3794790945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3688460000000001030307)
Dec 5 04:09:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16328 DF PROTO=TCP SPT=53526 DPT=9882 SEQ=9726397 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC368C460000000001030307)
Dec 5 04:09:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.
Dec 5 04:09:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.
Dec 5 04:09:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.
Dec 5 04:09:06 localhost podman[106017]: 2025-12-05 09:09:06.462235292 +0000 UTC m=+0.082042563 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-collectd-container, container_name=collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 5 04:09:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. 
Dec 5 04:09:06 localhost podman[106017]: 2025-12-05 09:09:06.505082978 +0000 UTC m=+0.124890239 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 5 04:09:06 localhost systemd[1]: tmp-crun.OKNTuV.mount: Deactivated successfully. 
Dec 5 04:09:06 localhost podman[106015]: 2025-12-05 09:09:06.513737762 +0000 UTC m=+0.140752183 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 04:09:06 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 04:09:06 localhost podman[106015]: 2025-12-05 09:09:06.526604545 +0000 UTC m=+0.153618956 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, distribution-scope=public, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent) Dec 5 04:09:06 localhost podman[106015]: unhealthy Dec 5 04:09:06 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:09:06 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
Dec 5 04:09:06 localhost podman[106057]: 2025-12-05 09:09:06.569739929 +0000 UTC m=+0.082154085 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, version=17.1.12, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git) Dec 5 04:09:06 localhost podman[106057]: 2025-12-05 09:09:06.576717823 +0000 UTC m=+0.089131969 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:09:06 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:09:06 localhost podman[106016]: 2025-12-05 09:09:06.654928148 +0000 UTC m=+0.281176115 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4) Dec 5 04:09:06 localhost podman[106016]: 2025-12-05 09:09:06.697739173 +0000 UTC m=+0.323987100 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 5 04:09:06 localhost podman[106016]: unhealthy Dec 5 04:09:06 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:09:06 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:09:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. Dec 5 04:09:10 localhost systemd[1]: tmp-crun.zIe9Tj.mount: Deactivated successfully. 
Dec 5 04:09:10 localhost podman[106090]: 2025-12-05 09:09:10.696333453 +0000 UTC m=+0.083312992 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-qdrouterd, architecture=x86_64, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com) Dec 5 04:09:10 localhost podman[106090]: 2025-12-05 09:09:10.891414292 +0000 UTC m=+0.278393811 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, vcs-type=git, container_name=metrics_qdr, config_id=tripleo_step1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:09:10 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. Dec 5 04:09:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4221 DF PROTO=TCP SPT=39436 DPT=9102 SEQ=2450350039 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC36A3460000000001030307) Dec 5 04:09:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4222 DF PROTO=TCP SPT=39436 DPT=9102 SEQ=2450350039 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC36A7450000000001030307) Dec 5 04:09:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22516 DF PROTO=TCP SPT=53580 DPT=9100 SEQ=2292440385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC36AF260000000001030307) Dec 5 04:09:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22518 DF PROTO=TCP SPT=53580 DPT=9100 SEQ=2292440385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC36BB450000000001030307) Dec 5 04:09:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32425 DF PROTO=TCP SPT=38460 DPT=9101 SEQ=3198202159 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC36C7050000000001030307) Dec 5 04:09:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35941 DF PROTO=TCP SPT=57586 DPT=9101 SEQ=708683241 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC36D2450000000001030307) Dec 5 04:09:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32427 DF PROTO=TCP SPT=38460 DPT=9101 SEQ=3198202159 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC36DEC50000000001030307) Dec 5 04:09:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. Dec 5 04:09:29 localhost podman[106119]: 2025-12-05 09:09:29.468436403 +0000 UTC m=+0.098640968 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, distribution-scope=public, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z) Dec 5 04:09:29 localhost podman[106119]: 2025-12-05 09:09:29.516701035 +0000 UTC m=+0.146905590 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step5, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12) Dec 5 04:09:29 localhost podman[106119]: unhealthy Dec 5 04:09:29 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:09:29 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'. Dec 5 04:09:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22520 DF PROTO=TCP SPT=53580 DPT=9100 SEQ=2292440385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC36EC450000000001030307) Dec 5 04:09:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:09:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:09:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 04:09:31 localhost podman[106141]: Error: container 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d is not running Dec 5 04:09:31 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Main process exited, code=exited, status=125/n/a Dec 5 04:09:31 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Failed with result 'exit-code'. Dec 5 04:09:31 localhost podman[106143]: 2025-12-05 09:09:31.761302727 +0000 UTC m=+0.136897356 container health_status bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi) Dec 5 04:09:31 localhost podman[106143]: 2025-12-05 09:09:31.77981773 +0000 UTC m=+0.155412369 container exec_died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 5 04:09:31 localhost podman[106143]: unhealthy Dec 5 04:09:31 localhost systemd[1]: 
bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:09:31 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Failed with result 'exit-code'. Dec 5 04:09:31 localhost systemd[1]: tmp-crun.l7svUq.mount: Deactivated successfully. Dec 5 04:09:31 localhost podman[106142]: 2025-12-05 09:09:31.87423817 +0000 UTC m=+0.246596610 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, container_name=logrotate_crond, name=rhosp17/openstack-cron, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron) Dec 5 04:09:31 localhost podman[106142]: 2025-12-05 09:09:31.912677622 +0000 UTC m=+0.285036042 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-cron) Dec 5 04:09:31 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:09:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 04:09:33 localhost systemd[1]: tmp-crun.6ZRUPQ.mount: Deactivated successfully. 
Dec 5 04:09:33 localhost podman[106193]: 2025-12-05 09:09:33.216006432 +0000 UTC m=+0.101497066 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=) Dec 5 04:09:33 localhost podman[106193]: 2025-12-05 09:09:33.596750701 +0000 UTC m=+0.482241355 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Dec 5 04:09:33 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:09:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38475 DF PROTO=TCP SPT=34756 DPT=9105 SEQ=635176137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC36FD450000000001030307) Dec 5 04:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. Dec 5 04:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
Dec 5 04:09:36 localhost podman[106219]: 2025-12-05 09:09:36.970033987 +0000 UTC m=+0.102486967 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, container_name=collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible) Dec 5 04:09:36 localhost podman[106218]: 2025-12-05 09:09:36.934842683 +0000 UTC m=+0.070635874 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:09:37 localhost systemd[1]: tmp-crun.vQA9Y9.mount: Deactivated successfully. 
Dec 5 04:09:37 localhost podman[106217]: 2025-12-05 09:09:37.005030383 +0000 UTC m=+0.141135034 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible) Dec 5 04:09:37 localhost podman[106218]: 2025-12-05 09:09:37.017886436 +0000 UTC m=+0.153679657 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, architecture=x86_64, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, distribution-scope=public) Dec 5 04:09:37 localhost podman[106217]: 2025-12-05 09:09:37.027076716 +0000 UTC m=+0.163181417 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent) Dec 5 04:09:37 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. Dec 5 04:09:37 localhost podman[106217]: unhealthy Dec 5 04:09:37 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:09:37 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
Dec 5 04:09:37 localhost podman[106219]: 2025-12-05 09:09:37.105976241 +0000 UTC m=+0.238429261 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 5 04:09:37 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully. 
Dec 5 04:09:37 localhost podman[106220]: 2025-12-05 09:09:37.159854255 +0000 UTC m=+0.288528279 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:09:37 localhost podman[106220]: 2025-12-05 09:09:37.2016938 +0000 UTC m=+0.330367784 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.buildah.version=1.41.4) Dec 5 04:09:37 localhost podman[106220]: unhealthy Dec 5 04:09:37 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:09:37 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:09:39 localhost podman[105903]: time="2025-12-05T09:09:39Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL" Dec 5 04:09:39 localhost systemd[1]: libpod-21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.scope: Deactivated successfully. Dec 5 04:09:39 localhost systemd[1]: libpod-21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.scope: Consumed 6.513s CPU time. Dec 5 04:09:39 localhost podman[105903]: 2025-12-05 09:09:39.106640575 +0000 UTC m=+42.119913713 container died 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true) Dec 5 04:09:39 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.timer: Deactivated successfully. Dec 5 04:09:39 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d. Dec 5 04:09:39 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Failed to open /run/systemd/transient/21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: No such file or directory Dec 5 04:09:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d-userdata-shm.mount: Deactivated successfully. Dec 5 04:09:39 localhost podman[105903]: 2025-12-05 09:09:39.177281319 +0000 UTC m=+42.190554437 container cleanup 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 5 04:09:39 localhost podman[105903]: ceilometer_agent_compute Dec 5 04:09:39 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.timer: Failed to open /run/systemd/transient/21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.timer: No such file or directory Dec 5 04:09:39 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Failed to open /run/systemd/transient/21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: No such file or directory Dec 5 04:09:39 localhost podman[106295]: 2025-12-05 09:09:39.250603314 +0000 UTC m=+0.133897544 container cleanup 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, 
io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, config_id=tripleo_step4) Dec 5 04:09:39 localhost systemd[1]: libpod-conmon-21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.scope: Deactivated successfully. Dec 5 04:09:39 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.timer: Failed to open /run/systemd/transient/21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.timer: No such file or directory Dec 5 04:09:39 localhost systemd[1]: 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: Failed to open /run/systemd/transient/21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d.service: No such file or directory Dec 5 04:09:39 localhost podman[106309]: 2025-12-05 09:09:39.353125401 +0000 UTC m=+0.064076365 container cleanup 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vendor=Red Hat, Inc.) Dec 5 04:09:39 localhost podman[106309]: ceilometer_agent_compute Dec 5 04:09:39 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully. Dec 5 04:09:39 localhost systemd[1]: Stopped ceilometer_agent_compute container. Dec 5 04:09:39 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.112s CPU time, no IO. Dec 5 04:09:40 localhost systemd[1]: var-lib-containers-storage-overlay-eb3992ab45cf22014f9d9042a8d9aa0064890f6d40c0b46ba7788c5c2c544ad5-merged.mount: Deactivated successfully. Dec 5 04:09:40 localhost python3.9[106414]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:09:40 localhost systemd[1]: Reloading. Dec 5 04:09:40 localhost systemd-rc-local-generator[106438]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:09:40 localhost systemd-sysv-generator[106445]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:09:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:09:40 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 04:09:40 localhost systemd[1]: Stopping ceilometer_agent_ipmi container... Dec 5 04:09:40 localhost recover_tripleo_nova_virtqemud[106455]: 61294 Dec 5 04:09:40 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 04:09:40 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 04:09:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668. 
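Note on the health-check plumbing visible above: for each running container podman maintains a transient systemd timer/service pair named after the container ID, and the service runs /usr/bin/podman healthcheck run <id>. When the configured test fails, podman prints "unhealthy" and exits 1, which systemd records as status=1/FAILURE, exactly as seen for ovn_controller. The "Failed to open /run/systemd/transient/21faa0....service/.timer: No such file or directory" messages during the ceilometer_agent_compute shutdown are teardown noise: the transient units are already gone by the time the stop path references them again (note the stop itself exhausted its 42 s SIGTERM grace period before podman fell back to SIGKILL). The ovn_controller test is the one recorded in its config_data, /openstack/healthcheck 6642, i.e. a check against the OVN southbound DB port. A minimal Python stand-in for that probe, assuming a plain TCP connect is an acceptable approximation (the shipped script's exact logic is not shown in this log, so probe() and the 127.0.0.1 target are illustrative assumptions):

    import socket
    import sys

    def probe(host: str, port: int, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # 6642 is the OVN southbound DB port passed to /openstack/healthcheck above.
        ok = probe("127.0.0.1", 6642)
        # Mirror the healthcheck convention: print status, exit 0 healthy / 1 unhealthy.
        print("healthy" if ok else "unhealthy")
        sys.exit(0 if ok else 1)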
Dec 5 04:09:41 localhost podman[106471]: 2025-12-05 09:09:41.199999783 +0000 UTC m=+0.087373335 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Dec 5 04:09:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6142 DF PROTO=TCP SPT=57022 DPT=9102 SEQ=958870401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3718760000000001030307) Dec 5 04:09:41 localhost podman[106471]: 2025-12-05 09:09:41.37962246 +0000 UTC m=+0.266996062 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd) Dec 5 04:09:41 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully. 
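The kernel DROPPING records interleaved through this stretch are netfilter LOG output: a firewall rule on br-ex logs inbound SYNs with the prefix "DROPPING: " before they are discarded, and the field layout (IN=, SRC=, DST=, SPT=, DPT=, SEQ=) is the standard LOG-target format. Every drop here is 192.168.122.10 probing this node on 9100-9105 and 9882, which are metrics-exporter style ports, so a scraper is being walled off. To quantify that from a saved copy of this journal (messages.log is a hypothetical path), a short tally:

    import re
    from collections import Counter

    # Matches the netfilter LOG fields of interest in "DROPPING:" records.
    PAT = re.compile(
        r"DROPPING:.*?SRC=(?P<src>\S+) DST=(?P<dst>\S+).*?"
        r"SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)"
    )

    def tally(path: str) -> Counter:
        """Count dropped SYNs per (source address, destination port)."""
        hits = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                for m in PAT.finditer(line):
                    hits[(m["src"], int(m["dpt"]))] += 1
        return hits

    if __name__ == "__main__":
        for (src, dpt), n in tally("messages.log").most_common():
            print(f"{src} -> dpt {dpt}: {n} dropped SYNs")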
Dec 5 04:09:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6143 DF PROTO=TCP SPT=57022 DPT=9102 SEQ=958870401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC371C850000000001030307) Dec 5 04:09:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38476 DF PROTO=TCP SPT=34756 DPT=9105 SEQ=635176137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC371E460000000001030307) Dec 5 04:09:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64191 DF PROTO=TCP SPT=39938 DPT=9100 SEQ=846146797 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3724570000000001030307) Dec 5 04:09:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64193 DF PROTO=TCP SPT=39938 DPT=9100 SEQ=846146797 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3730450000000001030307) Dec 5 04:09:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61681 DF PROTO=TCP SPT=41360 DPT=9101 SEQ=1228740090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC373C450000000001030307) Dec 5 04:09:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=162 DF PROTO=TCP SPT=39752 DPT=9101 SEQ=4182714626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3748460000000001030307) Dec 5 04:09:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61683 DF PROTO=TCP SPT=41360 DPT=9101 SEQ=1228740090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3754060000000001030307) Dec 5 04:09:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c. 
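Within those drops, the repeats are informative: SPT=39938 with SEQ=846146797 recurs at 04:09:44, 04:09:47 and again at 04:09:59, and SPT=41360 with SEQ=1228740090 at 04:09:50 and 04:09:56. An unchanged sequence number means these are kernel SYN retransmissions of a single blocked connect, backing off roughly exponentially, not fresh probes; the scraper is stuck retrying and every retry is dropped. Grouping the parsed records by source port and sequence number (same hypothetical messages.log as above) makes the chains explicit:

    import re
    from collections import defaultdict

    # Several records can share one physical line in this excerpt, so use finditer.
    PAT = re.compile(
        r"(?P<stamp>\d{2}:\d{2}:\d{2}) localhost kernel: DROPPING:.*?"
        r"SPT=(?P<spt>\d+) DPT=\d+ SEQ=(?P<seq>\d+)"
    )

    def retransmit_chains(path: str) -> dict:
        """Group dropped SYNs by (src port, seq); >1 timestamp = SYN retransmits."""
        chains = defaultdict(list)
        with open(path, encoding="utf-8", errors="replace") as fh:
            for m in PAT.finditer(fh.read()):
                chains[(m["spt"], m["seq"])].append(m["stamp"])
        return {k: v for k, v in chains.items() if len(v) > 1}

    if __name__ == "__main__":
        for (spt, seq), stamps in retransmit_chains("messages.log").items():
            print(f"SPT={spt} SEQ={seq}: dropped at {', '.join(stamps)}")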
Dec 5 04:09:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64195 DF PROTO=TCP SPT=39938 DPT=9100 SEQ=846146797 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3760450000000001030307) Dec 5 04:09:59 localhost podman[106576]: 2025-12-05 09:09:59.707030766 +0000 UTC m=+0.090016966 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, url=https://www.redhat.com, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:09:59 localhost podman[106576]: 2025-12-05 09:09:59.729670236 +0000 UTC m=+0.112656526 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=nova_compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step5, vcs-type=git, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:09:59 localhost podman[106576]: unhealthy Dec 5 04:09:59 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:09:59 localhost systemd[1]: 
34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'. Dec 5 04:10:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f. Dec 5 04:10:01 localhost podman[106598]: Error: container bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f is not running Dec 5 04:10:01 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Main process exited, code=exited, status=125/n/a Dec 5 04:10:01 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Failed with result 'exit-code'. Dec 5 04:10:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db. Dec 5 04:10:02 localhost podman[106611]: 2025-12-05 09:10:02.177019888 +0000 UTC m=+0.067919812 container health_status 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, name=rhosp17/openstack-cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 04:10:02 localhost podman[106611]: 2025-12-05 09:10:02.186082124 +0000 UTC m=+0.076982088 container exec_died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12) Dec 5 04:10:02 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Deactivated successfully. Dec 5 04:10:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. 
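Two distinct failure codes show up around here and should be read differently: status=1/FAILURE (nova_compute above) means the healthcheck ran and its test reported unhealthy, while status=125/n/a (container bdd10ad2...) is podman's own error exit, here because the target container is not running at all, as its "Error: container ... is not running" line states. When triaging, the recorded health state can be queried directly instead of grepping the journal. A sketch via podman inspect; the Go-template field is .State.Health.Status on podman 4.x while older builds expose .State.Healthcheck.Status, so both are tried:

    import subprocess

    def health_status(name: str) -> str:
        """Return a container's recorded health status via podman inspect."""
        proc = None
        for field in (".State.Health.Status", ".State.Healthcheck.Status"):
            proc = subprocess.run(
                ["podman", "inspect", "--format", "{{" + field + "}}", name],
                capture_output=True, text=True,
            )
            if proc.returncode == 0 and proc.stdout.strip():
                return proc.stdout.strip()
        # Non-zero exit (e.g. no such container) mirrors the 125-style errors above.
        return "unknown: " + proc.stderr.strip()

    if __name__ == "__main__":
        for name in ("ovn_controller", "nova_compute", "logrotate_crond"):
            print(name, "->", health_status(name))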
Dec 5 04:10:04 localhost podman[106631]: 2025-12-05 09:10:04.163984162 +0000 UTC m=+0.051159901 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:10:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24389 DF PROTO=TCP SPT=34548 DPT=9105 SEQ=169206956 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3772860000000001030307) Dec 5 04:10:04 localhost podman[106631]: 2025-12-05 09:10:04.54072778 +0000 UTC m=+0.427903299 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 5 04:10:04 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully. Dec 5 04:10:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7530 DF PROTO=TCP SPT=43180 DPT=9882 SEQ=3302341743 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3776450000000001030307) Dec 5 04:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f. 
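Each of these events carries the container's full TripleO definition in its config_data label: image, net, volumes, start_order and the healthcheck test itself. Note the syntax, single quotes and capitalized True/False: it is a Python dict literal rather than JSON, so json.loads would reject it while ast.literal_eval parses it safely. That makes it straightforward to recover, for example, the exact bind mounts of nova_migration_target from the live container rather than from scrollback (a sketch; the label name config_data and the container name are taken from the records above):

    import ast
    import subprocess

    def config_data(name: str) -> dict:
        """Read the TripleO 'config_data' label and parse the Python-literal dict."""
        out = subprocess.run(
            ["podman", "inspect", "--format",
             '{{index .Config.Labels "config_data"}}', name],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return ast.literal_eval(out)

    if __name__ == "__main__":
        cfg = config_data("nova_migration_target")
        print("image:", cfg["image"])
        for vol in cfg.get("volumes", []):
            print("  volume:", vol)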
Dec 5 04:10:07 localhost podman[106654]: 2025-12-05 09:10:07.179163289 +0000 UTC m=+0.067754377 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 5 04:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d. Dec 5 04:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. 
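The burst of "Started /usr/bin/podman healthcheck run <id>" lines here is the scheduler side of the same mechanism: one transient systemd timer per container, each firing on its own interval, which is why four checks (ovn_metadata_agent, iscsid, collectd, ovn_controller) land within the same second. While they exist, the per-container timers are ordinary units and can be enumerated; a sketch that filters timer units whose names are 64-hex-digit container IDs:

    import re
    import subprocess

    HEX64 = re.compile(r"^[0-9a-f]{64}\.timer$")

    def healthcheck_timers() -> list:
        """List the per-container transient healthcheck timers systemd is running."""
        out = subprocess.run(
            ["systemctl", "list-units", "--type=timer", "--no-legend", "--plain"],
            capture_output=True, text=True, check=True,
        ).stdout
        units = [line.split()[0] for line in out.splitlines() if line.strip()]
        return [u for u in units if HEX64.match(u)]

    if __name__ == "__main__":
        for unit in healthcheck_timers():
            print(unit)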
Dec 5 04:10:07 localhost podman[106655]: 2025-12-05 09:10:07.266330697 +0000 UTC m=+0.149001875 container health_status 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z) Dec 5 04:10:07 localhost podman[106654]: 2025-12-05 09:10:07.272809674 +0000 UTC m=+0.161400802 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4) Dec 5 04:10:07 localhost podman[106654]: unhealthy Dec 5 04:10:07 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:10:07 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:10:07 localhost systemd[1]: tmp-crun.ge91i2.mount: Deactivated successfully. 
Dec 5 04:10:07 localhost podman[106701]: 2025-12-05 09:10:07.347088019 +0000 UTC m=+0.068584292 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 04:10:07 localhost podman[106655]: 2025-12-05 09:10:07.378996112 +0000 UTC m=+0.261667270 container exec_died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:10:07 localhost podman[106701]: 2025-12-05 09:10:07.392576486 +0000 UTC m=+0.114072729 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4) Dec 5 04:10:07 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Deactivated successfully. 
Dec 5 04:10:07 localhost podman[106701]: unhealthy
Dec 5 04:10:07 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:10:07 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'.
Dec 5 04:10:07 localhost podman[106683]: 2025-12-05 09:10:07.461437375 +0000 UTC m=+0.262674969 container health_status 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible)
Dec 5 04:10:07 localhost podman[106683]: 2025-12-05 09:10:07.470352968 +0000 UTC m=+0.271590542 container exec_died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, url=https://www.redhat.com, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 04:10:07 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Deactivated successfully.
Dec 5 04:10:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56781 DF PROTO=TCP SPT=42566 DPT=9102 SEQ=4123257407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC378DA70000000001030307)
Dec 5 04:10:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 04:10:11 localhost podman[106737]: 2025-12-05 09:10:11.943033434 +0000 UTC m=+0.083620001 container health_status a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container)
Dec 5 04:10:12 localhost podman[106737]: 2025-12-05 09:10:12.140566708 +0000 UTC m=+0.281153235 container exec_died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=metrics_qdr, distribution-scope=public, release=1761123044, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 5 04:10:12 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Deactivated successfully.
Dec 5 04:10:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56782 DF PROTO=TCP SPT=42566 DPT=9102 SEQ=4123257407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3791C50000000001030307)
Dec 5 04:10:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55561 DF PROTO=TCP SPT=56340 DPT=9100 SEQ=3652089529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3799870000000001030307)
Dec 5 04:10:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55563 DF PROTO=TCP SPT=56340 DPT=9100 SEQ=3652089529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC37A5850000000001030307)
Dec 5 04:10:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44833 DF PROTO=TCP SPT=59004 DPT=9101 SEQ=3347044705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC37B1860000000001030307)
Dec 5 04:10:22 localhost podman[106457]: time="2025-12-05T09:10:22Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL"
Dec 5 04:10:22 localhost systemd[1]: tmp-crun.Vq1Wpx.mount: Deactivated successfully.
Dec 5 04:10:22 localhost systemd[1]: libpod-bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.scope: Deactivated successfully.
Dec 5 04:10:22 localhost systemd[1]: libpod-bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.scope: Consumed 6.451s CPU time.
Dec 5 04:10:22 localhost podman[106457]: 2025-12-05 09:10:22.687866978 +0000 UTC m=+42.111599174 container died bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 5 04:10:22 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.timer: Deactivated successfully.
Dec 5 04:10:22 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.
Dec 5 04:10:22 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Failed to open /run/systemd/transient/bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: No such file or directory
Dec 5 04:10:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f-userdata-shm.mount: Deactivated successfully.
Dec 5 04:10:22 localhost systemd[1]: var-lib-containers-storage-overlay-d3249738e4d9421a1f001f331e5d9a6df3d763b74a379f4e69853dd9965f5c52-merged.mount: Deactivated successfully.
Dec 5 04:10:22 localhost podman[106457]: 2025-12-05 09:10:22.747329351 +0000 UTC m=+42.171061547 container cleanup bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc.)
Dec 5 04:10:22 localhost podman[106457]: ceilometer_agent_ipmi
Dec 5 04:10:22 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.timer: Failed to open /run/systemd/transient/bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.timer: No such file or directory
Dec 5 04:10:22 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Failed to open /run/systemd/transient/bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: No such file or directory
Dec 5 04:10:22 localhost podman[106767]: 2025-12-05 09:10:22.776532041 +0000 UTC m=+0.081730343 container cleanup bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 5 04:10:22 localhost systemd[1]: libpod-conmon-bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.scope: Deactivated successfully.
Dec 5 04:10:22 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.timer: Failed to open /run/systemd/transient/bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.timer: No such file or directory
Dec 5 04:10:22 localhost systemd[1]: bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: Failed to open /run/systemd/transient/bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f.service: No such file or directory
Dec 5 04:10:22 localhost podman[106781]: 2025-12-05 09:10:22.876005954 +0000 UTC m=+0.070121749 container cleanup bdd10ad2ed63417a770e6519ee02ca0dad1c2e7d72276ab5a84129da48d2974f (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public)
Dec 5 04:10:22 localhost podman[106781]: ceilometer_agent_ipmi
Dec 5 04:10:22 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec 5 04:10:22 localhost systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec 5 04:10:22 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Consumed 1.009s CPU time, no IO.
Dec 5 04:10:23 localhost python3.9[106884]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:10:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19689 DF PROTO=TCP SPT=57532 DPT=9882 SEQ=978254149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC37BE460000000001030307)
Dec 5 04:10:23 localhost systemd[1]: Reloading.
Dec 5 04:10:23 localhost systemd-rc-local-generator[106908]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:10:23 localhost systemd-sysv-generator[106912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:10:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:10:24 localhost systemd[1]: Stopping collectd container...
Dec 5 04:10:25 localhost systemd[1]: libpod-33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.scope: Deactivated successfully.
Dec 5 04:10:25 localhost systemd[1]: libpod-33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.scope: Consumed 2.144s CPU time.
Dec 5 04:10:25 localhost podman[106925]: 2025-12-05 09:10:25.717193505 +0000 UTC m=+1.640212723 container died 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team)
Dec 5 04:10:25 localhost systemd[1]: tmp-crun.1PqfOf.mount: Deactivated successfully.
Dec 5 04:10:25 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.timer: Deactivated successfully.
Dec 5 04:10:25 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.
Dec 5 04:10:25 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Failed to open /run/systemd/transient/33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: No such file or directory
Dec 5 04:10:25 localhost podman[106925]: 2025-12-05 09:10:25.783643672 +0000 UTC m=+1.706662840 container cleanup 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044)
Dec 5 04:10:25 localhost podman[106925]: collectd
Dec 5 04:10:25 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.timer: Failed to open /run/systemd/transient/33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.timer: No such file or directory
Dec 5 04:10:25 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Failed to open /run/systemd/transient/33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: No such file or directory
Dec 5 04:10:25 localhost podman[106938]: 2025-12-05 09:10:25.818147764 +0000 UTC m=+0.089456059 container cleanup 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 5 04:10:25 localhost systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:10:25 localhost systemd[1]: libpod-conmon-33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.scope: Deactivated successfully.
Dec 5 04:10:25 localhost podman[106969]: error opening file `/run/crun/33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d/status`: No such file or directory
Dec 5 04:10:25 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.timer: Failed to open /run/systemd/transient/33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.timer: No such file or directory
Dec 5 04:10:25 localhost systemd[1]: 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: Failed to open /run/systemd/transient/33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d.service: No such file or directory
Dec 5 04:10:25 localhost podman[106956]: 2025-12-05 09:10:25.906010462 +0000 UTC m=+0.056970267 container cleanup 33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'da9a0dc7b40588672419e3ce10063e21'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Dec 5 04:10:25 localhost podman[106956]: collectd
Dec 5 04:10:25 localhost systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'.
Dec 5 04:10:25 localhost systemd[1]: Stopped collectd container.
Dec 5 04:10:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44835 DF PROTO=TCP SPT=59004 DPT=9101 SEQ=3347044705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC37C9460000000001030307)
Dec 5 04:10:26 localhost python3.9[107062]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:10:26 localhost systemd[1]: Reloading.
Dec 5 04:10:26 localhost systemd-rc-local-generator[107089]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:10:26 localhost systemd-sysv-generator[107094]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:10:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:10:26 localhost systemd[1]: var-lib-containers-storage-overlay-b72a19e404f6c13e423229a7127c9516226f0251bfb1ca4d8f5c0d5e27c8d5a3-merged.mount: Deactivated successfully.
Dec 5 04:10:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-33eb3433a31472ea91d99f4d12c7f860396c914bb61f2fe2ec2ad87a251aad7d-userdata-shm.mount: Deactivated successfully.
Dec 5 04:10:26 localhost systemd[1]: Stopping iscsid container...
Dec 5 04:10:27 localhost systemd[1]: libpod-178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.scope: Deactivated successfully.
Dec 5 04:10:27 localhost systemd[1]: libpod-178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.scope: Consumed 1.142s CPU time.
Dec 5 04:10:27 localhost podman[107103]: 2025-12-05 09:10:27.075587035 +0000 UTC m=+0.066941183 container died 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, release=1761123044)
Dec 5 04:10:27 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.timer: Deactivated successfully.
Dec 5 04:10:27 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.
Dec 5 04:10:27 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Failed to open /run/systemd/transient/178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: No such file or directory
Dec 5 04:10:27 localhost podman[107103]: 2025-12-05 09:10:27.124031892 +0000 UTC m=+0.115386040 container cleanup 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:44:13Z, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid)
Dec 5 04:10:27 localhost podman[107103]: iscsid
Dec 5 04:10:27 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.timer: Failed to open /run/systemd/transient/178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.timer: No such file or directory
Dec 5 04:10:27 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Failed to open /run/systemd/transient/178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: No such file or directory
Dec 5 04:10:27 localhost podman[107118]: 2025-12-05 09:10:27.165997351 +0000 UTC m=+0.078460713 container cleanup 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, managed_by=tripleo_ansible, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, tcib_managed=true)
Dec 5 04:10:27 localhost systemd[1]: libpod-conmon-178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.scope: Deactivated successfully.
Dec 5 04:10:27 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.timer: Failed to open /run/systemd/transient/178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.timer: No such file or directory
Dec 5 04:10:27 localhost systemd[1]: 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: Failed to open /run/systemd/transient/178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f.service: No such file or directory
Dec 5 04:10:27 localhost podman[107130]: 2025-12-05 09:10:27.25089818 +0000 UTC m=+0.052984646 container cleanup 178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, version=17.1.12, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 04:10:27 localhost podman[107130]: iscsid
Dec 5 04:10:27 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully.
Dec 5 04:10:27 localhost systemd[1]: Stopped iscsid container.
Dec 5 04:10:27 localhost systemd[1]: var-lib-containers-storage-overlay-11caa1cf464c6437cbae9400b1833815a57da3d6bae8a8f7a0e3f04c2c78d35b-merged.mount: Deactivated successfully.
Dec 5 04:10:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-178ed7af5d41566651ee4bbedcd5b921627dd7e1404340a285486956dd589a8f-userdata-shm.mount: Deactivated successfully.
Dec 5 04:10:28 localhost python3.9[107234]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:10:28 localhost systemd[1]: Reloading.
Dec 5 04:10:28 localhost systemd-rc-local-generator[107259]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:10:28 localhost systemd-sysv-generator[107263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:10:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:10:28 localhost systemd[1]: Stopping logrotate_crond container...
Dec 5 04:10:28 localhost systemd[1]: libpod-808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.scope: Deactivated successfully.
Dec 5 04:10:28 localhost systemd[1]: libpod-808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.scope: Consumed 1.062s CPU time.
Dec 5 04:10:28 localhost podman[107275]: 2025-12-05 09:10:28.609561947 +0000 UTC m=+0.080377801 container died 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z)
Dec 5 04:10:28 localhost systemd[1]: tmp-crun.XeKIxX.mount: Deactivated successfully.
Dec 5 04:10:28 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.timer: Deactivated successfully.
Dec 5 04:10:28 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.
Dec 5 04:10:28 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Failed to open /run/systemd/transient/808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: No such file or directory
Dec 5 04:10:28 localhost podman[107275]: 2025-12-05 09:10:28.670632479 +0000 UTC m=+0.141448323 container cleanup 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, architecture=x86_64, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, vcs-type=git, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 5 04:10:28 localhost podman[107275]: logrotate_crond
Dec 5 04:10:28 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.timer: Failed to open /run/systemd/transient/808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.timer: No such file or directory
Dec 5 04:10:28 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Failed to open /run/systemd/transient/808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: No such file or directory
Dec 5 04:10:28 localhost podman[107287]: 2025-12-05 09:10:28.695839778 +0000 UTC m=+0.079661430 container cleanup 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, container_name=logrotate_crond, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git)
Dec 5 04:10:28 localhost systemd[1]: libpod-conmon-808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.scope: Deactivated successfully.
Dec 5 04:10:28 localhost podman[107318]: error opening file `/run/crun/808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db/status`: No such file or directory
Dec 5 04:10:28 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.timer: Failed to open /run/systemd/transient/808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.timer: No such file or directory
Dec 5 04:10:28 localhost systemd[1]: 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: Failed to open /run/systemd/transient/808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db.service: No such file or directory
Dec 5 04:10:28 localhost podman[107306]: 2025-12-05 09:10:28.808466572 +0000 UTC m=+0.079615558 container cleanup 808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 5 04:10:28 localhost podman[107306]: logrotate_crond
Dec 5 04:10:28 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully.
Dec 5 04:10:28 localhost systemd[1]: Stopped logrotate_crond container.
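Annotation: note that three distinct podman processes (107275, 107287, 107306) each ran "container cleanup" for the same container, and crun then complained that /run/crun/<id>/status was already gone. This looks like racing cleanup passes, where the losers find the runtime state already deleted. A hedged sketch for spotting such races in a saved copy of this log (file name illustrative):

    import re
    from collections import Counter

    # Count "container cleanup" events per container ID; a count above one
    # means several podman processes raced to clean the same container.
    CLEANUP = re.compile(r"podman\[\d+\]: .*container cleanup ([0-9a-f]{64})")
    counts = Counter()
    with open("messages") as log:          # file name is illustrative
        for line in log:
            m = CLEANUP.search(line)
            if m:
                counts[m.group(1)[:12]] += 1
    for cid, n in counts.most_common():
        print(cid, n)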
Dec 5 04:10:28 localhost systemd[1]: var-lib-containers-storage-overlay-61edff4e636e5b27fc65056d8e3af9182b499697a247cb1b23f64e881b58c13e-merged.mount: Deactivated successfully.
Dec 5 04:10:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-808856e669cf139cb684a2dd0840b39dec5198b983c7c75b055bc4e9e34e99db-userdata-shm.mount: Deactivated successfully.
Dec 5 04:10:29 localhost python3.9[107411]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:10:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55565 DF PROTO=TCP SPT=56340 DPT=9100 SEQ=3652089529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC37D6460000000001030307)
Dec 5 04:10:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.
Dec 5 04:10:30 localhost podman[107413]: 2025-12-05 09:10:30.197612639 +0000 UTC m=+0.082514167 container health_status 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 5 04:10:30 localhost podman[107413]: 2025-12-05 09:10:30.248044646 +0000 UTC m=+0.132946214 container exec_died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Dec 5 04:10:30 localhost podman[107413]: unhealthy
Dec 5 04:10:30 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:10:30 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed with result 'exit-code'.
Dec 5 04:10:30 localhost systemd[1]: Reloading.
Dec 5 04:10:30 localhost systemd-rc-local-generator[107458]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:10:30 localhost systemd-sysv-generator[107462]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:10:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:10:30 localhost systemd[1]: Stopping metrics_qdr container...
Dec 5 04:10:31 localhost kernel: qdrouterd[54041]: segfault at 0 ip 00007f41dedb17cb sp 00007fff692d1610 error 4 in libc.so.6[7f41ded4e000+175000]
Dec 5 04:10:31 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9
Dec 5 04:10:31 localhost systemd[1]: Created slice Slice /system/systemd-coredump.
Dec 5 04:10:31 localhost systemd[1]: Started Process Core Dump (PID 107486/UID 0).
Dec 5 04:10:31 localhost systemd-coredump[107487]: Resource limits disable core dumping for process 54041 (qdrouterd).
Dec 5 04:10:31 localhost systemd-coredump[107487]: Process 54041 (qdrouterd) of user 42465 dumped core.
Dec 5 04:10:31 localhost systemd[1]: systemd-coredump@0-107486-0.service: Deactivated successfully.
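Annotation: the qdrouterd crash line above can be decoded from the kernel's page-fault report. "segfault at 0" is the faulting address (a NULL dereference), "ip" the instruction pointer, and "error 4" a bitmask (bit 0: protection fault vs. not-present page, bit 1: write vs. read, bit 2: user vs. kernel mode), so error 4 is a user-mode read of an unmapped page; the bracketed "libc.so.6[7f41ded4e000+175000]" gives the mapping base and size, placing the fault 0x637cb bytes into libc. A small check of that arithmetic:

    # x86 page-fault error-code bits: 1 = protection fault (else page not
    # present), 2 = write (else read), 4 = user mode (else kernel mode).
    def decode(code):
        return ", ".join([
            "protection fault" if code & 1 else "page not present",
            "write" if code & 2 else "read",
            "user mode" if code & 4 else "kernel mode",
        ])

    ip, base, size = 0x7F41DEDB17CB, 0x7F41DED4E000, 0x175000
    assert base <= ip < base + size     # the fault lies inside libc's mapping
    print(decode(4))                    # -> page not present, read, user mode
    print(hex(ip - base))               # -> 0x637cb, offset into libc.so.6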
Dec 5 04:10:31 localhost podman[107474]: 2025-12-05 09:10:31.196825086 +0000 UTC m=+0.215267565 container died a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, distribution-scope=public, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 5 04:10:31 localhost systemd[1]: libpod-a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.scope: Deactivated successfully.
Dec 5 04:10:31 localhost systemd[1]: libpod-a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.scope: Consumed 28.725s CPU time.
Dec 5 04:10:31 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.timer: Deactivated successfully.
Dec 5 04:10:31 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.
Dec 5 04:10:31 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Failed to open /run/systemd/transient/a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: No such file or directory
Dec 5 04:10:31 localhost systemd[1]: tmp-crun.GZC2ow.mount: Deactivated successfully.
Dec 5 04:10:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668-userdata-shm.mount: Deactivated successfully.
Dec 5 04:10:31 localhost systemd[1]: var-lib-containers-storage-overlay-0a6ced32e5cb1e91cad73934204d8f5cbbf79aafab7ac712dfe07034e18c0d6e-merged.mount: Deactivated successfully.
Dec 5 04:10:31 localhost podman[107474]: 2025-12-05 09:10:31.257539628 +0000 UTC m=+0.275982077 container cleanup a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 5 04:10:31 localhost podman[107474]: metrics_qdr
Dec 5 04:10:31 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.timer: Failed to open /run/systemd/transient/a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.timer: No such file or directory
Dec 5 04:10:31 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Failed to open /run/systemd/transient/a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: No such file or directory
Dec 5 04:10:31 localhost podman[107491]: 2025-12-05 09:10:31.270593925 +0000 UTC m=+0.061555118 container cleanup a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, config_id=tripleo_step1, version=17.1.12, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 5 04:10:31 localhost systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a
Dec 5 04:10:31 localhost systemd[1]: libpod-conmon-a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.scope: Deactivated successfully.
Dec 5 04:10:31 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.timer: Failed to open /run/systemd/transient/a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.timer: No such file or directory
Dec 5 04:10:31 localhost systemd[1]: a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: Failed to open /run/systemd/transient/a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668.service: No such file or directory
Dec 5 04:10:31 localhost podman[107504]: 2025-12-05 09:10:31.348365287 +0000 UTC m=+0.052449301 container cleanup a4ba04db40585851a2374c649db2a6b7acfa95fe387073134bec00fcacab4668 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ec38952662567c94fcd33f4598790a0c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1)
Dec 5 04:10:31 localhost podman[107504]: metrics_qdr
Dec 5 04:10:31 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'.
Dec 5 04:10:31 localhost systemd[1]: Stopped metrics_qdr container.
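Annotation: tripleo_metrics_qdr.service reports "status=139/n/a" above. 139 follows the common 128 + signal-number convention for a process killed by a signal, i.e. signal 11 (SIGSEGV), which matches the qdrouterd segfault and core dump logged just before:

    import signal

    status = 139                          # from "code=exited, status=139/n/a"
    sig = status - 128                    # convention: 128 + signal number
    print(sig, signal.Signals(sig).name)  # -> 11 SIGSEGV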
Dec 5 04:10:32 localhost python3.9[107611]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:10:33 localhost python3.9[107704]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:10:34 localhost python3.9[107797]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:10:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20784 DF PROTO=TCP SPT=43656 DPT=9105 SEQ=316028705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC37E7C60000000001030307)
Dec 5 04:10:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.
Dec 5 04:10:34 localhost systemd[1]: tmp-crun.Cvfzmf.mount: Deactivated successfully.
Dec 5 04:10:34 localhost podman[107891]: 2025-12-05 09:10:34.773575346 +0000 UTC m=+0.088572632 container health_status 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, tcib_managed=true, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Dec 5 04:10:35 localhost python3.9[107890]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:10:35 localhost systemd[1]: Reloading.
Dec 5 04:10:35 localhost podman[107891]: 2025-12-05 09:10:35.147821486 +0000 UTC m=+0.462818622 container exec_died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, release=1761123044)
Dec 5 04:10:35 localhost systemd-rc-local-generator[107943]: /etc/rc.d/rc.local is not marked executable, skipping.
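Annotation: the ansible-ansible.builtin.systemd_service entries record the parameters the module was invoked with; with enabled=False and state=stopped the net effect is to disable and then stop the named unit. A rough shell-out equivalent (illustrative only; the module talks to systemd directly rather than via systemctl, and the unit name is taken from the log above):

    import subprocess

    # Approximate equivalent of ansible.builtin.systemd_service with
    # enabled=False, state=stopped for one of the units seen above.
    unit = "tripleo_nova_compute.service"
    subprocess.run(["systemctl", "disable", unit], check=True)
    subprocess.run(["systemctl", "stop", unit], check=True)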
Dec 5 04:10:35 localhost systemd-sysv-generator[107946]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:10:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:10:35 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Deactivated successfully.
Dec 5 04:10:35 localhost systemd[1]: Stopping nova_compute container...
Dec 5 04:10:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27886 DF PROTO=TCP SPT=49610 DPT=9882 SEQ=3840583266 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC37EC460000000001030307)
Dec 5 04:10:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.
Dec 5 04:10:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.
Dec 5 04:10:37 localhost podman[107968]: 2025-12-05 09:10:37.67982914 +0000 UTC m=+0.072182962 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, io.openshift.expose-services=, tcib_managed=true)
Dec 5 04:10:37 localhost podman[107968]: 2025-12-05 09:10:37.697125908 +0000 UTC m=+0.089479750 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com)
Dec 5 04:10:37 localhost podman[107968]: unhealthy
Dec 5 04:10:37 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:10:37 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'.
Dec 5 04:10:37 localhost podman[107969]: 2025-12-05 09:10:37.747479203 +0000 UTC m=+0.133667576 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc.)
Dec 5 04:10:37 localhost podman[107969]: 2025-12-05 09:10:37.76244969 +0000 UTC m=+0.148638043 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public)
Dec 5 04:10:37 localhost podman[107969]: unhealthy
Dec 5 04:10:37 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:10:37 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'.
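Annotation: health_status=unhealthy followed by "Main process exited, code=exited, status=1/FAILURE" is the expected pairing here. The transient unit runs `podman healthcheck run <id>`, which executes the container's configured check command and exits 0 when healthy and non-zero otherwise, so a failing check surfaces as a failed oneshot unit. Sketch (container ID prefix taken from the log above):

    import subprocess

    # `podman healthcheck run` exits 0 if the check passes, non-zero if it
    # fails -- hence the "status=1/FAILURE" on the transient units above.
    cid = "f295b469f636"  # any unique prefix of the container ID works
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    print("healthy" if rc == 0 else "unhealthy")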
Dec 5 04:10:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23428 DF PROTO=TCP SPT=57492 DPT=9102 SEQ=3513905194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3802D60000000001030307)
Dec 5 04:10:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23429 DF PROTO=TCP SPT=57492 DPT=9102 SEQ=3513905194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3806C50000000001030307)
Dec 5 04:10:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51753 DF PROTO=TCP SPT=58462 DPT=9100 SEQ=2659744330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC380EB70000000001030307)
Dec 5 04:10:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51755 DF PROTO=TCP SPT=58462 DPT=9100 SEQ=2659744330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC381AC50000000001030307)
Dec 5 04:10:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17573 DF PROTO=TCP SPT=57024 DPT=9101 SEQ=2866839696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3826C50000000001030307)
Dec 5 04:10:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61686 DF PROTO=TCP SPT=41360 DPT=9101 SEQ=1228740090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3832450000000001030307)
Dec 5 04:10:55 localhost systemd[1]: libpod-34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.scope: Deactivated successfully.
Dec 5 04:10:55 localhost systemd[1]: libpod-34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.scope: Consumed 37.521s CPU time.
Dec 5 04:10:55 localhost systemd[1]: session-c11.scope: Deactivated successfully.
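Annotation: the kernel DROPPING lines are netfilter LOG-target output (log prefix "DROPPING: "). The repeated unanswered SYNs from 192.168.122.10 to destination ports 9100-9105 and 9882, which are conventionally used by Prometheus exporters, are consistent with a monitoring scraper retrying while this node's firewall drops the probes during shutdown. The KEY=VALUE payload parses mechanically; a sketch using one of the lines above (trailing OPT bytes omitted for brevity):

    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 "
            "DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51753 DF "
            "PROTO=TCP SPT=58462 DPT=9100 SEQ=2659744330 ACK=0 WINDOW=32640 "
            "RES=0x00 SYN URGP=0")
    # KEY=VALUE tokens become fields; bare tokens (DF, SYN) are TCP/IP flags.
    fields = dict(tok.split("=", 1) for tok in line.split() if "=" in tok)
    flags = {tok for tok in line.split() if "=" not in tok} - {"DROPPING:"}
    print(fields["SRC"], "->", fields["DST"] + ":" + fields["DPT"],
          fields["PROTO"], sorted(flags))  # ... TCP ['DF', 'SYN']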
Dec 5 04:10:55 localhost podman[107954]: 2025-12-05 09:10:55.410218459 +0000 UTC m=+19.982868620 container died 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=)
Dec 5 04:10:55 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.timer: Deactivated successfully.
Dec 5 04:10:55 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.
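Annotation: "Stopping nova_compute container..." was logged at 04:10:35 and the container died at 04:10:55, and the stopping podman process reports m=+19.98 (seconds since that process started), so nova_compute took roughly 20 s to go down. Whether it finally exited on SIGTERM or hit a stop timeout and was killed cannot be read from these lines. The arithmetic:

    from datetime import datetime

    # Gap between "Stopping nova_compute container..." and "container died".
    t0 = datetime.strptime("04:10:35", "%H:%M:%S")
    t1 = datetime.strptime("04:10:55", "%H:%M:%S")
    print((t1 - t0).total_seconds())   # -> 20.0, matching m=+19.98 above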
Dec 5 04:10:55 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed to open /run/systemd/transient/34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: No such file or directory Dec 5 04:10:55 localhost systemd[1]: var-lib-containers-storage-overlay-bcb409f27dad615b0d7144028021b8876255f5931d4ff2baaeaea67c9909ae2e-merged.mount: Deactivated successfully. Dec 5 04:10:55 localhost podman[107954]: 2025-12-05 09:10:55.483633969 +0000 UTC m=+20.056284100 container cleanup 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 04:10:55 localhost podman[107954]: nova_compute Dec 5 04:10:55 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.timer: Failed to open /run/systemd/transient/34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.timer: No such file or directory Dec 5 04:10:55 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed to open /run/systemd/transient/34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: No such file or directory Dec 5 04:10:55 localhost podman[108084]: 2025-12-05 09:10:55.498047078 +0000 UTC m=+0.078637379 container cleanup 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, batch=17.1_20251118.1, container_name=nova_compute, name=rhosp17/openstack-nova-compute) Dec 5 04:10:55 localhost systemd[1]: libpod-conmon-34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.scope: Deactivated successfully. Dec 5 04:10:55 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.timer: Failed to open /run/systemd/transient/34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.timer: No such file or directory Dec 5 04:10:55 localhost systemd[1]: 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: Failed to open /run/systemd/transient/34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c.service: No such file or directory Dec 5 04:10:55 localhost podman[108097]: 2025-12-05 09:10:55.597136399 +0000 UTC m=+0.071465030 container cleanup 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, tcib_managed=true) Dec 5 04:10:55 localhost podman[108097]: nova_compute Dec 5 04:10:55 localhost systemd[1]: tripleo_nova_compute.service: Deactivated successfully. Dec 5 04:10:55 localhost systemd[1]: Stopped nova_compute container. Dec 5 04:10:56 localhost python3.9[108201]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:10:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23432 DF PROTO=TCP SPT=57492 DPT=9102 SEQ=3513905194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC383E450000000001030307) Dec 5 04:10:56 localhost systemd[1]: Reloading. Dec 5 04:10:56 localhost systemd-rc-local-generator[108229]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:10:56 localhost systemd-sysv-generator[108233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:10:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:10:56 localhost systemd[1]: Stopping nova_migration_target container... Dec 5 04:10:56 localhost systemd[1]: libpod-94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.scope: Deactivated successfully. Dec 5 04:10:56 localhost systemd[1]: libpod-94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.scope: Consumed 33.994s CPU time. 
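Each stop in this sequence is driven by an ansible.builtin.systemd_service task like the one logged above, with state=stopped and enabled=False and the remaining parameters left at module defaults. Outside Ansible that combination boils down to two systemctl calls; the wrapper below is an illustrative sketch, not the module's implementation:

    import subprocess

    def disable_and_stop(unit: str) -> None:
        # state=stopped  -> systemctl stop
        # enabled=False  -> systemctl disable
        subprocess.run(['systemctl', 'stop', unit], check=True)
        subprocess.run(['systemctl', 'disable', unit], check=True)

    disable_and_stop('tripleo_nova_migration_target.service')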
Dec 5 04:10:56 localhost podman[108242]: 2025-12-05 09:10:56.917811259 +0000 UTC m=+0.081368933 container died 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-nova-compute-container) Dec 5 04:10:56 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.timer: Deactivated successfully. Dec 5 04:10:56 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2. Dec 5 04:10:56 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Failed to open /run/systemd/transient/94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: No such file or directory Dec 5 04:10:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2-userdata-shm.mount: Deactivated successfully. Dec 5 04:10:56 localhost systemd[1]: var-lib-containers-storage-overlay-a1f7fa99b1a308b31b9c9168cd0ceb32b0fd11de133cd67585d173dee9412861-merged.mount: Deactivated successfully. 
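The repeated "Failed to open /run/systemd/transient/<id>.service: No such file or directory" messages read as ordering noise: the transient unit and timer that podman created for the healthcheck are removed as the container dies, and later cleanup passes try to reopen files that are already gone. If in doubt, listing what actually remains under that directory settles it; this snippet is illustrative only:

    from pathlib import Path

    transient = Path('/run/systemd/transient')
    if transient.exists():
        for unit in sorted(transient.iterdir()):
            print(unit.name)
    else:
        print('no transient units directory')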
Dec 5 04:10:56 localhost podman[108242]: 2025-12-05 09:10:56.97460153 +0000 UTC m=+0.138159204 container cleanup 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 5 04:10:56 localhost podman[108242]: nova_migration_target Dec 5 04:10:57 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.timer: Failed to open /run/systemd/transient/94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.timer: No such file or directory Dec 5 04:10:57 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Failed to open /run/systemd/transient/94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: No such file or directory Dec 5 04:10:57 localhost podman[108256]: 2025-12-05 09:10:57.018375274 +0000 UTC m=+0.085202398 container cleanup 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z) Dec 5 04:10:57 localhost systemd[1]: libpod-conmon-94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.scope: Deactivated successfully. 
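One container teardown produces events from several podman processes (the nova_compute teardown above was logged by podman PIDs 107954, 108084 and 108097) plus systemd scope and mount messages, all keyed by the same 64-hex container ID. Grouping journal lines on that ID makes each container's sequence easy to read; the helper is a sketch, not an existing tool:

    import re
    from collections import defaultdict

    CID = re.compile(r'\b[0-9a-f]{64}\b')

    def group_by_container(lines):
        groups = defaultdict(list)
        for line in lines:
            m = CID.search(line)
            if m:
                # Index by the short 12-char ID, as podman displays it.
                groups[m.group(0)[:12]].append(line)
        return groups

    # usage sketch:
    # with open('/var/log/messages') as f:
    #     for cid, entries in group_by_container(f).items():
    #         print(cid, len(entries), 'entries')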
Dec 5 04:10:57 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.timer: Failed to open /run/systemd/transient/94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.timer: No such file or directory Dec 5 04:10:57 localhost systemd[1]: 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: Failed to open /run/systemd/transient/94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2.service: No such file or directory Dec 5 04:10:57 localhost podman[108272]: 2025-12-05 09:10:57.137527578 +0000 UTC m=+0.080156136 container cleanup 94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 5 04:10:57 localhost podman[108272]: nova_migration_target Dec 5 04:10:57 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully. Dec 5 04:10:57 localhost systemd[1]: Stopped nova_migration_target container. 
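The playbook is walking the node's services one by one (tripleo_nova_compute.service, then tripleo_nova_migration_target.service above, with more units to follow). To see what is still loaded at any point, systemctl accepts a glob pattern; the loop below is an illustrative sketch around that real feature:

    import subprocess

    out = subprocess.run(
        ['systemctl', 'list-units', '--all', '--no-legend', 'tripleo_*'],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        tokens = line.lstrip('\u25cf ').split()  # drop the failed-unit bullet
        if tokens:
            print(tokens[0])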
Dec 5 04:10:57 localhost python3.9[108377]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:10:58 localhost systemd[1]: Reloading. Dec 5 04:10:59 localhost systemd-sysv-generator[108408]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:10:59 localhost systemd-rc-local-generator[108402]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:10:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:10:59 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 5 04:10:59 localhost systemd[1]: Stopping nova_virtlogd_wrapper container... Dec 5 04:10:59 localhost recover_tripleo_nova_virtqemud[108418]: 61294 Dec 5 04:10:59 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 5 04:10:59 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 5 04:10:59 localhost systemd[1]: libpod-720afbd97be567d1b81d1287cf23498eea18dcdf104e045f64ee4ffbe4deb9c3.scope: Deactivated successfully. Dec 5 04:10:59 localhost podman[108419]: 2025-12-05 09:10:59.416168705 +0000 UTC m=+0.076688179 container died 720afbd97be567d1b81d1287cf23498eea18dcdf104e045f64ee4ffbe4deb9c3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 5 04:10:59 localhost systemd[1]: tmp-crun.ezsVMP.mount: Deactivated successfully. Dec 5 04:10:59 localhost podman[108419]: 2025-12-05 09:10:59.461143766 +0000 UTC m=+0.121663220 container cleanup 720afbd97be567d1b81d1287cf23498eea18dcdf104e045f64ee4ffbe4deb9c3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtlogd_wrapper, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 5 04:10:59 localhost podman[108419]: nova_virtlogd_wrapper Dec 5 04:10:59 localhost podman[108432]: 2025-12-05 09:10:59.489946755 +0000 UTC m=+0.066106527 container cleanup 720afbd97be567d1b81d1287cf23498eea18dcdf104e045f64ee4ffbe4deb9c3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z) Dec 5 04:10:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51757 DF PROTO=TCP SPT=58462 DPT=9100 SEQ=2659744330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC384A460000000001030307) Dec 5 04:11:00 localhost systemd[1]: var-lib-containers-storage-overlay-a5a1ffd2a50d56761b34f05757b6aa62c392850ba0c772dedfb7a28c75dd0104-merged.mount: Deactivated successfully. Dec 5 04:11:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-720afbd97be567d1b81d1287cf23498eea18dcdf104e045f64ee4ffbe4deb9c3-userdata-shm.mount: Deactivated successfully. Dec 5 04:11:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27113 DF PROTO=TCP SPT=53486 DPT=9105 SEQ=2973765778 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC385D050000000001030307) Dec 5 04:11:05 localhost systemd[1]: Stopping User Manager for UID 0... Dec 5 04:11:05 localhost systemd[83304]: Activating special unit Exit the Session... Dec 5 04:11:05 localhost systemd[83304]: Removed slice User Background Tasks Slice. Dec 5 04:11:05 localhost systemd[83304]: Stopped target Main User Target. Dec 5 04:11:05 localhost systemd[83304]: Stopped target Basic System. Dec 5 04:11:05 localhost systemd[83304]: Stopped target Paths. Dec 5 04:11:05 localhost systemd[83304]: Stopped target Sockets. Dec 5 04:11:05 localhost systemd[83304]: Stopped target Timers. Dec 5 04:11:05 localhost systemd[83304]: Stopped Daily Cleanup of User's Temporary Directories. Dec 5 04:11:05 localhost systemd[83304]: Closed D-Bus User Message Bus Socket. Dec 5 04:11:05 localhost systemd[83304]: Stopped Create User's Volatile Files and Directories. Dec 5 04:11:05 localhost systemd[83304]: Removed slice User Application Slice. Dec 5 04:11:05 localhost systemd[83304]: Reached target Shutdown. Dec 5 04:11:05 localhost systemd[83304]: Finished Exit the Session. Dec 5 04:11:05 localhost systemd[83304]: Reached target Exit the Session. Dec 5 04:11:05 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 5 04:11:05 localhost systemd[1]: Stopped User Manager for UID 0. Dec 5 04:11:05 localhost systemd[1]: user@0.service: Consumed 4.405s CPU time, no IO. 
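systemd prints a "Consumed N CPU time" accounting line as each scope and service above goes down (37.521s for the nova_compute scope, 33.994s for nova_migration_target, 4.405s for user@0.service, 5.394s for user-0.slice). Totalling those figures from a saved log is straightforward; the regex and helper are illustrative:

    import re

    CONSUMED = re.compile(r'(\S+): Consumed ([\d.]+)s CPU time')

    def cpu_seconds(lines):
        totals = {}
        for line in lines:
            m = CONSUMED.search(line)
            if m:
                totals[m.group(1)] = float(m.group(2))
        return totals

    print(cpu_seconds(['systemd[1]: user-0.slice: Consumed 5.394s CPU time.']))
    # {'user-0.slice': 5.394}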
Dec 5 04:11:05 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 5 04:11:05 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 5 04:11:05 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Dec 5 04:11:05 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 5 04:11:05 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 5 04:11:05 localhost systemd[1]: user-0.slice: Consumed 5.394s CPU time. Dec 5 04:11:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:11:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:11:07 localhost podman[108451]: 2025-12-05 09:11:07.951650033 +0000 UTC m=+0.088759656 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 5 04:11:07 localhost podman[108451]: 2025-12-05 09:11:07.969638362 +0000 UTC m=+0.106747985 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 5 04:11:07 localhost podman[108451]: unhealthy Dec 5 04:11:07 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 
04:11:07 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:11:08 localhost podman[108452]: 2025-12-05 09:11:08.059716839 +0000 UTC m=+0.193097759 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z) Dec 5 04:11:08 localhost podman[108452]: 2025-12-05 09:11:08.075737578 +0000 UTC m=+0.209118448 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, container_name=ovn_controller, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12) Dec 5 04:11:08 localhost podman[108452]: unhealthy Dec 5 04:11:08 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:11:08 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:11:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12815 DF PROTO=TCP SPT=33852 DPT=9102 SEQ=2900210100 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3878060000000001030307) Dec 5 04:11:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12816 DF PROTO=TCP SPT=33852 DPT=9102 SEQ=2900210100 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC387C050000000001030307) Dec 5 04:11:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23433 DF PROTO=TCP SPT=57492 DPT=9102 SEQ=3513905194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC387E4B0000000001030307) Dec 5 04:11:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58039 DF PROTO=TCP SPT=36564 DPT=9100 SEQ=1502327887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3883E60000000001030307) Dec 5 04:11:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58041 DF PROTO=TCP SPT=36564 DPT=9100 SEQ=1502327887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3890060000000001030307) Dec 5 04:11:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59175 DF PROTO=TCP SPT=54414 DPT=9101 SEQ=1259221727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC389BC50000000001030307) Dec 5 04:11:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44838 DF PROTO=TCP SPT=59004 DPT=9101 
SEQ=3347044705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC38A8450000000001030307) Dec 5 04:11:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59177 DF PROTO=TCP SPT=54414 DPT=9101 SEQ=1259221727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC38B3860000000001030307) Dec 5 04:11:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58043 DF PROTO=TCP SPT=36564 DPT=9100 SEQ=1502327887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC38C0460000000001030307) Dec 5 04:11:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11786 DF PROTO=TCP SPT=37508 DPT=9105 SEQ=3076291533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC38D2050000000001030307) Dec 5 04:11:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46122 DF PROTO=TCP SPT=54140 DPT=9882 SEQ=2863034231 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC38D6450000000001030307) Dec 5 04:11:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:11:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:11:38 localhost systemd[1]: tmp-crun.uuI5E2.mount: Deactivated successfully. 
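In the drop entries above, the same SEQ value recurs for a given source port (e.g. SEQ=1502327887 from SPT=36564 at 04:11:14, :17 and :29), which is the signature of TCP SYN retransmission: the sender never receives a SYN-ACK or RST, so it retries the same segment on an exponential backoff. A small detector for that pattern, with the same illustrative caveats as before:

    import re
    from collections import Counter

    KEY = re.compile(r'SRC=(\S+) .*?SPT=(\d+) DPT=(\d+) SEQ=(\d+)')

    def retransmissions(lines, threshold=2):
        # Count each (src, sport, dport, seq) tuple; repeats are retries.
        seen = Counter(m.groups() for line in lines if (m := KEY.search(line)))
        return {key: n for key, n in seen.items() if n >= threshold}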
Dec 5 04:11:38 localhost podman[108490]: 2025-12-05 09:11:38.210551565 +0000 UTC m=+0.093073280 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 5 04:11:38 localhost systemd[1]: tmp-crun.8OuI04.mount: Deactivated successfully. 
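health_status=unhealthy for ovn_controller during a teardown like this is expected: its probe ('/openstack/healthcheck 6642', per the config_data above) fails once the peers it checks are gone, and the transient unit that ran the probe then exits with status 1. The same probe can be fired by hand; "podman healthcheck run" is a real subcommand, while the wrapper is an illustrative sketch:

    import subprocess

    def health(container: str) -> str:
        r = subprocess.run(['podman', 'healthcheck', 'run', container],
                           capture_output=True, text=True)
        return 'healthy' if r.returncode == 0 else 'unhealthy'

    print(health('ovn_controller'))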
Dec 5 04:11:38 localhost podman[108490]: 2025-12-05 09:11:38.254219236 +0000 UTC m=+0.136740951 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 04:11:38 localhost podman[108490]: unhealthy Dec 5 04:11:38 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:11:38 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. 
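Each "Started /usr/bin/podman healthcheck run <id>" line is a transient systemd service that executes the container's configured healthcheck; for ovn_controller the test is /openstack/healthcheck 6642, per the config_data label above. The check exits non-zero, podman prints "unhealthy", and systemd records the transient unit's failure, which is what the status=1/FAILURE and "Failed with result 'exit-code'" lines show. The same check can be re-run by hand with the podman subcommand already visible in the log; a sketch, with the container id copied from the events above:

    import subprocess

    def healthcheck(container_id: str) -> bool:
        """Re-run a container's configured healthcheck, as the transient
        units in this log do; exit code 0 means healthy, non-zero unhealthy."""
        proc = subprocess.run(
            ["podman", "healthcheck", "run", container_id],
            capture_output=True, text=True,
        )
        return proc.returncode == 0

    # The ovn_controller container id from the log above.
    cid = "f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd"
    print("healthy" if healthcheck(cid) else "unhealthy")

This obviously has to run on the affected host, with podman in PATH and the container still present.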
Dec 5 04:11:38 localhost podman[108489]: 2025-12-05 09:11:38.255208677 +0000 UTC m=+0.137691431 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com) Dec 5 04:11:38 localhost podman[108489]: 2025-12-05 09:11:38.335409471 +0000 UTC m=+0.217892235 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, distribution-scope=public, 
managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 04:11:38 localhost podman[108489]: unhealthy Dec 5 04:11:38 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:11:38 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
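The parenthesised blob podman attaches to each event is the container's label set, and the config_data value in it is a Python dict literal (note the single quotes and bare True), not JSON, so ast.literal_eval is the natural way to pick it apart once extracted. A sketch on a trimmed copy of the ovn_metadata_agent labels above; pulling the full nested dict out of a raw line needs a brace-matching pass that is omitted here:

    import ast

    # A trimmed copy of the config_data label from the event above.
    config_data = (
        "{'cgroupns': 'host', 'net': 'host', 'pid': 'host', 'privileged': True, "
        "'healthcheck': {'test': '/openstack/healthcheck'}, 'restart': 'always'}"
    )

    cfg = ast.literal_eval(config_data)   # Python literals, not JSON
    print(cfg["healthcheck"]["test"])     # -> /openstack/healthcheck
    print(cfg["privileged"])              # -> True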
Dec 5 04:11:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21315 DF PROTO=TCP SPT=50132 DPT=9102 SEQ=662626087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC38ED370000000001030307) Dec 5 04:11:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21316 DF PROTO=TCP SPT=50132 DPT=9102 SEQ=662626087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC38F1450000000001030307) Dec 5 04:11:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33564 DF PROTO=TCP SPT=36008 DPT=9100 SEQ=825475741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC38F9170000000001030307) Dec 5 04:11:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33566 DF PROTO=TCP SPT=36008 DPT=9100 SEQ=825475741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3905050000000001030307) Dec 5 04:11:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26952 DF PROTO=TCP SPT=50950 DPT=9101 SEQ=3347825489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3911060000000001030307) Dec 5 04:11:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17578 DF PROTO=TCP SPT=57024 DPT=9101 SEQ=2866839696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC391C450000000001030307) Dec 5 04:11:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26954 DF PROTO=TCP SPT=50950 DPT=9101 SEQ=3347825489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3928C50000000001030307) Dec 5 04:11:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33568 DF PROTO=TCP SPT=36008 DPT=9100 SEQ=825475741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3934460000000001030307) Dec 5 04:12:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49779 DF PROTO=TCP SPT=47112 DPT=9105 SEQ=2975454108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3947460000000001030307) Dec 5 04:12:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 04:12:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 4751 writes, 21K keys, 4751 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4751 writes, 573 syncs, 8.29 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 
writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 5 04:12:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:12:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:12:08 localhost podman[108602]: 2025-12-05 09:12:08.446652189 +0000 UTC m=+0.083670133 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc.) 
Dec 5 04:12:08 localhost podman[108602]: 2025-12-05 09:12:08.466611127 +0000 UTC m=+0.103629121 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 04:12:08 localhost podman[108603]: 2025-12-05 09:12:08.501111159 +0000 UTC m=+0.132628435 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1) Dec 5 04:12:08 localhost podman[108603]: 2025-12-05 09:12:08.519577262 +0000 UTC m=+0.151094508 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 5 04:12:08 localhost podman[108603]: unhealthy
Dec 5 04:12:08 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:12:08 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'.
Dec 5 04:12:08 localhost podman[108602]: unhealthy
Dec 5 04:12:08 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:12:08 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'.
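The same two healthchecks fail again here at 04:12:08, thirty seconds after the 04:11:38 round, which is consistent with podman's default 30 s healthcheck interval. The transient <container-id>.service units accumulate a failure history in the journal; a sketch that counts it via journalctl's JSON output, assuming systemd-journald holds these records:

    import json
    import subprocess

    unit = "f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service"

    # One JSON object per line, one journal record each.
    out = subprocess.run(
        ["journalctl", "-u", unit, "-o", "json", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout

    failures = [
        json.loads(line)["MESSAGE"]
        for line in out.splitlines()
        if "Failed with result" in line
    ]
    print(f"{len(failures)} recorded failures")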
Dec 5 04:12:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 04:12:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.2 total, 600.0 interval#012Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5843 writes, 832 syncs, 7.02 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 04:12:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14665 DF PROTO=TCP SPT=59588 DPT=9102 SEQ=554178868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3962670000000001030307)
Dec 5 04:12:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14666 DF PROTO=TCP SPT=59588 DPT=9102 SEQ=554178868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3966860000000001030307)
Dec 5 04:12:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49780 DF PROTO=TCP SPT=47112 DPT=9105 SEQ=2975454108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3968450000000001030307)
Dec 5 04:12:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29811 DF PROTO=TCP SPT=47336 DPT=9100 SEQ=3029733224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC396E4E0000000001030307)
Dec 5 04:12:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 5 04:12:15 localhost recover_tripleo_nova_virtqemud[108643]: 61294
Dec 5 04:12:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 5 04:12:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
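The RocksDB "DUMPING STATS" records from the ceph-osd processes look garbled only because rsyslog escapes embedded control characters as #NNN octal sequences to keep one record per line; #012 is octal for a newline. A small decoder restores the original multi-line stats table:

    import re

    # rsyslog's control-character escaping: '#' followed by three octal digits.
    OCTAL = re.compile(r"#([0-7]{3})")

    def unescape(record: str) -> str:
        """Turn rsyslog's #NNN octal escapes back into the original characters."""
        return OCTAL.sub(lambda m: chr(int(m.group(1), 8)), record)

    stats = ("** DB Stats **#012Uptime(secs): 4800.2 total, 600.0 interval"
             "#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent")
    print(unescape(stats))
    # ** DB Stats **
    # Uptime(secs): 4800.2 total, 600.0 interval
    # Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent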
Dec 5 04:12:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29813 DF PROTO=TCP SPT=47336 DPT=9100 SEQ=3029733224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC397A450000000001030307)
Dec 5 04:12:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3484 DF PROTO=TCP SPT=56902 DPT=9101 SEQ=909076727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3986450000000001030307)
Dec 5 04:12:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59180 DF PROTO=TCP SPT=54414 DPT=9101 SEQ=1259221727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3992450000000001030307)
Dec 5 04:12:23 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing.
Dec 5 04:12:23 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 60529 (conmon) with signal SIGKILL.
Dec 5 04:12:23 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL
Dec 5 04:12:23 localhost systemd[1]: libpod-conmon-720afbd97be567d1b81d1287cf23498eea18dcdf104e045f64ee4ffbe4deb9c3.scope: Deactivated successfully.
Dec 5 04:12:23 localhost podman[108656]: error opening file `/run/crun/720afbd97be567d1b81d1287cf23498eea18dcdf104e045f64ee4ffbe4deb9c3/status`: No such file or directory
Dec 5 04:12:23 localhost podman[108644]: 2025-12-05 09:12:23.687019384 +0000 UTC m=+0.070361337 container cleanup 720afbd97be567d1b81d1287cf23498eea18dcdf104e045f64ee4ffbe4deb9c3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro',
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtlogd_wrapper, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step3, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 5 04:12:23 localhost podman[108644]: nova_virtlogd_wrapper
Dec 5 04:12:23 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'.
Dec 5 04:12:23 localhost systemd[1]: Stopped nova_virtlogd_wrapper container.
Dec 5 04:12:24 localhost python3.9[108750]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:12:24 localhost systemd[1]: Reloading.
Dec 5 04:12:24 localhost systemd-sysv-generator[108782]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:12:24 localhost systemd-rc-local-generator[108779]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:12:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:12:24 localhost systemd[1]: Stopping nova_virtnodedevd container...
Dec 5 04:12:24 localhost systemd[1]: libpod-6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895.scope: Deactivated successfully.
Dec 5 04:12:24 localhost systemd[1]: libpod-6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895.scope: Consumed 1.544s CPU time.
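tripleo_nova_virtlogd_wrapper above shows systemd's two-stage stop: the unit sat in state 'stop-sigterm' until its stop timeout expired (systemd's default TimeoutStopSec is 90 s; the unit's actual setting isn't visible in this log), after which PID 60529 (conmon) was SIGKILLed and the unit failed with result 'timeout'. A sketch of that escalation protocol, not systemd's actual implementation:

    import os
    import signal
    import time

    def stop_with_timeout(pid: int, timeout: float = 90.0) -> str:
        """SIGTERM, wait up to `timeout` seconds, then SIGKILL - the same
        escalation the 'stop-sigterm' -> 'Killing ... with signal SIGKILL'
        lines above describe. The 90 s default mirrors TimeoutStopSec."""
        os.kill(pid, signal.SIGTERM)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                os.kill(pid, 0)           # signal 0: probe whether pid exists
            except ProcessLookupError:
                return "terminated"
            time.sleep(0.5)
        os.kill(pid, signal.SIGKILL)      # escalation, as in the log
        return "killed"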
Dec 5 04:12:24 localhost podman[108791]: 2025-12-05 09:12:24.919094841 +0000 UTC m=+0.080861007 container died 6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, release=1761123044, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:12:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895-userdata-shm.mount: Deactivated successfully. 
Dec 5 04:12:24 localhost podman[108791]: 2025-12-05 09:12:24.955208282 +0000 UTC m=+0.116974428 container cleanup 6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step3, version=17.1.12, container_name=nova_virtnodedevd, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 04:12:24 localhost podman[108791]: nova_virtnodedevd Dec 5 04:12:25 localhost podman[108805]: 2025-12-05 09:12:25.012956263 +0000 UTC m=+0.075435792 container cleanup 6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, container_name=nova_virtnodedevd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt) Dec 5 04:12:25 localhost systemd[1]: libpod-conmon-6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895.scope: Deactivated successfully. 
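The python3.9 "ansible-ansible.builtin.systemd_service Invoked with enabled=False ... state=stopped" records here are TripleO's Ansible run disabling and stopping each libvirt container unit in turn; the "Reloading." that follows each invocation is systemd re-reading unit files, which is also why the sysv-generator and rc.local warnings repeat every time. With those arguments the module amounts to roughly the following (unit names transcribed from the log; this is an approximation of the module's effect, not its code):

    import subprocess

    def disable_and_stop(unit: str) -> None:
        """Roughly what systemd_service with enabled=False, state=stopped does."""
        subprocess.run(["systemctl", "disable", unit], check=True)
        subprocess.run(["systemctl", "stop", unit], check=True)

    # Units taken down in sequence in this excerpt.
    for unit in [
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
    ]:
        disable_and_stop(unit)

The duplicate "container cleanup" events for the same container id, and the benign "error opening file `/run/crun/.../status`" message, come from podman's cleanup handlers racing once the container is already gone.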
Dec 5 04:12:25 localhost podman[108833]: error opening file `/run/crun/6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895/status`: No such file or directory Dec 5 04:12:25 localhost podman[108822]: 2025-12-05 09:12:25.124775062 +0000 UTC m=+0.070018706 container cleanup 6473d2cc921d785bfbdb366dc020be73bee7728c7318f35ddeab435e245a2895 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=nova_virtnodedevd, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, vcs-type=git, managed_by=tripleo_ansible) Dec 5 04:12:25 localhost podman[108822]: nova_virtnodedevd Dec 5 04:12:25 
localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully.
Dec 5 04:12:25 localhost systemd[1]: Stopped nova_virtnodedevd container.
Dec 5 04:12:25 localhost python3.9[108928]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:12:25 localhost systemd[1]: Reloading.
Dec 5 04:12:25 localhost systemd-rc-local-generator[108958]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:12:25 localhost systemd-sysv-generator[108961]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:12:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:12:26 localhost systemd[1]: var-lib-containers-storage-overlay-026dbc3517a98b7784621079f86e764989200b5b8b8b9147585f030d058790ca-merged.mount: Deactivated successfully.
Dec 5 04:12:26 localhost systemd[1]: Stopping nova_virtproxyd container...
Dec 5 04:12:26 localhost systemd[1]: libpod-0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8.scope: Deactivated successfully.
Dec 5 04:12:26 localhost podman[108969]: 2025-12-05 09:12:26.311714884 +0000 UTC m=+0.083858778 container died 0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, config_id=tripleo_step3, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_virtproxyd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro',
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Dec 5 04:12:26 localhost podman[108969]: 2025-12-05 09:12:26.354132737 +0000 UTC m=+0.126276631 container cleanup 0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=nova_virtproxyd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Dec 5 04:12:26 localhost podman[108969]: nova_virtproxyd Dec 5 04:12:26 localhost podman[108984]: 2025-12-05 09:12:26.401328326 +0000 UTC m=+0.069819550 container cleanup 0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, release=1761123044, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, container_name=nova_virtproxyd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, 
build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, architecture=x86_64) Dec 5 04:12:26 localhost systemd[1]: libpod-conmon-0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8.scope: Deactivated successfully. Dec 5 04:12:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3486 DF PROTO=TCP SPT=56902 DPT=9101 SEQ=909076727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC399E050000000001030307) Dec 5 04:12:26 localhost podman[109010]: error opening file `/run/crun/0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8/status`: No such file or directory Dec 5 04:12:26 localhost podman[108999]: 2025-12-05 09:12:26.498072406 +0000 UTC m=+0.064574320 container cleanup 0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, container_name=nova_virtproxyd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, url=https://www.redhat.com) Dec 5 04:12:26 localhost podman[108999]: nova_virtproxyd Dec 5 04:12:26 localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully. Dec 5 04:12:26 localhost systemd[1]: Stopped nova_virtproxyd container. Dec 5 04:12:27 localhost systemd[1]: var-lib-containers-storage-overlay-cde2cad4a2e03aee34df4c80ea750703b4af125de29c5aeaa81dd19781d8a4c4-merged.mount: Deactivated successfully. Dec 5 04:12:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0fe2409e86eb59314b34b89bf969efdfb7076e67a1d5e9c170aa617a013c9be8-userdata-shm.mount: Deactivated successfully. Dec 5 04:12:27 localhost python3.9[109103]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:12:27 localhost systemd[1]: Reloading. Dec 5 04:12:27 localhost systemd-rc-local-generator[109131]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:12:27 localhost systemd-sysv-generator[109135]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:12:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:12:27 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully. Dec 5 04:12:27 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m. Dec 5 04:12:27 localhost systemd[1]: Stopping nova_virtqemud container... Dec 5 04:12:27 localhost systemd[1]: libpod-86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479.scope: Deactivated successfully. Dec 5 04:12:27 localhost systemd[1]: libpod-86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479.scope: Consumed 2.975s CPU time. 
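[Editor's note] The podman records in this stretch (container died / container cleanup events) share a fixed shape: wall-clock timestamp, monotonic offset (m=+...), event type, the 64-hex container ID, and a parenthesized label dump that opens with image= and name=. A minimal parsing sketch under that assumption — parse_podman_event is a hypothetical helper for reading this journal, not part of podman — one event per line:

```python
import re

# Shape of the podman journal events above, e.g.:
#   2025-12-05 09:12:27.721707086 +0000 UTC m=+0.082249299 container died
#   <64-hex id> (image=registry.redhat.io/...:17.1, name=nova_virtqemud, ...)
EVENT_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
    r"m=\+(?P<mono>[0-9.]+) container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
)

def parse_podman_event(line: str):
    """Return the fields of one podman container event line, or None."""
    m = EVENT_RE.search(line)
    return m.groupdict() if m else None

sample = ("2025-12-05 09:12:27.721707086 +0000 UTC m=+0.082249299 container died "
          "86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479 "
          "(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud)")
print(parse_podman_event(sample))
```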
Dec 5 04:12:27 localhost podman[109144]: 2025-12-05 09:12:27.721707086 +0000 UTC m=+0.082249299 container died 86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtqemud, release=1761123044, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 5 04:12:27 localhost podman[109144]: 2025-12-05 09:12:27.764326326 +0000 UTC m=+0.124868499 container cleanup 86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, release=1761123044, config_id=tripleo_step3, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtqemud, tcib_managed=true, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:12:27 localhost podman[109144]: nova_virtqemud Dec 5 04:12:27 localhost podman[109158]: 2025-12-05 09:12:27.813227857 +0000 UTC m=+0.078452863 container cleanup 86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtqemud, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3) Dec 5 04:12:28 localhost systemd[1]: tmp-crun.TYQSPg.mount: Deactivated successfully. Dec 5 04:12:28 localhost systemd[1]: var-lib-containers-storage-overlay-7f73bcae01b7702200338ad4342b9b9ed0cf919b4d312d9f1db008799e357a3f-merged.mount: Deactivated successfully. Dec 5 04:12:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479-userdata-shm.mount: Deactivated successfully. 
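[Editor's note] Each config_data blob repeated in these records lists the container's bind mounts as 'src:dst[:options]' strings in podman's volume syntax (e.g. '/run/libvirt:/run/libvirt:shared,z'). A small sketch that splits such specs into their three parts, assuming no colons inside the paths themselves, as holds for every entry above — parse_volume is an illustrative helper:

```python
def parse_volume(spec: str):
    """Split a 'src:dst[:options]' bind-mount spec from config_data."""
    src, dst, *rest = spec.split(":")
    opts = rest[0].split(",") if rest else []
    return src, dst, opts

# Entries taken verbatim from the config_data above.
for spec in ("/var/lib/nova:/var/lib/nova:shared",
             "/run/libvirt:/run/libvirt:shared,z",
             "/etc/hosts:/etc/hosts:ro",
             "/dev:/dev"):
    print(parse_volume(spec))
```

The comma-separated tail maps to SELinux relabeling (z) and mount-propagation (shared) flags; entries with no tail, like '/dev:/dev', mount with podman's defaults.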
Dec 5 04:12:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29815 DF PROTO=TCP SPT=47336 DPT=9100 SEQ=3029733224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC39AA450000000001030307) Dec 5 04:12:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47833 DF PROTO=TCP SPT=54208 DPT=9105 SEQ=2439594040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC39BCCC0000000001030307) Dec 5 04:12:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44400 DF PROTO=TCP SPT=60188 DPT=9882 SEQ=4291760893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC39C0450000000001030307) Dec 5 04:12:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:12:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:12:38 localhost systemd[1]: tmp-crun.KzXl77.mount: Deactivated successfully. Dec 5 04:12:38 localhost podman[109177]: 2025-12-05 09:12:38.970938417 +0000 UTC m=+0.103144286 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, distribution-scope=public, container_name=ovn_controller, io.openshift.expose-services=, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 5 04:12:38 localhost podman[109176]: 2025-12-05 09:12:38.996491996 +0000 UTC m=+0.135362268 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true) Dec 5 04:12:39 localhost podman[109176]: 2025-12-05 09:12:39.009973488 +0000 UTC m=+0.148843750 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, vcs-type=git) Dec 5 04:12:39 localhost podman[109176]: unhealthy Dec 5 04:12:39 localhost podman[109177]: 2025-12-05 09:12:39.018613531 +0000 UTC m=+0.150819370 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 5 04:12:39 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:12:39 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. Dec 5 04:12:39 localhost podman[109177]: unhealthy Dec 5 04:12:39 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:12:39 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. 
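[Editor's note] The failure pattern directly above repeats throughout this section: systemd starts a transient "/usr/bin/podman healthcheck run <container id>" unit, podman prints "unhealthy" and exits non-zero, and the unit fails with status=1/FAILURE. A sketch of the same invocation from Python — run_healthcheck is a hypothetical wrapper; the podman command itself is the one the units above execute:

```python
import subprocess

def run_healthcheck(container_id: str) -> bool:
    """Invoke the container's own healthcheck the same way the transient
    systemd units above do; podman exits non-zero when the check reports
    unhealthy, which systemd records as status=1/FAILURE."""
    result = subprocess.run(
        ["/usr/bin/podman", "healthcheck", "run", container_id],
        capture_output=True, text=True,
    )
    # On failure podman emits the single word "unhealthy", as in the log.
    return result.returncode == 0
```

The check that actually runs inside the container is the one declared in config_data ('/openstack/healthcheck' for ovn_metadata_agent, '/openstack/healthcheck 6642' for ovn_controller).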
Dec 5 04:12:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39350 DF PROTO=TCP SPT=52004 DPT=9102 SEQ=3359391753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC39D7960000000001030307) Dec 5 04:12:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39351 DF PROTO=TCP SPT=52004 DPT=9102 SEQ=3359391753 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC39DB850000000001030307) Dec 5 04:12:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9163 DF PROTO=TCP SPT=37402 DPT=9100 SEQ=1268909157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC39E3760000000001030307) Dec 5 04:12:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9165 DF PROTO=TCP SPT=37402 DPT=9100 SEQ=1268909157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC39EF850000000001030307) Dec 5 04:12:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26897 DF PROTO=TCP SPT=55164 DPT=9101 SEQ=1288775835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC39FB850000000001030307) Dec 5 04:12:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8505 DF PROTO=TCP SPT=40598 DPT=9882 SEQ=3021174014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3A08450000000001030307) Dec 5 04:12:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26899 DF PROTO=TCP SPT=55164 DPT=9101 SEQ=1288775835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3A13450000000001030307) Dec 5 04:12:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9167 DF PROTO=TCP SPT=37402 DPT=9100 SEQ=1268909157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3A20450000000001030307) Dec 5 04:13:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53246 DF PROTO=TCP SPT=47404 DPT=9105 SEQ=267164676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3A31C50000000001030307) Dec 5 04:13:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=991 DF PROTO=TCP SPT=57246 DPT=9882 SEQ=1182885507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3A36450000000001030307) Dec 5 04:13:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. 
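[Editor's note] The kernel DROPPING lines are netfilter log-prefix output: every field is a KEY=VALUE token, and the repeated traffic here is TCP SYNs from 192.168.122.10 to 192.168.122.106 on ports 9100, 9101, 9102, 9105, and 9882 — ports commonly used by Prometheus-style exporters, though the log itself does not say what listens there. A sketch that tokenizes one record, assuming a single DROPPING record per line:

```python
import re

# Every field in a netfilter log-prefix record is a KEY=VALUE token;
# bare flags like DF and SYN carry no '=' and are skipped here.
KV_RE = re.compile(r"\b([A-Z]+)=(\S*)")

def parse_drop(line: str) -> dict:
    """Tokenize one kernel 'DROPPING:' record into a field dict."""
    start = line.find("DROPPING:")
    return dict(KV_RE.findall(line[start:])) if start >= 0 else {}

line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e "
        "MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 TTL=62 "
        "ID=3486 DF PROTO=TCP SPT=56902 DPT=9101 WINDOW=32640 SYN")
fields = parse_drop(line)
print(fields["SRC"], "->", fields["DST"], fields["PROTO"], "dport", fields["DPT"])
```

The repeating SEQ values with climbing IP IDs (e.g. ID=13141/13143/13145 for the same SPT/SEQ pair) are TCP retransmissions of the same blocked connection attempt.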
Dec 5 04:13:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:13:09 localhost podman[109294]: 2025-12-05 09:13:09.203835661 +0000 UTC m=+0.086169159 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 5 04:13:09 localhost podman[109294]: 2025-12-05 09:13:09.218153827 +0000 UTC m=+0.100487295 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 5 04:13:09 localhost podman[109294]: unhealthy Dec 5 04:13:09 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:13:09 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'. 
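[Editor's note] Both ovn_controller and ovn_metadata_agent report health_status=unhealthy on every probe in this section, at roughly 30-second intervals (04:12:38, 04:13:09, 04:13:39). A sketch that aggregates those verdicts per container from journal text, assuming the "(image=..., name=..." label ordering seen in these records — tally_health is an illustrative helper:

```python
import re
from collections import Counter

# In this log the label dump opens "(image=..., name=<container>, ..." and
# health_status=<verdict> appears among the labels.
NAME_RE = re.compile(r"\(image=[^,]+, name=([^,)]+)")
STATUS_RE = re.compile(r"health_status=(\w+)")

def tally_health(journal_lines):
    """Count (container, verdict) pairs across podman health_status events."""
    tally = Counter()
    for line in journal_lines:
        if "container health_status" not in line:
            continue
        name, status = NAME_RE.search(line), STATUS_RE.search(line)
        if name and status:
            tally[(name.group(1), status.group(1))] += 1
    return tally
```

Fed this section, the tally would show only ('ovn_controller', 'unhealthy') and ('ovn_metadata_agent', 'unhealthy') entries, matching the repeated status=1/FAILURE results that systemd records for the two healthcheck units.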
Dec 5 04:13:09 localhost podman[109295]: 2025-12-05 09:13:09.267818372 +0000 UTC m=+0.144162488 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible) Dec 5 04:13:09 localhost podman[109295]: 2025-12-05 09:13:09.306338586 +0000 UTC m=+0.182682762 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, distribution-scope=public, container_name=ovn_controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, vcs-type=git) Dec 5 04:13:09 localhost podman[109295]: unhealthy Dec 5 04:13:09 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:13:09 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'. Dec 5 04:13:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29985 DF PROTO=TCP SPT=35830 DPT=9102 SEQ=2462264597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3A4CC60000000001030307) Dec 5 04:13:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29986 DF PROTO=TCP SPT=35830 DPT=9102 SEQ=2462264597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3A50C50000000001030307) Dec 5 04:13:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13141 DF PROTO=TCP SPT=39432 DPT=9100 SEQ=2375136927 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3A58A60000000001030307) Dec 5 04:13:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13143 DF PROTO=TCP SPT=39432 DPT=9100 SEQ=2375136927 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3A64C50000000001030307) Dec 5 04:13:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60057 DF PROTO=TCP SPT=56038 DPT=9101 SEQ=3661172183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3A70850000000001030307) Dec 5 04:13:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3489 DF PROTO=TCP SPT=56902 DPT=9101 SEQ=909076727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3A7C460000000001030307) Dec 5 04:13:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29989 DF PROTO=TCP SPT=35830 DPT=9102 SEQ=2462264597 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080AEC3A88450000000001030307) Dec 5 04:13:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13145 DF PROTO=TCP SPT=39432 DPT=9100 SEQ=2375136927 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3A94450000000001030307) Dec 5 04:13:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30920 DF PROTO=TCP SPT=60244 DPT=9105 SEQ=760233795 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3AA6C60000000001030307) Dec 5 04:13:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:13:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd. Dec 5 04:13:39 localhost podman[109333]: 2025-12-05 09:13:39.447692322 +0000 UTC m=+0.080035142 container health_status 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12) Dec 5 04:13:39 localhost podman[109333]: 2025-12-05 09:13:39.464565466 +0000 UTC m=+0.096908326 container exec_died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 5 04:13:39 localhost podman[109334]: 2025-12-05 09:13:39.506419893 +0000 UTC m=+0.134484892 container health_status f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 5 04:13:39 localhost podman[109334]: 2025-12-05 09:13:39.520152601 +0000 UTC m=+0.148217550 container exec_died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, version=17.1.12, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 5 04:13:39 localhost podman[109333]: unhealthy
Dec 5 04:13:39 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:13:39 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed with result 'exit-code'.
Dec 5 04:13:39 localhost podman[109334]: unhealthy
Dec 5 04:13:39 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:13:39 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed with result 'exit-code'.
Dec 5 04:13:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8652 DF PROTO=TCP SPT=44180 DPT=9102 SEQ=2254120924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3AC1F70000000001030307)
Dec 5 04:13:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8653 DF PROTO=TCP SPT=44180 DPT=9102 SEQ=2254120924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3AC6050000000001030307)
Dec 5 04:13:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30921 DF PROTO=TCP SPT=60244 DPT=9105 SEQ=760233795 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3AC6460000000001030307)
Dec 5 04:13:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64963 DF PROTO=TCP SPT=33274 DPT=9100 SEQ=281082518 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3ACDD70000000001030307)
Dec 5 04:13:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64965 DF PROTO=TCP SPT=33274 DPT=9100 SEQ=281082518 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3AD9C50000000001030307)
Dec 5 04:13:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34390 DF PROTO=TCP SPT=42158 DPT=9101 SEQ=3703634137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3AE5C50000000001030307)
Dec 5 04:13:51 localhost systemd[1]: tripleo_nova_virtqemud.service: State 'stop-sigterm' timed out. Killing.
Dec 5 04:13:51 localhost systemd[1]: tripleo_nova_virtqemud.service: Killing process 61290 (conmon) with signal SIGKILL.
Dec 5 04:13:51 localhost systemd[1]: tripleo_nova_virtqemud.service: Main process exited, code=killed, status=9/KILL
Dec 5 04:13:51 localhost systemd[1]: libpod-conmon-86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479.scope: Deactivated successfully.
Dec 5 04:13:51 localhost podman[109445]: error opening file `/run/crun/86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479/status`: No such file or directory
Dec 5 04:13:51 localhost podman[109434]: 2025-12-05 09:13:51.953651492 +0000 UTC m=+0.082985261 container cleanup 86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, container_name=nova_virtqemud, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt)
Dec 5 04:13:51 localhost podman[109434]: nova_virtqemud
Dec 5 04:13:51 localhost systemd[1]: tripleo_nova_virtqemud.service: Failed with result 'timeout'.
Dec 5 04:13:51 localhost systemd[1]: Stopped nova_virtqemud container.
Dec 5 04:13:52 localhost python3.9[109544]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:13:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26902 DF PROTO=TCP SPT=55164 DPT=9101 SEQ=1288775835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3AF2460000000001030307)
Dec 5 04:13:53 localhost systemd[1]: Reloading.
Dec 5 04:13:53 localhost systemd-sysv-generator[109581]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:13:53 localhost systemd-rc-local-generator[109578]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:13:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:13:54 localhost python3.9[109685]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:13:54 localhost systemd[1]: Reloading.
Dec 5 04:13:55 localhost systemd-rc-local-generator[109715]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:13:55 localhost systemd-sysv-generator[109718]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:13:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:13:55 localhost systemd[1]: Stopping nova_virtsecretd container...
Dec 5 04:13:55 localhost systemd[1]: tmp-crun.CuSFcG.mount: Deactivated successfully.
Dec 5 04:13:55 localhost systemd[1]: libpod-37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634.scope: Deactivated successfully.
Dec 5 04:13:55 localhost podman[109726]: 2025-12-05 09:13:55.304348089 +0000 UTC m=+0.095022969 container died 37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, container_name=nova_virtsecretd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt)
Dec 5 04:13:55 localhost systemd[1]: tmp-crun.Tq8wCg.mount: Deactivated successfully.
Dec 5 04:13:55 localhost podman[109726]: 2025-12-05 09:13:55.345905186 +0000 UTC m=+0.136580076 container cleanup 37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=nova_virtsecretd, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 04:13:55 localhost podman[109726]: nova_virtsecretd
Dec 5 04:13:55 localhost podman[109742]: 2025-12-05 09:13:55.369180475 +0000 UTC m=+0.055023618 container cleanup 37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=nova_virtsecretd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.4)
Dec 5 04:13:55 localhost systemd[1]: libpod-conmon-37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634.scope: Deactivated successfully.
Dec 5 04:13:55 localhost podman[109770]: error opening file `/run/crun/37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634/status`: No such file or directory
Dec 5 04:13:55 localhost podman[109757]: 2025-12-05 09:13:55.450888828 +0000 UTC m=+0.052894604 container cleanup 37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, build-date=2025-11-19T00:35:22Z, container_name=nova_virtsecretd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team)
Dec 5 04:13:55 localhost podman[109757]: nova_virtsecretd
Dec 5 04:13:55 localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully.
Dec 5 04:13:55 localhost systemd[1]: Stopped nova_virtsecretd container.
Dec 5 04:13:56 localhost python3.9[109863]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:13:56 localhost systemd[1]: Reloading.
Dec 5 04:13:56 localhost systemd-rc-local-generator[109892]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:13:56 localhost systemd-sysv-generator[109895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:13:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:13:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34392 DF PROTO=TCP SPT=42158 DPT=9101 SEQ=3703634137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3AFD850000000001030307)
Dec 5 04:13:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37f7d44874415e2b058fd625899f4b1eb1bf3376f1edf9c8af3c8ce6a0086634-userdata-shm.mount: Deactivated successfully.
Dec 5 04:13:56 localhost systemd[1]: var-lib-containers-storage-overlay-028c080cd241d717276dc7c44b639a13095ff3dca850aefb055c4709f3ccdf0b-merged.mount: Deactivated successfully.
Dec 5 04:13:56 localhost systemd[1]: Stopping nova_virtstoraged container...
Dec 5 04:13:56 localhost systemd[1]: libpod-77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8.scope: Deactivated successfully.
Dec 5 04:13:56 localhost podman[109903]: 2025-12-05 09:13:56.64017156 +0000 UTC m=+0.073270065 container died 77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, config_id=tripleo_step3, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtstoraged, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 04:13:56 localhost podman[109903]: 2025-12-05 09:13:56.669397651 +0000 UTC m=+0.102496146 container cleanup 77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, container_name=nova_virtstoraged, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 5 04:13:56 localhost podman[109903]: nova_virtstoraged
Dec 5 04:13:56 localhost podman[109916]: 2025-12-05 09:13:56.710127693 +0000 UTC m=+0.059496205 container cleanup 77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, build-date=2025-11-19T00:35:22Z, container_name=nova_virtstoraged, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Dec 5 04:13:56 localhost systemd[1]: libpod-conmon-77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8.scope: Deactivated successfully.
Dec 5 04:13:56 localhost podman[109945]: error opening file `/run/crun/77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8/status`: No such file or directory
Dec 5 04:13:56 localhost podman[109932]: 2025-12-05 09:13:56.786196102 +0000 UTC m=+0.046826108 container cleanup 77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f3fe7c52055154c7f97b988e301af0d7'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtstoraged, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 5 04:13:56 localhost podman[109932]: nova_virtstoraged
Dec 5 04:13:56 localhost systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully.
Dec 5 04:13:56 localhost systemd[1]: Stopped nova_virtstoraged container.
Dec 5 04:13:57 localhost systemd[1]: var-lib-containers-storage-overlay-cfad65d42b0cbb79e9d33fc40f5cc7671e93233d141fb70d7048f15080a51bd8-merged.mount: Deactivated successfully.
Dec 5 04:13:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8-userdata-shm.mount: Deactivated successfully.
Dec 5 04:13:57 localhost python3.9[110038]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:13:57 localhost systemd[1]: Reloading.
Dec 5 04:13:57 localhost systemd-rc-local-generator[110066]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:13:57 localhost systemd-sysv-generator[110070]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:13:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:13:57 localhost systemd[1]: Stopping ovn_controller container...
Dec 5 04:13:57 localhost systemd[1]: tmp-crun.L8Oq7S.mount: Deactivated successfully.
Dec 5 04:13:57 localhost systemd[1]: libpod-f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.scope: Deactivated successfully.
Dec 5 04:13:57 localhost systemd[1]: libpod-f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.scope: Consumed 2.754s CPU time.
Dec 5 04:13:57 localhost podman[110079]: 2025-12-05 09:13:57.96972143 +0000 UTC m=+0.076644539 container died f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller)
Dec 5 04:13:57 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.timer: Deactivated successfully.
Dec 5 04:13:57 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.
Dec 5 04:13:57 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed to open /run/systemd/transient/f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: No such file or directory
Dec 5 04:13:57 localhost systemd[1]: tmp-crun.8NkjFm.mount: Deactivated successfully.
Dec 5 04:13:58 localhost podman[110079]: 2025-12-05 09:13:58.011957407 +0000 UTC m=+0.118880516 container cleanup f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, version=17.1.12, container_name=ovn_controller, release=1761123044)
Dec 5 04:13:58 localhost podman[110079]: ovn_controller
Dec 5 04:13:58 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.timer: Failed to open /run/systemd/transient/f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.timer: No such file or directory
Dec 5 04:13:58 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed to open /run/systemd/transient/f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: No such file or directory
Dec 5 04:13:58 localhost podman[110091]: 2025-12-05 09:13:58.054102632 +0000 UTC m=+0.073075449 container cleanup f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 5 04:13:58 localhost systemd[1]: libpod-conmon-f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.scope: Deactivated successfully.
Dec 5 04:13:58 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.timer: Failed to open /run/systemd/transient/f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.timer: No such file or directory
Dec 5 04:13:58 localhost systemd[1]: f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: Failed to open /run/systemd/transient/f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd.service: No such file or directory
Dec 5 04:13:58 localhost podman[110106]: 2025-12-05 09:13:58.14684771 +0000 UTC m=+0.069450839 container cleanup f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 04:13:58 localhost podman[110106]: ovn_controller
Dec 5 04:13:58 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully.
Dec 5 04:13:58 localhost systemd[1]: Stopped ovn_controller container.
Dec 5 04:13:58 localhost systemd[1]: var-lib-containers-storage-overlay-e4cc9caec78c737aaef1b1328c59bbfb368860f96b8588be11088d5655a4f2a7-merged.mount: Deactivated successfully.
Dec 5 04:13:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd-userdata-shm.mount: Deactivated successfully.
Dec 5 04:13:58 localhost python3.9[110208]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:13:58 localhost systemd[1]: Reloading.
Dec 5 04:13:59 localhost systemd-rc-local-generator[110234]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:13:59 localhost systemd-sysv-generator[110238]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:13:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:13:59 localhost systemd[1]: Stopping ovn_metadata_agent container...
Dec 5 04:13:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64967 DF PROTO=TCP SPT=33274 DPT=9100 SEQ=281082518 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3B0A450000000001030307)
Dec 5 04:14:00 localhost systemd[1]: libpod-0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.scope: Deactivated successfully.
Dec 5 04:14:00 localhost systemd[1]: libpod-0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.scope: Consumed 12.010s CPU time.
Dec 5 04:14:00 localhost podman[110249]: 2025-12-05 09:14:00.380653151 +0000 UTC m=+1.146905821 container stop 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z) Dec 5 04:14:00 localhost podman[110249]: 2025-12-05 09:14:00.412139382 +0000 UTC m=+1.178392042 container died 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 5 04:14:00 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.timer: Deactivated successfully. Dec 5 04:14:00 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e. Dec 5 04:14:00 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed to open /run/systemd/transient/0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: No such file or directory Dec 5 04:14:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e-userdata-shm.mount: Deactivated successfully. Dec 5 04:14:00 localhost systemd[1]: var-lib-containers-storage-overlay-a92a44520f69c9e5a7bcda678bb32e8a18947b10b545d0d9d31f991c2dc6484c-merged.mount: Deactivated successfully. 
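[Editor's note] The 0a8784d7....timer and 0a8784d7....service units named after the container ID above are the transient systemd units podman creates to drive the container's healthcheck (the 'healthcheck': {'test': '/openstack/healthcheck'} entry in config_data); the "Failed to open /run/systemd/transient/..." messages only mean those transient units were already cleaned up when systemd went looking for them. A minimal sketch for running the same check by hand while the container is up, using the `podman healthcheck run` command the timer itself invokes (container name taken from the log):

    # Run the configured healthcheck once; exit code 0 means healthy.
    # "ovn_metadata_agent" is the container_name from the entries above.
    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    ).returncode
    print("healthy" if rc == 0 else f"unhealthy or gone (rc={rc})")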
Dec 5 04:14:00 localhost podman[110249]: 2025-12-05 09:14:00.53606057 +0000 UTC m=+1.302313220 container cleanup 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 5 04:14:00 localhost podman[110249]: ovn_metadata_agent Dec 5 04:14:00 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.timer: Failed to open /run/systemd/transient/0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.timer: No such file or directory Dec 5 04:14:00 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed to open /run/systemd/transient/0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: No such file or 
directory Dec 5 04:14:00 localhost podman[110261]: 2025-12-05 09:14:00.559910206 +0000 UTC m=+0.167714674 container cleanup 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true) Dec 5 04:14:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53630 DF PROTO=TCP SPT=34346 DPT=9105 SEQ=860727740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3B1C050000000001030307) Dec 5 04:14:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15048 
DF PROTO=TCP SPT=46058 DPT=9882 SEQ=3730819750 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3B20450000000001030307) Dec 5 04:14:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39570 DF PROTO=TCP SPT=40312 DPT=9102 SEQ=1786555027 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3B37260000000001030307) Dec 5 04:14:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39571 DF PROTO=TCP SPT=40312 DPT=9102 SEQ=1786555027 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3B3B460000000001030307) Dec 5 04:14:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51956 DF PROTO=TCP SPT=50526 DPT=9100 SEQ=1449374346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3B43060000000001030307) Dec 5 04:14:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51958 DF PROTO=TCP SPT=50526 DPT=9100 SEQ=1449374346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3B4F060000000001030307) Dec 5 04:14:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8024 DF PROTO=TCP SPT=42396 DPT=9101 SEQ=3046898554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3B5B050000000001030307) Dec 5 04:14:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60062 DF PROTO=TCP SPT=56038 DPT=9101 SEQ=3661172183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3B66450000000001030307) Dec 5 04:14:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8026 DF PROTO=TCP SPT=42396 DPT=9101 SEQ=3046898554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3B72C50000000001030307) Dec 5 04:14:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51960 DF PROTO=TCP SPT=50526 DPT=9100 SEQ=1449374346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3B7E450000000001030307) Dec 5 04:14:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8237 DF PROTO=TCP SPT=60386 DPT=9105 SEQ=3046156318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3B91450000000001030307) Dec 5 04:14:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65059 DF PROTO=TCP SPT=47064 DPT=9102 SEQ=3833722057 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3BAC570000000001030307) Dec 5 04:14:42 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65060 DF PROTO=TCP SPT=47064 DPT=9102 SEQ=3833722057 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3BB0450000000001030307) Dec 5 04:14:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8238 DF PROTO=TCP SPT=60386 DPT=9105 SEQ=3046156318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3BB2450000000001030307) Dec 5 04:14:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45867 DF PROTO=TCP SPT=38446 DPT=9100 SEQ=1321137568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3BB8370000000001030307) Dec 5 04:14:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45869 DF PROTO=TCP SPT=38446 DPT=9100 SEQ=1321137568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3BC4450000000001030307) Dec 5 04:14:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32224 DF PROTO=TCP SPT=51994 DPT=9101 SEQ=474578986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3BD0450000000001030307) Dec 5 04:14:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34395 DF PROTO=TCP SPT=42158 DPT=9101 SEQ=3703634137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3BDC460000000001030307) Dec 5 04:14:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32226 DF PROTO=TCP SPT=51994 DPT=9101 SEQ=474578986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3BE8060000000001030307) Dec 5 04:14:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45871 DF PROTO=TCP SPT=38446 DPT=9100 SEQ=1321137568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3BF4450000000001030307) Dec 5 04:15:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31970 DF PROTO=TCP SPT=42428 DPT=9105 SEQ=1172226839 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C06850000000001030307) Dec 5 04:15:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4696 DF PROTO=TCP SPT=34894 DPT=9882 SEQ=3509492019 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C0A450000000001030307) Dec 5 04:15:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64239 DF PROTO=TCP SPT=36012 DPT=9102 SEQ=4179555823 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C21860000000001030307) Dec 5 04:15:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64240 DF PROTO=TCP SPT=36012 DPT=9102 SEQ=4179555823 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C25850000000001030307) Dec 5 04:15:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60006 DF PROTO=TCP SPT=50592 DPT=9100 SEQ=4032276359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C2D670000000001030307) Dec 5 04:15:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60008 DF PROTO=TCP SPT=50592 DPT=9100 SEQ=4032276359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C39860000000001030307) Dec 5 04:15:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5780 DF PROTO=TCP SPT=35444 DPT=9101 SEQ=3199527952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C45450000000001030307) Dec 5 04:15:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1018 DF PROTO=TCP SPT=48866 DPT=9882 SEQ=1829641521 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C52450000000001030307) Dec 5 04:15:24 localhost systemd[1]: tripleo_ovn_metadata_agent.service: State 'stop-sigterm' timed out. Killing. Dec 5 04:15:24 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Killing process 68783 (conmon) with signal SIGKILL. Dec 5 04:15:24 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Main process exited, code=killed, status=9/KILL Dec 5 04:15:24 localhost systemd[1]: libpod-conmon-0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.scope: Deactivated successfully. 
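[Editor's note] The sequence above is systemd's normal stop escalation: the stop was requested at 04:13:59, the unit's stop timeout elapsed ("State 'stop-sigterm' timed out. Killing."), and at 04:15:24 systemd sent SIGKILL to the conmon main process, leaving the service failed with result 'timeout'. The roughly 85-second gap suggests a stop timeout in the default range, but the configured value is not in the log; a minimal sketch for querying it (unit name from the log, TimeoutStopUSec is the standard systemd property):

    # Show the effective stop timeout systemd applied to this unit.
    import subprocess

    out = subprocess.run(
        ["systemctl", "show", "tripleo_ovn_metadata_agent.service",
         "--property=TimeoutStopUSec"],
        capture_output=True, text=True,
    )
    print(out.stdout.strip())  # e.g. "TimeoutStopUSec=1min 30s"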
Dec 5 04:15:24 localhost podman[110422]: error opening file `/run/crun/0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e/status`: No such file or directory Dec 5 04:15:24 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.timer: Failed to open /run/systemd/transient/0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.timer: No such file or directory Dec 5 04:15:24 localhost systemd[1]: 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: Failed to open /run/systemd/transient/0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e.service: No such file or directory Dec 5 04:15:24 localhost podman[110409]: 2025-12-05 09:15:24.713100785 +0000 UTC m=+0.088128967 container cleanup 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Dec 5 04:15:24 localhost podman[110409]: ovn_metadata_agent Dec 5 04:15:24 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Failed with result 'timeout'. Dec 5 04:15:24 localhost systemd[1]: Stopped ovn_metadata_agent container. Dec 5 04:15:25 localhost python3.9[110516]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:15:25 localhost systemd[1]: Reloading. Dec 5 04:15:25 localhost systemd-sysv-generator[110547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:15:25 localhost systemd-rc-local-generator[110540]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:15:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:15:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5782 DF PROTO=TCP SPT=35444 DPT=9101 SEQ=3199527952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C5D050000000001030307) Dec 5 04:15:27 localhost python3.9[110645]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:28 localhost python3.9[110737]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:28 localhost python3.9[110829]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:29 localhost python3.9[110921]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:29 localhost python3.9[111013]: 
ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60010 DF PROTO=TCP SPT=50592 DPT=9100 SEQ=4032276359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C6A450000000001030307) Dec 5 04:15:30 localhost python3.9[111105]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:30 localhost python3.9[111197]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:31 localhost python3.9[111289]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:32 localhost python3.9[111381]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:32 localhost python3.9[111473]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:34 localhost python3.9[111565]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57842 DF PROTO=TCP SPT=35714 DPT=9105 SEQ=2434884457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C7B860000000001030307) Dec 5 04:15:34 localhost python3.9[111657]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=361 DF PROTO=TCP SPT=32796 DPT=9882 SEQ=3448434767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C80450000000001030307) Dec 5 04:15:35 localhost python3.9[111749]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:36 localhost python3.9[111841]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:36 localhost python3.9[111933]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:37 localhost python3.9[112025]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:37 localhost python3.9[112117]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None 
seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:38 localhost python3.9[112209]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:38 localhost python3.9[112301]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:39 localhost python3.9[112393]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:40 localhost python3.9[112485]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:40 localhost python3.9[112577]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24179 DF PROTO=TCP SPT=34228 DPT=9102 SEQ=4042608108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C96B90000000001030307) Dec 5 04:15:41 localhost python3.9[112669]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:41 localhost python3.9[112761]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24180 DF PROTO=TCP SPT=34228 DPT=9102 SEQ=4042608108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3C9AC50000000001030307) Dec 5 04:15:42 localhost python3.9[112853]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:43 localhost python3.9[112945]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:43 localhost python3.9[113037]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:44 localhost python3.9[113129]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58020 DF PROTO=TCP SPT=58456 DPT=9100 SEQ=3191050939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3CA2970000000001030307) Dec 5 04:15:44 localhost python3.9[113221]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:45 localhost python3.9[113313]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:45 localhost python3.9[113405]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:46 localhost python3.9[113497]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:46 localhost python3.9[113589]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58022 DF PROTO=TCP SPT=58456 DPT=9100 SEQ=3191050939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3CAE850000000001030307) Dec 5 04:15:47 localhost python3.9[113681]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:48 localhost python3.9[113773]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:48 localhost python3.9[113865]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:49 localhost python3.9[113957]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:49 localhost python3.9[114049]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22989 DF PROTO=TCP SPT=47878 DPT=9101 SEQ=2447554058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3CBA850000000001030307) Dec 5 04:15:50 localhost python3.9[114141]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:51 localhost python3.9[114233]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:51 localhost python3.9[114325]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:52 localhost python3.9[114417]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:15:53 localhost python3.9[114509]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:15:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32229 DF PROTO=TCP SPT=51994 DPT=9101 SEQ=474578986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3CC6450000000001030307) Dec 5 04:15:54 localhost python3.9[114601]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 5 04:15:54 localhost python3.9[114693]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:15:55 localhost systemd[1]: Reloading. Dec 5 04:15:55 localhost systemd-rc-local-generator[114748]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:15:55 localhost systemd-sysv-generator[114752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:15:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:15:55 localhost python3.9[114882]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:15:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24183 DF PROTO=TCP SPT=34228 DPT=9102 SEQ=4042608108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3CD2450000000001030307) Dec 5 04:15:56 localhost python3.9[114990]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:15:57 localhost python3.9[115083]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:15:57 localhost python3.9[115176]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:15:58 localhost python3.9[115269]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:15:58 localhost python3.9[115362]: ansible-ansible.legacy.command Invoked with 
cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:15:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58024 DF PROTO=TCP SPT=58456 DPT=9100 SEQ=3191050939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3CDE450000000001030307) Dec 5 04:15:59 localhost python3.9[115455]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:00 localhost python3.9[115548]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:01 localhost python3.9[115641]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:01 localhost python3.9[115734]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:02 localhost python3.9[115827]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:03 localhost python3.9[115921]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8062 DF PROTO=TCP SPT=60330 DPT=9105 SEQ=3187314727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3CF0C50000000001030307) Dec 5 04:16:04 localhost python3.9[116014]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:05 localhost python3.9[116107]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None 
chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:05 localhost python3.9[116200]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:06 localhost python3.9[116293]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:06 localhost python3.9[116386]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:07 localhost python3.9[116479]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:08 localhost python3.9[116572]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:08 localhost python3.9[116665]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:09 localhost python3.9[116758]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:16:09 localhost systemd[1]: session-37.scope: Deactivated successfully. Dec 5 04:16:09 localhost systemd[1]: session-37.scope: Consumed 50.340s CPU time. Dec 5 04:16:09 localhost systemd-logind[760]: Session 37 logged out. Waiting for processes to exit. Dec 5 04:16:09 localhost systemd-logind[760]: Removed session 37. 
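[Editor's note] The run of ansible-ansible.legacy.command entries above (04:15:55 through 04:16:09) calls `/usr/bin/systemctl reset-failed` on each tripleo_* unit after its unit file was deleted; this clears any lingering 'failed' state (tripleo_ovn_metadata_agent.service had just failed with result 'timeout') so the removed units no longer appear in `systemctl --failed`. A minimal sketch of the same loop, with the unit list abbreviated from the log:

    # Clear residual 'failed' state for removed units, one at a time,
    # as the Ansible tasks above do. List abbreviated from the log.
    import subprocess

    UNITS = [
        "tripleo_ovn_controller.service",
        "tripleo_ovn_metadata_agent.service",
        "tripleo_rsyslog.service",
    ]
    for unit in UNITS:
        # Harmless for units that are not in the failed state.
        subprocess.run(["/usr/bin/systemctl", "reset-failed", unit], check=False)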
Dec 5 04:16:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21920 DF PROTO=TCP SPT=34822 DPT=9102 SEQ=2474961825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D0BE70000000001030307)
Dec 5 04:16:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21921 DF PROTO=TCP SPT=34822 DPT=9102 SEQ=2474961825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D10050000000001030307)
Dec 5 04:16:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8063 DF PROTO=TCP SPT=60330 DPT=9105 SEQ=3187314727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D10460000000001030307)
Dec 5 04:16:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29214 DF PROTO=TCP SPT=53470 DPT=9100 SEQ=1140172351 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D17C70000000001030307)
Dec 5 04:16:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29216 DF PROTO=TCP SPT=53470 DPT=9100 SEQ=1140172351 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D23C50000000001030307)
Dec 5 04:16:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14839 DF PROTO=TCP SPT=56802 DPT=9101 SEQ=1082188997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D2FC50000000001030307)
Dec 5 04:16:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5785 DF PROTO=TCP SPT=35444 DPT=9101 SEQ=3199527952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D3C460000000001030307)
Dec 5 04:16:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14841 DF PROTO=TCP SPT=56802 DPT=9101 SEQ=1082188997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D47860000000001030307)
Dec 5 04:16:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29218 DF PROTO=TCP SPT=53470 DPT=9100 SEQ=1140172351 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D54450000000001030307)
Dec 5 04:16:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15165 DF PROTO=TCP SPT=40860 DPT=9105 SEQ=1924804515 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D66060000000001030307)
Dec 5 04:16:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54296 DF PROTO=TCP SPT=58912 DPT=9882 SEQ=1890036601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D6A450000000001030307)
Dec 5 04:16:35 localhost sshd[116774]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:16:35 localhost systemd-logind[760]: New session 38 of user zuul.
Dec 5 04:16:35 localhost systemd[1]: Started Session 38 of User zuul.
Dec 5 04:16:36 localhost python3.9[116867]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 5 04:16:37 localhost python3.9[116971]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 04:16:38 localhost python3.9[117063]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:16:39 localhost python3.9[117156]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 5 04:16:40 localhost python3.9[117248]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:16:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15654 DF PROTO=TCP SPT=51814 DPT=9102 SEQ=1554374110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D81160000000001030307)
Dec 5 04:16:41 localhost python3.9[117340]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:16:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15655 DF PROTO=TCP SPT=51814 DPT=9102 SEQ=1554374110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D85050000000001030307)
Dec 5 04:16:42 localhost python3.9[117413]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926200.9259145-177-190454570578393/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:16:43 localhost python3.9[117505]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 04:16:44 localhost python3.9[117601]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 5 04:16:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=836 DF PROTO=TCP SPT=43966 DPT=9100 SEQ=1267251547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D8CF70000000001030307)
Dec 5 04:16:44 localhost python3.9[117693]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 5 04:16:45 localhost python3.9[117783]: ansible-ansible.builtin.service_facts Invoked
Dec 5 04:16:45 localhost network[117800]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 5 04:16:45 localhost network[117801]: 'network-scripts' will be removed from distribution in near future.
Dec 5 04:16:45 localhost network[117802]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 5 04:16:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:16:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=838 DF PROTO=TCP SPT=43966 DPT=9100 SEQ=1267251547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3D99050000000001030307)
Dec 5 04:16:48 localhost python3.9[117999]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:16:49 localhost python3.9[118089]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 04:16:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25284 DF PROTO=TCP SPT=56020 DPT=9101 SEQ=2235133026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3DA5050000000001030307)
Dec 5 04:16:50 localhost python3.9[118185]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012# FIXME: perform dnf upgrade for other packages in EDPM ansible#012# here we only ensuring that decontainerized libvirt can start#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:16:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22994 DF PROTO=TCP SPT=47878 DPT=9101 SEQ=2447554058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3DB0460000000001030307)
Dec 5 04:16:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15658 DF PROTO=TCP SPT=51814 DPT=9102 SEQ=1554374110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3DBC450000000001030307)
Dec 5 04:16:57 localhost podman[118318]: 2025-12-05 09:16:57.366299967 +0000 UTC m=+0.087629776 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, com.redhat.component=rhceph-container, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, ceph=True, architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.buildah.version=1.41.4, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7)
Dec 5 04:16:57 localhost podman[118318]: 2025-12-05 09:16:57.476905685 +0000 UTC m=+0.198235494 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.41.4, io.openshift.expose-services=, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=)
Dec 5 04:16:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=840 DF PROTO=TCP SPT=43966 DPT=9100 SEQ=1267251547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3DC8450000000001030307)
Dec 5 04:17:00 localhost systemd[1]: Stopping OpenSSH server daemon...
Dec 5 04:17:00 localhost systemd[1]: sshd.service: Deactivated successfully.
Dec 5 04:17:00 localhost systemd[1]: Stopped OpenSSH server daemon.
Dec 5 04:17:00 localhost systemd[1]: sshd.service: Consumed 1.470s CPU time.
Dec 5 04:17:00 localhost systemd[1]: Stopped target sshd-keygen.target.
Dec 5 04:17:00 localhost systemd[1]: Stopping sshd-keygen.target...
Dec 5 04:17:00 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 5 04:17:00 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 5 04:17:00 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 5 04:17:01 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 5 04:17:01 localhost systemd[1]: Starting OpenSSH server daemon...
Dec 5 04:17:01 localhost sshd[118475]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:17:01 localhost systemd[1]: Started OpenSSH server daemon.
Dec 5 04:17:01 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 5 04:17:01 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 5 04:17:01 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 5 04:17:01 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 5 04:17:01 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 5 04:17:01 localhost systemd[1]: run-r3a87d985a9ab474b84f39a010ab2df08.service: Deactivated successfully.
Dec 5 04:17:01 localhost systemd[1]: run-rb83155fe54354bb69178cbf93ce10b0d.service: Deactivated successfully.
Dec 5 04:17:02 localhost systemd[1]: Stopping OpenSSH server daemon...
Dec 5 04:17:02 localhost systemd[1]: sshd.service: Deactivated successfully.
Dec 5 04:17:02 localhost systemd[1]: Stopped OpenSSH server daemon.
Dec 5 04:17:02 localhost systemd[1]: Stopped target sshd-keygen.target.
Dec 5 04:17:02 localhost systemd[1]: Stopping sshd-keygen.target...
Dec 5 04:17:02 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 5 04:17:02 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 5 04:17:02 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 5 04:17:02 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 5 04:17:02 localhost systemd[1]: Starting OpenSSH server daemon...
Dec 5 04:17:02 localhost sshd[118647]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:17:02 localhost systemd[1]: Started OpenSSH server daemon.
Dec 5 04:17:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56838 DF PROTO=TCP SPT=41040 DPT=9105 SEQ=1545428106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3DDB460000000001030307)
Dec 5 04:17:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45340 DF PROTO=TCP SPT=38790 DPT=9102 SEQ=1780518243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3DF6460000000001030307)
Dec 5 04:17:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45341 DF PROTO=TCP SPT=38790 DPT=9102 SEQ=1780518243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3DFA460000000001030307)
Dec 5 04:17:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15659 DF PROTO=TCP SPT=51814 DPT=9102 SEQ=1554374110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3DFC450000000001030307)
Dec 5 04:17:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13929 DF PROTO=TCP SPT=54404 DPT=9100 SEQ=165272611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E02270000000001030307)
Dec 5 04:17:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13931 DF PROTO=TCP SPT=54404 DPT=9100 SEQ=165272611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E0E450000000001030307)
Dec 5 04:17:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23280 DF PROTO=TCP SPT=55090 DPT=9101 SEQ=883004918 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E1A050000000001030307)
Dec 5 04:17:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14844 DF PROTO=TCP SPT=56802 DPT=9101 SEQ=1082188997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E26450000000001030307)
Dec 5 04:17:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23282 DF PROTO=TCP SPT=55090 DPT=9101 SEQ=883004918 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E31C60000000001030307)
Dec 5 04:17:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13933 DF PROTO=TCP SPT=54404 DPT=9100 SEQ=165272611 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E3E470000000001030307)
Dec 5 04:17:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48372 DF PROTO=TCP SPT=33216 DPT=9105 SEQ=4002753174 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E50450000000001030307)
Dec 5 04:17:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30619 DF PROTO=TCP SPT=37816 DPT=9882 SEQ=1653828828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E54450000000001030307)
Dec 5 04:17:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10289 DF PROTO=TCP SPT=56576 DPT=9102 SEQ=2759318914 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E6B770000000001030307)
Dec 5 04:17:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10290 DF PROTO=TCP SPT=56576 DPT=9102 SEQ=2759318914 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E6F850000000001030307)
Dec 5 04:17:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19965 DF PROTO=TCP SPT=41812 DPT=9100 SEQ=1436277241 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E77570000000001030307)
Dec 5 04:17:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19967 DF PROTO=TCP SPT=41812 DPT=9100 SEQ=1436277241 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E83460000000001030307)
Dec 5 04:17:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31211 DF PROTO=TCP SPT=59456 DPT=9101 SEQ=1298029085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E8F460000000001030307)
Dec 5 04:17:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17966 DF PROTO=TCP SPT=53328 DPT=9882 SEQ=3528879603 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3E9C450000000001030307)
Dec 5 04:17:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31213 DF PROTO=TCP SPT=59456 DPT=9101 SEQ=1298029085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3EA7050000000001030307)
Dec 5 04:17:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19969 DF PROTO=TCP SPT=41812 DPT=9100 SEQ=1436277241 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3EB4460000000001030307)
Dec 5 04:18:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50421 DF PROTO=TCP SPT=49732 DPT=9105 SEQ=4127337306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3EC5860000000001030307)
Dec 5 04:18:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6245 DF PROTO=TCP SPT=44172 DPT=9882 SEQ=1431738334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3ECA450000000001030307)
Dec 5 04:18:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54361 DF PROTO=TCP SPT=58826 DPT=9102 SEQ=160629009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3EE0A70000000001030307)
Dec 5 04:18:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54362 DF PROTO=TCP SPT=58826 DPT=9102 SEQ=160629009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3EE4C50000000001030307)
Dec 5 04:18:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36975 DF PROTO=TCP SPT=56692 DPT=9100 SEQ=2609940752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3EEC870000000001030307)
Dec 5 04:18:15 localhost kernel: SELinux: Converting 2754 SID table entries...
Dec 5 04:18:15 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 5 04:18:15 localhost kernel: SELinux: policy capability open_perms=1
Dec 5 04:18:15 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 5 04:18:15 localhost kernel: SELinux: policy capability always_check_network=0
Dec 5 04:18:15 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 5 04:18:15 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 5 04:18:15 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 5 04:18:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36977 DF PROTO=TCP SPT=56692 DPT=9100 SEQ=2609940752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3EF8850000000001030307)
Dec 5 04:18:19 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=17 res=1
Dec 5 04:18:19 localhost python3.9[119288]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:18:20 localhost python3.9[119380]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:18:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55435 DF PROTO=TCP SPT=51490 DPT=9101 SEQ=2266144467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F04850000000001030307)
Dec 5 04:18:20 localhost python3.9[119453]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926299.9652977-426-140694793343471/.source.fact _original_basename=.jykclrvp follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:18:21 localhost python3.9[119543]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 04:18:23 localhost python3.9[119641]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 5 04:18:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23285 DF PROTO=TCP SPT=55090 DPT=9101 SEQ=883004918 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F10450000000001030307)
Dec 5 04:18:24 localhost python3.9[119695]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 5 04:18:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54365 DF PROTO=TCP SPT=58826 DPT=9102 SEQ=160629009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F1C450000000001030307)
Dec 5 04:18:27 localhost systemd[1]: Reloading.
Dec 5 04:18:27 localhost systemd-sysv-generator[119730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:18:27 localhost systemd-rc-local-generator[119727]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:18:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:18:27 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 5 04:18:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36979 DF PROTO=TCP SPT=56692 DPT=9100 SEQ=2609940752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F28450000000001030307)
Dec 5 04:18:29 localhost python3.9[119835]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:18:31 localhost python3.9[120074]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 5 04:18:32 localhost python3.9[120166]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 5 04:18:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47445 DF PROTO=TCP SPT=37516 DPT=9105 SEQ=1329362324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F3AC50000000001030307)
Dec 5 04:18:34 localhost python3.9[120259]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:18:35 localhost python3.9[120351]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 5 04:18:37 localhost python3.9[120443]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 5 04:18:37 localhost python3.9[120535]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:18:38 localhost python3.9[120608]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926317.3926883-750-18844403772274/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:18:39 localhost python3.9[120700]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 5 04:18:41 localhost python3.9[120794]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 5 04:18:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5770 DF PROTO=TCP SPT=45330 DPT=9102 SEQ=773664454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F55D60000000001030307)
Dec 5 04:18:41 localhost python3.9[120887]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 5 04:18:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5771 DF PROTO=TCP SPT=45330 DPT=9102 SEQ=773664454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F59C50000000001030307)
Dec 5 04:18:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47446 DF PROTO=TCP SPT=37516 DPT=9105 SEQ=1329362324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F5A450000000001030307)
Dec 5 04:18:42 localhost python3.9[120980]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 5 04:18:43 localhost python3.9[121078]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 5 04:18:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23440 DF PROTO=TCP SPT=43064 DPT=9100 SEQ=2968192408 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F61B60000000001030307)
Dec 5 04:18:44 localhost python3.9[121170]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 5 04:18:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23442 DF PROTO=TCP SPT=43064 DPT=9100 SEQ=2968192408 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F6DC50000000001030307)
Dec 5 04:18:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26038 DF PROTO=TCP SPT=53866 DPT=9101 SEQ=3199194831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F79C60000000001030307)
Dec 5 04:18:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31216 DF PROTO=TCP SPT=59456 DPT=9101 SEQ=1298029085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F86460000000001030307)
Dec 5 04:18:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26040 DF PROTO=TCP SPT=53866 DPT=9101 SEQ=3199194831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F91850000000001030307)
Dec 5 04:18:56 localhost python3.9[121265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 5 04:18:58 localhost python3.9[121357]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:18:59 localhost python3.9[121430]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926338.0631502-1023-233334341948723/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 5 04:18:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23444 DF PROTO=TCP SPT=43064 DPT=9100 SEQ=2968192408 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3F9E470000000001030307)
Dec 5 04:19:01 localhost python3.9[121560]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 5 04:19:01 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 5 04:19:01 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 5 04:19:01 localhost systemd[1]: Stopping Load Kernel Modules...
Dec 5 04:19:01 localhost systemd[1]: Starting Load Kernel Modules...
Dec 5 04:19:01 localhost systemd-modules-load[121587]: Module 'msr' is built in
Dec 5 04:19:01 localhost systemd[1]: Finished Load Kernel Modules.
Dec 5 04:19:02 localhost python3.9[121681]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:19:02 localhost python3.9[121769]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926341.7902813-1092-55605341436686/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 5 04:19:03 localhost python3.9[121861]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 5 04:19:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55027 DF PROTO=TCP SPT=58588 DPT=9105 SEQ=2035134679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3FB0060000000001030307)
Dec 5 04:19:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12939 DF PROTO=TCP SPT=49004 DPT=9882 SEQ=2035379767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3FB4460000000001030307)
Dec 5 04:19:08 localhost python3.9[121953]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 5 04:19:08 localhost python3.9[122045]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 5 04:19:09 localhost python3.9[122135]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 5 04:19:10 localhost python3.9[122227]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:19:10 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 5 04:19:10 localhost systemd[1]: tuned.service: Deactivated successfully.
Dec 5 04:19:10 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 5 04:19:10 localhost systemd[1]: tuned.service: Consumed 1.831s CPU time, no IO.
Dec 5 04:19:10 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 5 04:19:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31935 DF PROTO=TCP SPT=42184 DPT=9102 SEQ=2341578399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3FCB060000000001030307)
Dec 5 04:19:11 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Dec 5 04:19:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31936 DF PROTO=TCP SPT=42184 DPT=9102 SEQ=2341578399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3FCF060000000001030307)
Dec 5 04:19:12 localhost python3.9[122329]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 5 04:19:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38146 DF PROTO=TCP SPT=54562 DPT=9100 SEQ=2543580328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3FD6E70000000001030307)
Dec 5 04:19:16 localhost python3.9[122421]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:19:16 localhost systemd[1]: Reloading.
Dec 5 04:19:16 localhost systemd-sysv-generator[122451]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:19:16 localhost systemd-rc-local-generator[122448]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:19:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:19:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38148 DF PROTO=TCP SPT=54562 DPT=9100 SEQ=2543580328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3FE3050000000001030307)
Dec 5 04:19:17 localhost python3.9[122551]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:19:17 localhost systemd[1]: Reloading.
Dec 5 04:19:17 localhost systemd-rc-local-generator[122576]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:19:17 localhost systemd-sysv-generator[122582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:19:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:19:18 localhost python3.9[122681]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:19:19 localhost python3.9[122774]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:19:19 localhost kernel: Adding 1048572k swap on /swap. Priority:-2 extents:1 across:1048572k FS
Dec 5 04:19:20 localhost python3.9[122867]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:19:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55827 DF PROTO=TCP SPT=37770 DPT=9101 SEQ=1022776160 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3FEEC60000000001030307)
Dec 5 04:19:21 localhost python3.9[122966]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:19:22 localhost python3.9[123059]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 5 04:19:22 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 5 04:19:22 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 5 04:19:22 localhost systemd[1]: Stopping Apply Kernel Variables...
Dec 5 04:19:22 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 5 04:19:22 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 5 04:19:22 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 5 04:19:23 localhost systemd[1]: session-38.scope: Deactivated successfully.
Dec 5 04:19:23 localhost systemd[1]: session-38.scope: Consumed 1min 59.276s CPU time.
Dec 5 04:19:23 localhost systemd-logind[760]: Session 38 logged out. Waiting for processes to exit.
Dec 5 04:19:23 localhost systemd-logind[760]: Removed session 38.
Dec 5 04:19:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55440 DF PROTO=TCP SPT=51490 DPT=9101 SEQ=2266144467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC3FFA460000000001030307)
Dec 5 04:19:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31939 DF PROTO=TCP SPT=42184 DPT=9102 SEQ=2341578399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4006450000000001030307)
Dec 5 04:19:28 localhost sshd[123080]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:19:28 localhost systemd-logind[760]: New session 39 of user zuul.
Dec 5 04:19:28 localhost systemd[1]: Started Session 39 of User zuul.
Dec 5 04:19:29 localhost python3.9[123173]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 04:19:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38150 DF PROTO=TCP SPT=54562 DPT=9100 SEQ=2543580328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4012450000000001030307)
Dec 5 04:19:30 localhost python3.9[123267]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 04:19:31 localhost python3.9[123363]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:19:32 localhost python3.9[123454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 04:19:34 localhost python3.9[123550]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 5 04:19:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17230 DF PROTO=TCP SPT=58168 DPT=9105 SEQ=2470268623 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4025050000000001030307)
Dec 5 04:19:34 localhost python3.9[123604]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 5 04:19:38 localhost python3.9[123698]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 5 04:19:40 localhost python3.9[123853]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:19:40 localhost python3.9[123945]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:19:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61763 DF PROTO=TCP SPT=53750 DPT=9102 SEQ=1997063273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4040360000000001030307)
Dec 5 04:19:41 localhost python3.9[124050]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:19:42 localhost python3.9[124098]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:19:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61764 DF PROTO=TCP SPT=53750 DPT=9102 SEQ=1997063273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4044460000000001030307)
Dec 5 04:19:42 localhost python3.9[124190]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:19:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31940 DF PROTO=TCP SPT=42184 DPT=9102 SEQ=2341578399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4046460000000001030307)
Dec 5 04:19:43 localhost python3.9[124263]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926382.2119234-323-185878993030192/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 5 04:19:44 localhost python3.9[124355]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 5 04:19:44 localhost systemd-journald[47252]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Dec 5 04:19:44 localhost systemd-journald[47252]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 5 04:19:44 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 5 04:19:44 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 5 04:19:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42725 DF PROTO=TCP SPT=33628 DPT=9100 SEQ=594911543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC404C160000000001030307)
Dec 5 04:19:44 localhost python3.9[124448]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 5 04:19:45 localhost python3.9[124540]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 5 04:19:45 localhost python3.9[124632]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 5 04:19:46 localhost python3.9[124722]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 04:19:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42727 DF PROTO=TCP SPT=33628 DPT=9100 SEQ=594911543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4058050000000001030307)
Dec 5 04:19:47 localhost python3.9[124816]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 5 04:19:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7330 DF PROTO=TCP SPT=35634 DPT=9101 SEQ=3706761444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4064050000000001030307)
Dec 5 04:19:51 localhost python3.9[124910]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 5 04:19:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26043 DF PROTO=TCP SPT=53866 DPT=9101 SEQ=3199194831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4070460000000001030307)
Dec 5 04:19:53 localhost sshd[124913]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:19:55 localhost sshd[125007]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:19:55 localhost python3.9[125006]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 5 04:19:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7332 DF PROTO=TCP SPT=35634 DPT=9101 SEQ=3706761444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC407BC50000000001030307)
Dec 5 04:19:58 localhost sshd[125011]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:19:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42729 DF
PROTO=TCP SPT=33628 DPT=9100 SEQ=594911543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4088450000000001030307) Dec 5 04:20:00 localhost sshd[125111]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:20:00 localhost python3.9[125110]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 5 04:20:02 localhost sshd[125145]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:20:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27592 DF PROTO=TCP SPT=44114 DPT=9105 SEQ=3244140558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC409A450000000001030307) Dec 5 04:20:04 localhost python3.9[125284]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 5 04:20:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58282 DF PROTO=TCP SPT=58766 DPT=9882 SEQ=2809564676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC409E460000000001030307) Dec 5 04:20:08 localhost python3.9[125378]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 5 04:20:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63320 DF PROTO=TCP SPT=35670 DPT=9102 SEQ=118156074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC40B5660000000001030307) Dec 5 04:20:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63321 DF PROTO=TCP SPT=35670 DPT=9102 SEQ=118156074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC40B9850000000001030307) Dec 5 04:20:13 localhost python3.9[125472]: ansible-ansible.legacy.dnf Invoked with 
download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 5 04:20:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18217 DF PROTO=TCP SPT=58674 DPT=9100 SEQ=2737054861 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC40C1470000000001030307) Dec 5 04:20:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18219 DF PROTO=TCP SPT=58674 DPT=9100 SEQ=2737054861 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC40CD450000000001030307) Dec 5 04:20:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65226 DF PROTO=TCP SPT=38706 DPT=9101 SEQ=2031515384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC40D9450000000001030307) Dec 5 04:20:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44756 DF PROTO=TCP SPT=49542 DPT=9882 SEQ=2790075002 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC40E6450000000001030307) Dec 5 04:20:25 localhost python3.9[125639]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:20:26 localhost python3.9[125744]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:20:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65228 DF PROTO=TCP SPT=38706 DPT=9101 SEQ=2031515384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC40F1050000000001030307) Dec 5 04:20:26 localhost python3.9[125818]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764926425.8644798-722-144766352370923/.source.json _original_basename=.s930bj4i follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:20:27 localhost python3.9[125910]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 5 04:20:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18221 DF PROTO=TCP SPT=58674 DPT=9100 SEQ=2737054861 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC40FE460000000001030307) Dec 5 04:20:34 localhost podman[125922]: 2025-12-05 09:20:27.980637525 +0000 UTC m=+0.043491459 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Dec 5 04:20:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34747 DF PROTO=TCP SPT=60242 DPT=9105 SEQ=4250673448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC410F850000000001030307) Dec 5 04:20:35 localhost python3.9[126117]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 5 04:20:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15004 DF PROTO=TCP SPT=57816 DPT=9882 SEQ=3342705977 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4114460000000001030307) Dec 5 04:20:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=246 DF PROTO=TCP SPT=49596 DPT=9102 SEQ=698164077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC412A960000000001030307) Dec 5 04:20:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=247 DF PROTO=TCP SPT=49596 DPT=9102 SEQ=698164077 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080AEC412E860000000001030307) Dec 5 04:20:42 localhost podman[126131]: 2025-12-05 09:20:35.281368474 +0000 UTC m=+0.029780900 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 5 04:20:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2639 DF PROTO=TCP SPT=55444 DPT=9100 SEQ=1162093672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4136770000000001030307) Dec 5 04:20:44 localhost python3.9[126334]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 5 04:20:46 localhost podman[126348]: 2025-12-05 09:20:44.576074179 +0000 UTC m=+0.045708956 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Dec 5 04:20:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2641 DF PROTO=TCP SPT=55444 DPT=9100 SEQ=1162093672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4142850000000001030307) Dec 5 04:20:47 localhost python3.9[126509]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 5 04:20:48 localhost podman[126522]: 2025-12-05 09:20:47.506507531 +0000 UTC m=+0.047701598 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 04:20:49 localhost python3.9[126687]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 
'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 5 04:20:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3404 DF PROTO=TCP SPT=38968 DPT=9101 SEQ=3024769401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC414E850000000001030307) Dec 5 04:20:53 localhost podman[126699]: 2025-12-05 09:20:49.797769365 +0000 UTC m=+0.044530841 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Dec 5 04:20:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7335 DF PROTO=TCP SPT=35634 DPT=9101 SEQ=3706761444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC415A460000000001030307) Dec 5 04:20:53 localhost python3.9[126874]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 5 04:20:55 localhost podman[126887]: 2025-12-05 09:20:54.04744324 +0000 UTC m=+0.046439810 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Dec 5 04:20:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=250 DF PROTO=TCP SPT=49596 DPT=9102 SEQ=698164077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4166450000000001030307) Dec 5 04:20:56 localhost systemd[1]: session-39.scope: Deactivated successfully. Dec 5 04:20:56 localhost systemd[1]: session-39.scope: Consumed 1min 28.557s CPU time. Dec 5 04:20:56 localhost systemd-logind[760]: Session 39 logged out. Waiting for processes to exit. Dec 5 04:20:56 localhost systemd-logind[760]: Removed session 39. Dec 5 04:20:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2643 DF PROTO=TCP SPT=55444 DPT=9100 SEQ=1162093672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4172450000000001030307) Dec 5 04:21:03 localhost sshd[126997]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:21:03 localhost systemd-logind[760]: New session 40 of user zuul. Dec 5 04:21:03 localhost systemd[1]: Started Session 40 of User zuul. 
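The containers.podman.podman_image invocations above pre-pull one image per service (ovn-controller, neutron-metadata-agent-ovn, neutron-sriov-agent, neutron-dhcp-agent, ceilometer-compute, node-exporter). A task that would produce such an invocation looks roughly like the sketch below; the image reference and auth_file come straight from the log, while the task name and surrounding play are assumed:

    - name: Pre-pull the ovn-controller image   # hypothetical task name
      containers.podman.podman_image:
        name: quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
        tag: latest
        pull: true
        state: present
        auth_file: /root/.config/containers/auth.json

Note that the node-exporter pull pins its image by digest (the name contains @sha256:...); the tag=latest in that invocation is just the module's default and does not select a different image once a digest is given.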
Dec 5 04:21:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31404 DF PROTO=TCP SPT=42016 DPT=9105 SEQ=2270154951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4184C50000000001030307) Dec 5 04:21:05 localhost python3.9[127249]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:21:06 localhost python3.9[127456]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None Dec 5 04:21:07 localhost python3.9[127549]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 5 04:21:08 localhost python3.9[127603]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 5 04:21:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1719 DF PROTO=TCP SPT=50490 DPT=9102 SEQ=247471491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC419FC60000000001030307) Dec 5 04:21:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1720 DF PROTO=TCP SPT=50490 DPT=9102 SEQ=247471491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC41A3C60000000001030307) Dec 5 04:21:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31405 DF PROTO=TCP SPT=42016 DPT=9105 SEQ=2270154951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC41A4450000000001030307) Dec 5 04:21:13 localhost python3.9[127794]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:21:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57002 DF PROTO=TCP SPT=42886 DPT=9100 SEQ=1588534579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC41ABA70000000001030307) Dec 5 04:21:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57004 DF PROTO=TCP SPT=42886 DPT=9100 SEQ=1588534579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC41B7C50000000001030307) Dec 5 04:21:18 localhost python3.9[128095]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 5 04:21:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38676 DF PROTO=TCP SPT=56062 DPT=9101 SEQ=2988710280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC41C3850000000001030307) Dec 5 04:21:20 localhost python3.9[128188]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:21:21 localhost python3.9[128280]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None Dec 5 04:21:23 localhost kernel: SELinux: Converting 2756 SID table entries... Dec 5 04:21:23 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 04:21:23 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 04:21:23 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 04:21:23 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 04:21:23 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 04:21:23 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 04:21:23 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 04:21:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65231 DF PROTO=TCP SPT=38706 DPT=9101 SEQ=2031515384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC41D0450000000001030307) Dec 5 04:21:24 localhost python3.9[128421]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:21:25 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=18 res=1 Dec 5 04:21:25 localhost python3.9[128519]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:21:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38678 DF PROTO=TCP SPT=56062 
DPT=9101 SEQ=2988710280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC41DB460000000001030307) Dec 5 04:21:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57006 DF PROTO=TCP SPT=42886 DPT=9100 SEQ=1588534579 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC41E8450000000001030307) Dec 5 04:21:30 localhost python3.9[128613]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:21:31 localhost python3.9[128858]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 5 04:21:32 localhost python3.9[128948]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:21:33 localhost python3.9[129042]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:21:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2490 DF PROTO=TCP SPT=49848 DPT=9105 SEQ=4045418407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC41F9C50000000001030307) Dec 5 04:21:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54544 DF PROTO=TCP SPT=49098 DPT=9882 SEQ=95323155 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC41FE450000000001030307) Dec 5 04:21:37 localhost python3.9[129136]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:21:41 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49710 DF PROTO=TCP SPT=35020 DPT=9102 SEQ=439973800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4214F60000000001030307) Dec 5 04:21:41 localhost python3.9[129230]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 5 04:21:41 localhost systemd[1]: Reloading. Dec 5 04:21:41 localhost systemd-rc-local-generator[129261]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:21:41 localhost systemd-sysv-generator[129264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:21:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:21:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49711 DF PROTO=TCP SPT=35020 DPT=9102 SEQ=439973800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4219050000000001030307) Dec 5 04:21:42 localhost python3.9[129362]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:21:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63768 DF PROTO=TCP SPT=47080 DPT=9100 SEQ=498170819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4220D70000000001030307) Dec 5 04:21:44 localhost python3.9[129454]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:21:45 localhost python3.9[129548]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:21:46 localhost python3.9[129640]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:21:47 localhost 
python3.9[129732]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:21:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63770 DF PROTO=TCP SPT=47080 DPT=9100 SEQ=498170819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC422CC50000000001030307) Dec 5 04:21:47 localhost python3.9[129805]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926506.6332695-563-90199231165515/.source _original_basename=.z1y9cv2_ follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:21:48 localhost python3.9[129897]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:21:49 localhost python3.9[129989]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={} Dec 5 04:21:49 localhost python3.9[130081]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:21:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56628 DF PROTO=TCP SPT=37634 DPT=9101 SEQ=3051834956 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4238C50000000001030307) Dec 5 04:21:50 localhost python3.9[130173]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:21:51 localhost python3.9[130246]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926510.2531831-689-74305719803954/.source.yaml _original_basename=.ezb02ex_ follow=False checksum=0cadac3cfc033a4e07cfac59b43f6459e787700a force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:21:52 localhost python3.9[130338]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml Dec 5 04:21:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3409 DF PROTO=TCP SPT=38968 DPT=9101 SEQ=3024769401 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4244460000000001030307) Dec 5 04:21:53 localhost ansible-async_wrapper.py[130443]: Invoked with j890064740524 300 /home/zuul/.ansible/tmp/ansible-tmp-1764926512.6081612-761-13187477425167/AnsiballZ_edpm_os_net_config.py _ Dec 5 04:21:53 localhost ansible-async_wrapper.py[130446]: Starting module and watcher Dec 5 04:21:53 localhost ansible-async_wrapper.py[130446]: Start watching 130447 (300) Dec 5 04:21:53 localhost ansible-async_wrapper.py[130447]: Start module (130447) Dec 5 04:21:53 localhost ansible-async_wrapper.py[130443]: Return async_wrapper task started. Dec 5 04:21:53 localhost python3.9[130448]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False Dec 5 04:21:54 localhost ansible-async_wrapper.py[130447]: Module complete (130447) Dec 5 04:21:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49714 DF PROTO=TCP SPT=35020 DPT=9102 SEQ=439973800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4250450000000001030307) Dec 5 04:21:57 localhost python3.9[130540]: ansible-ansible.legacy.async_status Invoked with jid=j890064740524.130443 mode=status _async_dir=/root/.ansible_async Dec 5 04:21:57 localhost python3.9[130599]: ansible-ansible.legacy.async_status Invoked with jid=j890064740524.130443 mode=cleanup _async_dir=/root/.ansible_async Dec 5 04:21:58 localhost python3.9[130691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:21:58 localhost ansible-async_wrapper.py[130446]: Done in kid B. 
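The os-net-config apply above runs through ansible-async_wrapper.py: the module is started in the background with a 300-second timeout, then the controller polls it via async_status (mode=status) and finally removes the job file (mode=cleanup). A minimal sketch of that pattern, using the module's short name as it appears in the log (the collection-qualified name is not shown) and illustrative retry values:

    - name: Apply network configuration in the background
      edpm_os_net_config:                # short name as logged; real FQCN assumed elsewhere
        config_file: /etc/os-net-config/config.yaml
        debug: true
        detailed_exit_codes: true
        use_nmstate: false
      async: 300                         # matches the 300 passed to async_wrapper
      poll: 0
      register: net_config_job

    - name: Wait for the network configuration job to finish
      ansible.builtin.async_status:
        jid: "{{ net_config_job.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: 30                        # illustrative; the log only shows the status and cleanup calls
      delay: 10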
Dec 5 04:21:58 localhost python3.9[130764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926517.9919639-827-135512698214245/.source.returncode _original_basename=.7wuf2u1l follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:21:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63772 DF PROTO=TCP SPT=47080 DPT=9100 SEQ=498170819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC425C450000000001030307) Dec 5 04:21:59 localhost python3.9[130856]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:22:00 localhost python3.9[130929]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926519.2762687-875-177514444009032/.source.cfg _original_basename=.vq_icwon follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:22:01 localhost python3.9[131021]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:22:01 localhost systemd[1]: Reloading Network Manager... Dec 5 04:22:01 localhost NetworkManager[5960]: [1764926521.1258] audit: op="reload" arg="0" pid=131025 uid=0 result="success" Dec 5 04:22:01 localhost NetworkManager[5960]: [1764926521.1270] config: signal: SIGHUP (no changes from disk) Dec 5 04:22:01 localhost systemd[1]: Reloaded Network Manager. Dec 5 04:22:01 localhost systemd[1]: session-40.scope: Deactivated successfully. Dec 5 04:22:01 localhost systemd[1]: session-40.scope: Consumed 34.763s CPU time. Dec 5 04:22:01 localhost systemd-logind[760]: Session 40 logged out. Waiting for processes to exit. Dec 5 04:22:01 localhost systemd-logind[760]: Removed session 40. 
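Earlier in this session (04:21:44 to 04:21:46), three community.general.ini_file edits pin NetworkManager's behavior, and the reload just above makes them take effect. One of those tasks, reconstructed from its logged parameters (task name assumed), with the resulting [main] keys summarized in a comment:

    - name: Keep NetworkManager away from resolv.conf   # hypothetical task name
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: dns
        value: none
        no_extra_spaces: true
        backup: true
        mode: "0644"

    # After all three edits, the [main] section of NetworkManager.conf holds:
    #   no-auto-default=*
    #   dns=none
    #   rc-manager=unmanaged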
Dec 5 04:22:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54010 DF PROTO=TCP SPT=50260 DPT=9105 SEQ=3313294963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC426F050000000001030307)
Dec 5 04:22:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 04:22:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111]
  ** DB Stats **
  Uptime(secs): 5400.1 total, 600.0 interval
  Cumulative writes: 4751 writes, 21K keys, 4751 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
  Cumulative WAL: 4751 writes, 573 syncs, 8.29 writes per sync, written: 0.02 GB, 0.00 MB/s
  Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
  Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
  Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
  Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 04:22:08 localhost sshd[131040]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:22:08 localhost systemd-logind[760]: New session 41 of user zuul.
Dec 5 04:22:08 localhost systemd[1]: Started Session 41 of User zuul.
Dec 5 04:22:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 04:22:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111]
  ** DB Stats **
  Uptime(secs): 5400.2 total, 600.0 interval
  Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
  Cumulative WAL: 5843 writes, 832 syncs, 7.02 writes per sync, written: 0.02 GB, 0.00 MB/s
  Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
  Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
  Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
  Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 04:22:09 localhost python3.9[131133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 04:22:10 localhost python3.9[131289]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 5 04:22:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44853 DF PROTO=TCP SPT=54594 DPT=9102 SEQ=3719349014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC428A260000000001030307)
Dec 5 04:22:11 localhost python3.9[131442]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:22:12 localhost systemd[1]: session-41.scope: Deactivated successfully.
Dec 5 04:22:12 localhost systemd[1]: session-41.scope: Consumed 2.094s CPU time.
Dec 5 04:22:12 localhost systemd-logind[760]: Session 41 logged out. Waiting for processes to exit.
Dec 5 04:22:12 localhost systemd-logind[760]: Removed session 41.
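The recurring ansible.builtin.setup invocations in this log never gather the full fact set; they negate everything and re-enable only the subset a play needs. The two variants visible above, reconstructed from their logged parameters (task names assumed):

    - name: Gather only local custom facts
      ansible.builtin.setup:
        gather_subset:
          - "!all"
          - "!min"
          - local
        fact_path: /etc/ansible/facts.d
        gather_timeout: 10

    - name: Gather only the default IPv4 facts
      ansible.builtin.setup:
        gather_subset:
          - "!all"
          - "!min"
          - network
        filter:
          - ansible_default_ipv4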
Dec 5 04:22:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44854 DF PROTO=TCP SPT=54594 DPT=9102 SEQ=3719349014 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC428E450000000001030307) Dec 5 04:22:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49715 DF PROTO=TCP SPT=35020 DPT=9102 SEQ=439973800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4290450000000001030307) Dec 5 04:22:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15092 DF PROTO=TCP SPT=39782 DPT=9100 SEQ=2729621557 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4296070000000001030307) Dec 5 04:22:16 localhost sshd[131473]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:22:17 localhost systemd-logind[760]: New session 42 of user zuul. Dec 5 04:22:17 localhost systemd[1]: Started Session 42 of User zuul. Dec 5 04:22:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15094 DF PROTO=TCP SPT=39782 DPT=9100 SEQ=2729621557 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC42A2050000000001030307) Dec 5 04:22:18 localhost python3.9[131566]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:22:18 localhost python3.9[131660]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:22:20 localhost python3.9[131756]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 5 04:22:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36991 DF PROTO=TCP SPT=55554 DPT=9101 SEQ=1495821218 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC42AE050000000001030307) Dec 5 04:22:20 localhost python3.9[131810]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:22:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38681 DF PROTO=TCP SPT=56062 DPT=9101 SEQ=2988710280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC42BA450000000001030307) Dec 5 04:22:25 localhost systemd[1]: Starting dnf makecache... Dec 5 04:22:25 localhost dnf[131905]: Updating Subscription Management repositories. 
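Package handling in this log is two-phase: one pass invokes dnf with download_only=True for each package group (04:19:47 through 04:20:13), and a later pass repeats the same names with state=present, so installation proceeds from an already-populated cache. A sketch of the pairing, using the names from the 04:19:55 and 04:22:20 invocations (only podman's install phase is visible in this section):

    - name: Pre-download container tooling without installing
      ansible.builtin.dnf:
        name:
          - podman
          - buildah
        download_only: true

    - name: Install from the warmed cache
      ansible.builtin.dnf:
        name:
          - podman
        state: present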
Dec 5 04:22:25 localhost python3.9[131904]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 5 04:22:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36993 DF PROTO=TCP SPT=55554 DPT=9101 SEQ=1495821218 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC42C5C50000000001030307) Dec 5 04:22:26 localhost python3.9[132060]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:22:27 localhost dnf[131905]: Metadata cache refreshed recently. Dec 5 04:22:27 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Dec 5 04:22:27 localhost systemd[1]: Finished dnf makecache. Dec 5 04:22:27 localhost systemd[1]: dnf-makecache.service: Consumed 1.954s CPU time. Dec 5 04:22:27 localhost python3.9[132152]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:22:28 localhost python3.9[132256]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:22:28 localhost python3.9[132304]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:22:29 localhost python3.9[132396]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:22:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15096 DF PROTO=TCP SPT=39782 DPT=9100 SEQ=2729621557 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC42D2460000000001030307) Dec 5 04:22:29 localhost python3.9[132444]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None 
attributes=None Dec 5 04:22:30 localhost python3.9[132536]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 5 04:22:31 localhost python3.9[132628]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 5 04:22:31 localhost python3.9[132720]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 5 04:22:32 localhost python3.9[132812]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 5 04:22:33 localhost python3.9[132904]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:22:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37849 DF PROTO=TCP SPT=39804 DPT=9105 SEQ=1363267862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC42E4450000000001030307) Dec 5 04:22:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18812 DF PROTO=TCP SPT=53720 DPT=9882 SEQ=2765134627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC42E8450000000001030307) Dec 5 04:22:37 localhost python3.9[132998]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:22:38 localhost python3.9[133092]: 
ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:22:39 localhost python3.9[133184]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:22:39 localhost python3.9[133276]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:22:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20220 DF PROTO=TCP SPT=45150 DPT=9102 SEQ=2763081686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC42FF570000000001030307) Dec 5 04:22:41 localhost python3.9[133369]: ansible-service_facts Invoked Dec 5 04:22:41 localhost network[133386]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 5 04:22:41 localhost network[133387]: 'network-scripts' will be removed from distribution in near future. Dec 5 04:22:41 localhost network[133388]: It is advised to switch to 'NetworkManager' instead for network management. Dec 5 04:22:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20221 DF PROTO=TCP SPT=45150 DPT=9102 SEQ=2763081686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4303460000000001030307) Dec 5 04:22:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
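The four community.general.ini_file tasks above (pids_limit, events_logger, runtime, network_backend) converge /etc/containers/containers.conf to a handful of keys. A sketch of the equivalent end state, written with Python's configparser against a scratch path, assuming the file starts out empty; the quoting mirrors the values shown in the log:

    import configparser

    cfg = configparser.ConfigParser()
    cfg["containers"] = {"pids_limit": "4096"}
    cfg["engine"] = {"events_logger": '"journald"', "runtime": '"crun"'}
    cfg["network"] = {"network_backend": '"netavark"'}

    # Scratch path; the playbook itself targets /etc/containers/containers.conf.
    with open("/tmp/containers.conf.sample", "w") as fh:
        cfg.write(fh)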
Dec 5 04:22:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22741 DF PROTO=TCP SPT=42752 DPT=9100 SEQ=1870293865 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC430B370000000001030307) Dec 5 04:22:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22743 DF PROTO=TCP SPT=42752 DPT=9100 SEQ=1870293865 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4317450000000001030307) Dec 5 04:22:48 localhost python3.9[133709]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:22:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37388 DF PROTO=TCP SPT=46322 DPT=9101 SEQ=117836766 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4323450000000001030307) Dec 5 04:22:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56633 DF PROTO=TCP SPT=37634 DPT=9101 SEQ=3051834956 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC432E450000000001030307) Dec 5 04:22:53 localhost python3.9[133803]: ansible-package_facts Invoked with manager=['auto'] strategy=first Dec 5 04:22:55 localhost python3.9[133895]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:22:56 localhost python3.9[133970]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926575.1418266-656-275374529236169/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:22:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37390 DF PROTO=TCP SPT=46322 DPT=9101 SEQ=117836766 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC433B050000000001030307) Dec 5 04:22:57 localhost python3.9[134064]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:22:57 localhost python3.9[134139]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764926576.5813203-701-268411909626610/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:22:59 localhost python3.9[134233]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:22:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22745 DF PROTO=TCP SPT=42752 DPT=9100 SEQ=1870293865 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4348450000000001030307)
Dec 5 04:23:00 localhost python3.9[134327]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 5 04:23:01 localhost python3.9[134381]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:23:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14215 DF PROTO=TCP SPT=60556 DPT=9105 SEQ=82835992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4359860000000001030307)
Dec 5 04:23:04 localhost python3.9[134475]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 5 04:23:05 localhost python3.9[134529]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 5 04:23:05 localhost chronyd[25851]: chronyd exiting
Dec 5 04:23:05 localhost systemd[1]: Stopping NTP client/server...
Dec 5 04:23:05 localhost systemd[1]: chronyd.service: Deactivated successfully.
Dec 5 04:23:05 localhost systemd[1]: Stopped NTP client/server.
Dec 5 04:23:05 localhost systemd[1]: Starting NTP client/server...
Dec 5 04:23:05 localhost chronyd[134537]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 5 04:23:05 localhost chronyd[134537]: Frequency -30.251 +/- 0.201 ppm read from /var/lib/chrony/drift
Dec 5 04:23:05 localhost chronyd[134537]: Loaded seccomp filter (level 2)
Dec 5 04:23:05 localhost systemd[1]: Started NTP client/server.
Dec 5 04:23:06 localhost systemd-logind[760]: Session 42 logged out. Waiting for processes to exit.
Dec 5 04:23:06 localhost systemd[1]: session-42.scope: Deactivated successfully.
Dec 5 04:23:06 localhost systemd[1]: session-42.scope: Consumed 27.351s CPU time.
Dec 5 04:23:06 localhost systemd-logind[760]: Removed session 42.
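The ansible-lineinfile task above pins PEERNTP=no in /etc/sysconfig/network, which on RHEL-family hosts keeps DHCP-supplied NTP servers out of chronyd's sources; chronyd then restarts cleanly and reloads its stored frequency from /var/lib/chrony/drift. The replace-or-append semantics of that task (regexp=^PEERNTP=, create=True) boil down to this sketch (scratch path and function name are mine):

    import re
    from pathlib import Path

    def pin_peerntp(path: str = "/tmp/network.sample") -> None:
        # lineinfile semantics: rewrite any existing ^PEERNTP= line,
        # otherwise append one; create the file if missing (create=True).
        p = Path(path)
        lines = p.read_text().splitlines() if p.exists() else []
        pat = re.compile(r"^PEERNTP=")
        if any(pat.match(line) for line in lines):
            lines = ["PEERNTP=no" if pat.match(line) else line for line in lines]
        else:
            lines.append("PEERNTP=no")
        p.write_text("\n".join(lines) + "\n")

    pin_peerntp()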
Dec 5 04:23:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37312 DF PROTO=TCP SPT=57014 DPT=9102 SEQ=4220049499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4374870000000001030307) Dec 5 04:23:11 localhost sshd[134554]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:23:11 localhost systemd-logind[760]: New session 43 of user zuul. Dec 5 04:23:11 localhost systemd[1]: Started Session 43 of User zuul. Dec 5 04:23:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37313 DF PROTO=TCP SPT=57014 DPT=9102 SEQ=4220049499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4378860000000001030307) Dec 5 04:23:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14216 DF PROTO=TCP SPT=60556 DPT=9105 SEQ=82835992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC437A450000000001030307) Dec 5 04:23:13 localhost python3.9[134647]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:23:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9355 DF PROTO=TCP SPT=38338 DPT=9100 SEQ=2413861233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4380660000000001030307) Dec 5 04:23:14 localhost python3.9[134835]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:14 localhost podman[134891]: Dec 5 04:23:14 localhost podman[134891]: 2025-12-05 09:23:14.827860426 +0000 UTC m=+0.075571124 container create c49ba3ad28c68d1430575563b28cb06a51d2e51f7a4764b1bdd298f5d07c8b0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_elgamal, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:23:14 localhost systemd[1]: Started libpod-conmon-c49ba3ad28c68d1430575563b28cb06a51d2e51f7a4764b1bdd298f5d07c8b0b.scope. Dec 5 04:23:14 localhost systemd[1]: Started libcrun container. Dec 5 04:23:14 localhost podman[134891]: 2025-12-05 09:23:14.795201366 +0000 UTC m=+0.042912094 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:23:14 localhost podman[134891]: 2025-12-05 09:23:14.910069832 +0000 UTC m=+0.157780500 container init c49ba3ad28c68d1430575563b28cb06a51d2e51f7a4764b1bdd298f5d07c8b0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_elgamal, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, distribution-scope=public, version=7, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, ceph=True, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7) Dec 5 04:23:14 localhost podman[134891]: 2025-12-05 09:23:14.917997675 +0000 UTC m=+0.165708383 container start c49ba3ad28c68d1430575563b28cb06a51d2e51f7a4764b1bdd298f5d07c8b0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_elgamal, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, release=1763362218, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, name=rhceph) Dec 5 04:23:14 localhost podman[134891]: 2025-12-05 09:23:14.918310324 +0000 UTC m=+0.166021012 container attach c49ba3ad28c68d1430575563b28cb06a51d2e51f7a4764b1bdd298f5d07c8b0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_elgamal, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_BRANCH=main, vcs-type=git, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, 
GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True) Dec 5 04:23:14 localhost determined_elgamal[134906]: 167 167 Dec 5 04:23:14 localhost systemd[1]: libpod-c49ba3ad28c68d1430575563b28cb06a51d2e51f7a4764b1bdd298f5d07c8b0b.scope: Deactivated successfully. Dec 5 04:23:14 localhost podman[134891]: 2025-12-05 09:23:14.923334228 +0000 UTC m=+0.171044906 container died c49ba3ad28c68d1430575563b28cb06a51d2e51f7a4764b1bdd298f5d07c8b0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_elgamal, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-type=git, name=rhceph, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, RELEASE=main) Dec 5 04:23:15 localhost podman[134924]: 2025-12-05 09:23:15.01228682 +0000 UTC m=+0.079934987 container remove c49ba3ad28c68d1430575563b28cb06a51d2e51f7a4764b1bdd298f5d07c8b0b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_elgamal, RELEASE=main, ceph=True, GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, GIT_BRANCH=main, release=1763362218) Dec 5 04:23:15 localhost systemd[1]: libpod-conmon-c49ba3ad28c68d1430575563b28cb06a51d2e51f7a4764b1bdd298f5d07c8b0b.scope: Deactivated successfully. Dec 5 04:23:15 localhost podman[134977]: Dec 5 04:23:15 localhost podman[134977]: 2025-12-05 09:23:15.213515789 +0000 UTC m=+0.078802972 container create 1bec43192b7c0fb98dab9d9e8e85d45d78d8152e731b690d8ce3f6ef522f59fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_lumiere, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 5 04:23:15 localhost systemd[1]: Started libpod-conmon-1bec43192b7c0fb98dab9d9e8e85d45d78d8152e731b690d8ce3f6ef522f59fd.scope. Dec 5 04:23:15 localhost systemd[1]: Started libcrun container. 
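Each podman event above carries a wall-clock timestamp plus a monotonic offset (m=+...s since the podman process started). Using the offsets logged for the short-lived c49ba3ad... container, its create-to-start latency can be read straight off:

    import re

    events = {
        "create": "m=+0.075571124",
        "init":   "m=+0.157780500",
        "start":  "m=+0.165708383",
    }  # offsets copied from the c49ba3ad... events above

    def secs(tag: str) -> float:
        return float(re.search(r"m=\+([\d.]+)", tag).group(1))

    print(f"create->start: {secs(events['start']) - secs(events['create']):.3f}s")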
Dec 5 04:23:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a7256fa55101bd6072ad1b1b45a3fa9f4753bf45cf6230b7131f81d76fc6c3/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 04:23:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a7256fa55101bd6072ad1b1b45a3fa9f4753bf45cf6230b7131f81d76fc6c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 04:23:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a7256fa55101bd6072ad1b1b45a3fa9f4753bf45cf6230b7131f81d76fc6c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 04:23:15 localhost podman[134977]: 2025-12-05 09:23:15.277675643 +0000 UTC m=+0.142962826 container init 1bec43192b7c0fb98dab9d9e8e85d45d78d8152e731b690d8ce3f6ef522f59fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_lumiere, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, RELEASE=main, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 04:23:15 localhost podman[134977]: 2025-12-05 09:23:15.182282063 +0000 UTC m=+0.047569296 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:23:15 localhost podman[134977]: 2025-12-05 09:23:15.288418682 +0000 UTC m=+0.153705865 container start 1bec43192b7c0fb98dab9d9e8e85d45d78d8152e731b690d8ce3f6ef522f59fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_lumiere, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_CLEAN=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, version=7, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, description=Red Hat Ceph Storage 7) Dec 5 04:23:15 localhost 
podman[134977]: 2025-12-05 09:23:15.28867492 +0000 UTC m=+0.153962103 container attach 1bec43192b7c0fb98dab9d9e8e85d45d78d8152e731b690d8ce3f6ef522f59fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_lumiere, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 5 04:23:15 localhost python3.9[135043]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:23:15 localhost systemd[1]: var-lib-containers-storage-overlay-4b04162ca31509497aa4605efb7d9e36e5e3594e5ef0ef27b06e6beb4a69884a-merged.mount: Deactivated successfully.
Dec 5 04:23:15 localhost python3.9[135112]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.zmx92byu recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: [
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: {
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "available": false,
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "ceph_device": false,
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "device_id": "QEMU_DVD-ROM_QM00001",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "lsm_data": {},
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "lvs": [],
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "path": "/dev/sr0",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "rejected_reasons": [
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "Insufficient space (<5GB)",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "Has a FileSystem"
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: ],
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "sys_api": {
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "actuators": null,
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "device_nodes": "sr0",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "human_readable_size": "482.00 KB",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "id_bus": "ata",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "model": "QEMU DVD-ROM",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "nr_requests": "2",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "partitions": {},
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "path": "/dev/sr0",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "removable": "1",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "rev": "2.5+",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "ro": "0",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "rotational": "1",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "sas_address": "",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "sas_device_handle": "",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "scheduler_mode": "mq-deadline",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "sectors": 0,
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "sectorsize": "2048",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "size": 493568.0,
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "support_discard": "0",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "type": "disk",
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: "vendor": "QEMU"
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: }
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: }
Dec 5 04:23:16 localhost wonderful_lumiere[135020]: ]
Dec 5 04:23:16 localhost systemd[1]: libpod-1bec43192b7c0fb98dab9d9e8e85d45d78d8152e731b690d8ce3f6ef522f59fd.scope: Deactivated successfully.
Dec 5 04:23:16 localhost podman[134977]: 2025-12-05 09:23:16.062953278 +0000 UTC m=+0.928240451 container died 1bec43192b7c0fb98dab9d9e8e85d45d78d8152e731b690d8ce3f6ef522f59fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_lumiere, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , release=1763362218, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, io.buildah.version=1.41.4, GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 5 04:23:16 localhost systemd[1]: var-lib-containers-storage-overlay-d1a7256fa55101bd6072ad1b1b45a3fa9f4753bf45cf6230b7131f81d76fc6c3-merged.mount: Deactivated successfully.
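The wonderful_lumiere output above is a device-inventory report in JSON, emitted one JSON line per log entry: the only device, /dev/sr0, is rejected for insufficient space and an existing filesystem. A sketch of filtering such a report for usable disks (the JSON here is trimmed to the fields actually used):

    import json

    report = json.loads("""[
      {"available": false,
       "path": "/dev/sr0",
       "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"],
       "sys_api": {"size": 493568.0, "type": "disk"}}
    ]""")

    usable = [d["path"] for d in report if d["available"]]
    for d in report:
        if not d["available"]:
            print(d["path"], "rejected:", "; ".join(d["rejected_reasons"]))
    print("usable:", usable or "none")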
Dec 5 04:23:16 localhost podman[136431]: 2025-12-05 09:23:16.141955697 +0000 UTC m=+0.068734656 container remove 1bec43192b7c0fb98dab9d9e8e85d45d78d8152e731b690d8ce3f6ef522f59fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_lumiere, io.openshift.expose-services=, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, io.buildah.version=1.41.4, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 5 04:23:16 localhost systemd[1]: libpod-conmon-1bec43192b7c0fb98dab9d9e8e85d45d78d8152e731b690d8ce3f6ef522f59fd.scope: Deactivated successfully. Dec 5 04:23:17 localhost python3.9[136522]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:17 localhost auditd[725]: Audit daemon rotating log files Dec 5 04:23:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9357 DF PROTO=TCP SPT=38338 DPT=9100 SEQ=2413861233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC438C850000000001030307) Dec 5 04:23:17 localhost python3.9[136612]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926596.5560358-143-131171596412119/.source _original_basename=.xve06qnr follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:18 localhost python3.9[136704]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:23:19 localhost python3.9[136796]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:19 localhost python3.9[136869]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 
owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926598.6727488-215-259423771318220/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 5 04:23:20 localhost python3.9[136961]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63895 DF PROTO=TCP SPT=43862 DPT=9101 SEQ=917599165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4398460000000001030307) Dec 5 04:23:20 localhost python3.9[137034]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926599.7503326-215-92489511033579/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 5 04:23:21 localhost python3.9[137126]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:22 localhost python3.9[137218]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:22 localhost python3.9[137291]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926601.6163716-326-167778328134208/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:23 localhost python3.9[137383]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36996 DF PROTO=TCP SPT=55554 DPT=9101 SEQ=1495821218 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC43A4450000000001030307) Dec 5 04:23:23 
localhost python3.9[137456]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926602.7373188-371-237731132180336/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:25 localhost python3.9[137548]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:23:25 localhost systemd[1]: Reloading. Dec 5 04:23:25 localhost systemd-sysv-generator[137579]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:23:25 localhost systemd-rc-local-generator[137575]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:23:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:23:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63897 DF PROTO=TCP SPT=43862 DPT=9101 SEQ=917599165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC43B0050000000001030307) Dec 5 04:23:27 localhost systemd[1]: Reloading. Dec 5 04:23:27 localhost systemd-sysv-generator[137617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:23:27 localhost systemd-rc-local-generator[137614]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:23:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:23:27 localhost systemd[1]: Starting EDPM Container Shutdown... Dec 5 04:23:27 localhost systemd[1]: Finished EDPM Container Shutdown. 
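The .preset file installed above is what makes the subsequent daemon-reload-plus-enable succeed by policy; its payload is not shown in the log (a single "enable edpm-container-shutdown.service" line would be typical, but that is an assumption). A quick post-hoc check of the unit's enablement state:

    import subprocess

    def unit_state(name: str) -> str:
        # `systemctl is-enabled` exits non-zero for disabled units,
        # so capture output instead of relying on check=True.
        out = subprocess.run(["systemctl", "is-enabled", name],
                             capture_output=True, text=True)
        return out.stdout.strip() or out.stderr.strip()

    print(unit_state("edpm-container-shutdown.service"))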
Dec 5 04:23:28 localhost python3.9[137717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:28 localhost python3.9[137790]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926607.871322-440-95952268622926/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9359 DF PROTO=TCP SPT=38338 DPT=9100 SEQ=2413861233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC43BC450000000001030307) Dec 5 04:23:30 localhost python3.9[137882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:30 localhost python3.9[137955]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926609.0868711-485-103464315084649/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:32 localhost python3.9[138047]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:23:32 localhost systemd[1]: Reloading. Dec 5 04:23:32 localhost systemd-rc-local-generator[138074]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:23:32 localhost systemd-sysv-generator[138078]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:23:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:23:32 localhost systemd[1]: Starting Create netns directory... Dec 5 04:23:32 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 5 04:23:32 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 5 04:23:32 localhost systemd[1]: Finished Create netns directory. 
Dec 5 04:23:34 localhost python3.9[138179]: ansible-ansible.builtin.service_facts Invoked Dec 5 04:23:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6481 DF PROTO=TCP SPT=46718 DPT=9105 SEQ=2845667755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC43CE860000000001030307) Dec 5 04:23:34 localhost network[138196]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 5 04:23:34 localhost network[138197]: 'network-scripts' will be removed from distribution in near future. Dec 5 04:23:34 localhost network[138198]: It is advised to switch to 'NetworkManager' instead for network management. Dec 5 04:23:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:23:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34235 DF PROTO=TCP SPT=38232 DPT=9102 SEQ=3168542990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC43E9B60000000001030307) Dec 5 04:23:41 localhost python3.9[138399]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:41 localhost python3.9[138474]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926620.8955038-608-53158933378563/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34236 DF PROTO=TCP SPT=38232 DPT=9102 SEQ=3168542990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC43EDC50000000001030307) Dec 5 04:23:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6482 DF PROTO=TCP SPT=46718 DPT=9105 SEQ=2845667755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC43EE460000000001030307) Dec 5 04:23:42 localhost python3.9[138567]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:23:42 localhost systemd[1]: Reloading OpenSSH server daemon... Dec 5 04:23:42 localhost systemd[1]: Reloaded OpenSSH server daemon. 
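The sshd_config copy above runs with validate=/usr/sbin/sshd -T -f %s, so a broken candidate file is rejected before it replaces the live config, and the daemon is then only reloaded rather than restarted. The same guard done by hand (the candidate path is a stand-in):

    import subprocess

    def sshd_config_ok(candidate: str) -> bool:
        # `sshd -T -f FILE` fully parses and dumps the effective config;
        # a non-zero exit means the file would break the daemon.
        res = subprocess.run(["/usr/sbin/sshd", "-T", "-f", candidate],
                             capture_output=True, text=True)
        if res.returncode != 0:
            print(res.stderr.strip())
        return res.returncode == 0

    if sshd_config_ok("/tmp/sshd_config.candidate"):  # stand-in path
        print("safe to install")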
Dec 5 04:23:42 localhost sshd[118647]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:23:43 localhost python3.9[138663]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:44 localhost python3.9[138755]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16612 DF PROTO=TCP SPT=50162 DPT=9100 SEQ=1505004303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC43F5960000000001030307) Dec 5 04:23:44 localhost python3.9[138828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926623.6888292-701-166062479608688/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:45 localhost python3.9[138920]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Dec 5 04:23:45 localhost systemd[1]: Starting Time & Date Service... Dec 5 04:23:45 localhost systemd[1]: Started Time & Date Service. 
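The timezone task above pins the host to UTC through systemd's timedated, which is why "Time & Date Service" starts on demand right after it. Done manually, this is a single timedatectl call (needs privileges and a running timedated):

    import subprocess

    # Equivalent manual step to the community.general.timezone task above.
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
    print(subprocess.run(["timedatectl", "show", "-p", "Timezone"],
                         capture_output=True, text=True).stdout.strip())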
Dec 5 04:23:46 localhost python3.9[139016]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16614 DF PROTO=TCP SPT=50162 DPT=9100 SEQ=1505004303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4401860000000001030307) Dec 5 04:23:47 localhost python3.9[139108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:48 localhost python3.9[139181]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926627.1491904-806-111163275348516/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:49 localhost python3.9[139273]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:49 localhost python3.9[139346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926628.8417785-851-37424625967257/.source.yaml _original_basename=.1c9gbc37 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31161 DF PROTO=TCP SPT=33538 DPT=9101 SEQ=2443773020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC440D860000000001030307) Dec 5 04:23:50 localhost python3.9[139438]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:51 localhost python3.9[139513]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926630.0565288-896-108269160328165/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:51 localhost python3.9[139605]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:23:52 localhost python3.9[139698]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:23:53 localhost python3[139791]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Dec 5 04:23:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56185 DF PROTO=TCP SPT=51950 DPT=9882 SEQ=1703744464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC441A450000000001030307) Dec 5 04:23:54 localhost python3.9[139883]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:54 localhost python3.9[139956]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926633.587743-1013-214225119787746/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:55 localhost python3.9[140048]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:55 localhost python3.9[140121]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926634.9603825-1058-115246423015964/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34239 DF PROTO=TCP SPT=38232 DPT=9102 SEQ=3168542990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4426450000000001030307) Dec 5 04:23:57 localhost python3.9[140213]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:57 localhost python3.9[140286]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764926636.2290926-1103-273168569721464/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:23:59 localhost python3.9[140378]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:23:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16616 DF PROTO=TCP SPT=50162 DPT=9100 SEQ=1505004303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4432450000000001030307) Dec 5 04:23:59 localhost python3.9[140451]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926638.7489424-1148-174283972913054/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:24:00 localhost python3.9[140543]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:24:01 localhost python3.9[140616]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926640.0767603-1193-90722627035999/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:24:01 localhost python3.9[140708]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:24:02 localhost python3.9[140800]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:24:03 localhost python3.9[140895]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf 
validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:24:04 localhost python3.9[140988]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:24:04 localhost python3.9[141080]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:24:05 localhost python3.9[141172]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Dec 5 04:24:06 localhost python3.9[141265]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Dec 5 04:24:07 localhost systemd[1]: session-43.scope: Deactivated successfully. Dec 5 04:24:07 localhost systemd[1]: session-43.scope: Consumed 27.672s CPU time. Dec 5 04:24:07 localhost systemd-logind[760]: Session 43 logged out. Waiting for processes to exit. Dec 5 04:24:07 localhost systemd-logind[760]: Removed session 43. Dec 5 04:24:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50368 DF PROTO=TCP SPT=55624 DPT=9102 SEQ=3648186726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC445EE60000000001030307) Dec 5 04:24:12 localhost sshd[141281]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:24:12 localhost systemd-logind[760]: New session 44 of user zuul. Dec 5 04:24:12 localhost systemd[1]: Started Session 44 of User zuul. Dec 5 04:24:13 localhost python3.9[141376]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. 
suffix= path=None Dec 5 04:24:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52678 DF PROTO=TCP SPT=51702 DPT=9100 SEQ=2597430118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC446AC70000000001030307) Dec 5 04:24:14 localhost python3.9[141468]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:24:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37318 DF PROTO=TCP SPT=57014 DPT=9102 SEQ=4220049499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC446E460000000001030307) Dec 5 04:24:15 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Dec 5 04:24:16 localhost python3.9[141562]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts Dec 5 04:24:17 localhost python3.9[141657]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.wjt3lzko follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:24:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9361 DF PROTO=TCP SPT=38338 DPT=9100 SEQ=2413861233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC447A460000000001030307) Dec 5 04:24:18 localhost python3.9[141783]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.wjt3lzko mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926656.7967508-189-9032428699478/.source.wjt3lzko _original_basename=.u3mp39p6 follow=False checksum=d59cece82d3bbdcbe4933f17a77de42369988983 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:24:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64256 DF PROTO=TCP SPT=32822 DPT=9101 SEQ=3972320168 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC447EB20000000001030307) Dec 5 04:24:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45030 DF PROTO=TCP SPT=52498 DPT=9882 SEQ=2170644247 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4480AB0000000001030307) Dec 5 04:24:21 localhost python3.9[141902]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:24:22 localhost python3.9[141994]: ansible-ansible.builtin.blockinfile Invoked with block=np0005546418.localdomain,192.168.122.105,np0005546418* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQD9S5Z7rzST5j/fEC81CBzjbVnN/b1iPQZ35oKFbDVSZ3xrScwTjVDnymCRMpkG7ZjaGvyyMSy6sRwzcBVzWZGF94EKpFeYMdUdfpsK2dbevK8wHAAm7cfqUZ5sgTKGF4TOZZ08RJZ9Xc1fGGKeE0bg2QCqoKA7YzWR++lzm/LXf8DTXUhBN+xvwQ3rVN4Y8AIlXB2YS/FAkc2s3u95spaTjW0hbNonz/q6QiuuElDTfezQ9IkzHyYOFqIxYRnttkUuXTp5FodFYAlU3VOLHCoI6tZQk2f1Kt1ZZX4Umqd2RA4zu0IBbblyns+2Jy/Jg5MuKEZSC5X2xQ/tUeClu2+ZHxwKRMxnwAgAiYuC5ryGQuyc0vphUN3uE6JIxKd+8YgAscYSYvc7VoWqodvvt8eIxoXCDh1XbIsKKbWqosjwoNWAoNZUh+LcHIDskM+7FNALGudbtKgKRazoMRvGbZPWQr8FB2eTWiqo2TOBwHArzAXZmnKcg+ad9eMQtW6PX0M=#012np0005546418.localdomain,192.168.122.105,np0005546418* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID6QeDYhauphPCMjY2u0ByifchL6qXHZv2fvWO3OCIzj#012np0005546418.localdomain,192.168.122.105,np0005546418* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAOZfmYNbsgnO+sF3RpydJnHhRBeoml8dGRqN2Azszo4+xVxpgaJM8FuFbX8uQ8UvFhzziQ2hitNeW+ljaSZAAI=#012np0005546415.localdomain,192.168.122.103,np0005546415* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDkD6dMrlstq/08/i19MSGJhEADExfxigVjJJQ88FcvZHbzGOgQVpolfx1koKTyWN+Arobw6wFmJvZLTo8Bb6WoVTK7S5Ea1OnfJHT61JMRl/WjdLjR5dZtwV62H6dAQuwXtLXjjbx/PIaHGhjGeQ3mAmwEgTU06ey152S+ChTCN3ft7vCFw4DHXAly+guOSgi5JGOb3gMATYrMGVu90ONPr0mfPn6T6oBZQPEWvdKFCulrlj9zVZu7HsSSRQFMxH7KgZJzpkLllA4WVfnGbj38AXD2k/HkyLfYzY27ZsoOL1HyT4ardSL2aUb55JnBNuOxkTcFwxKYlyCL/gWk20rx9nJe7mp5Rl6iK4a8UA5SEKO0sudwL9uZ4JEMNAAViZ+5xpl7M0+YowEMffNSUrVJ8/SSa0beqOu9JTnZ+cEwNCNkJJM/h8ajcjEaAHeRXDkTkujntrvNR6KskZa+g94xtpw1nrG6xl0yzppj6k4nsmcRGGlicsbZEc9SZOW+qaM=#012np0005546415.localdomain,192.168.122.103,np0005546415* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE9fcXJw9ebV6QMxfXz7L10O7RAB37asensC545IUkHw#012np0005546415.localdomain,192.168.122.103,np0005546415* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDI+cyJXebg9ZNE0jCet/hwGIodMSGaIYE1ZtXGNSChdyfsTHtwJwqShV/elymR6mzhS+fxVDnoM9id5H9F/Mdg=#012np0005546420.localdomain,192.168.122.107,np0005546420* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI9Y4hTQymJJTi7lwfGVKCetJ5Q4auPNryuYcUqXhqNAkgJUht3nxbV0LL2zw4tBsorx+hqOtHy6QfyMWc4r5hOjGRUOhC2uarhQho1134qkdAt7Wd1XMZFeslg1Vk7F8G5TciLUUJBsqvfKAsGc9/SQS5rWRQ90ssw6RtnrhuCDasOzJIdPA2tYjLQ2emSbjgfd1OuXSpKpSkko9b1cwE6trMzU8G7508xssCoDz66P8kF4Kf+OGT8iWzM8xKE0cB8b50ltkwnrxsK5Hwc8zz9LoLSU01AS9CNm299lqjPgZZhTOu6zSXvN6p4+CylbKvJO19AnMSzMEJZEPoHNCQ2SM+/LxQ9rIH8MAVrpw9SUndYbtXTvUkEsZRYAkH64dyfn+9kcYTPaf/oqkrvxuc6Nlk/uZ79dbjW0Vc+/XJXX9F7hLsdu3PK0kt4oBXIG9B9jdKXVobNiH7lsArspEnZ13zzspPyojH0UV6v0AfaZgCMP8b7Erg9y9+HPradoE=#012np0005546420.localdomain,192.168.122.107,np0005546420* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOTDR+XZw+XJDs6turN9eWht+z1Rm7llRsIRVQhAgcwk#012np0005546420.localdomain,192.168.122.107,np0005546420* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGypM6u5I9iLb0poBkdrsQDwt3k6eMH320BTVkOVz+AbyBbdiE/TH/XKczZXTI77QkP2gP3kP+SvqRlk9KvFc5I=#012np0005546419.localdomain,192.168.122.106,np0005546419* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEuRXji6XnIqsABVq0Qqof5KS4SAvlk4RgdtizNBr7m3ROTYSahE5AJNLCTMugmJtGXewQjNvC8Gcrwjha423XMFi1NpQBCu/U72HR15GJ4x0DRTlvDzeuyqmAuTQEBnQcjNlSIQ4FOJnMjeI6JzpCzCvQ8kOvkGMj6A3Hg/syH7t97g6vL8Cua473lHIav6GTZkm3SmFKQ3Xwj9z3cxUxUnrSgES1zowNRjtoEtPZjSgoF5b8nFIjaQf2ZwMcV0lopTVTvmRVyYDvsR8wFpqMebvWZkW7NQNAaUhRwiYfvQM5/uX1R294FSkW4UiMA5xWT6BMUvtJzexoxZwmrJN3E8I5NLL2KsN33G/6CHA5roanPqECSsRgwyhgQ8bARZgymqoTR9u/p8RRwj7J+x+qJCKMrG+inICVI/o3oOAD2Kdc2rFHXCzdC7sNhjF9/0HPZy8Dt2phAaMcAs4ueBT1Qv/WP22vx3lBguSxEC09rfl+zsp5KAd3jOr9hJBn34E=#012np0005546419.localdomain,192.168.122.106,np0005546419* ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIAFZUl1WIZPoNWbG6u+CKdVGmLGMNavWtmXvlinpmxXI#012np0005546419.localdomain,192.168.122.106,np0005546419* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIedaj86c0CB1SnGeK8riOelIj4G/rDY87EdW81k0S2GBtRgoE0JgjOLF/vaGYuAwyijqzjSZkpIcPFGAAyqLhc=#012np0005546416.localdomain,192.168.122.104,np0005546416* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBknSonWn2K7oVrigtLeGeXWlaMY1uJqi1743zO2mguB8ceS1WtlyZavpdSnzpqiGiIwguuYuBxNKWaZMI/9XZyZKspYWl5eArdwgxtnKFyHWmHop7/MeX+Y+J7CrfiQ8MajXX1sy1YpxunvdWo7DK3K9DJfTaJ6onr8amsw55w0Pf5HOW0UBGE+AqFmTy/5btxUh4cKFDwRjGeJJps2YFr/p9mdITdZy6sxC+0QCi9XHI7FrpRbYfK0zSSrOBpixOr0sahUWL/3ZUVF5uiJbGTaihxnFrAN3SqoJsWJNJADqmp+E0K3oSw2xsGEvRz02E5n3+GqaYejfpUMdLjvSmTfEKVqlMiL8M0AtBvfeP7KlZCpABiuvopbKIXNsjFfG1HXkFrFHbCgRsfmg7e+8ThU6J66lb2cJhHrtKuP+uePggolCX4bqdv8abdxV9keT+DJCOZ6iMJnDTI8ggTwMTBVwykvMZXIhwiJruh8oACUYaubPkkGSz4VhPIqfSch0=#012np0005546416.localdomain,192.168.122.104,np0005546416* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKpX3BRJ2hZsEWilRzk5yd6bYl9erWlONU767dX4uGRx#012np0005546416.localdomain,192.168.122.104,np0005546416* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJJ19D7bhWV35hM8ynnuv7nvdNTRKxzJm7cFU0HyHTG1AIBFi4DwBNR3b2ZvZnp6UJEwHut3lye8XaYCEAf+o+k=#012np0005546421.localdomain,192.168.122.108,np0005546421* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCOnO4FOEzvhvnfZvvg9C7oar+ml2He45IxleHN54kwSVAvs2ltf36WvXeS2XAi7WgRxM+SZhG+GxbHWO/u3KqZQXbOWufPkzZF3oMisaK3ZDVZLqKvlrQZf2+29fCEYI9L5zPC/HNP6jqIyDlBSXGYPLQgUjpxxieUICaQ0fIp4WhlqviONuO0ZTwWQdPf5CYPALkVZ74wN1aGPulFSaGYretHzLaUvZvZQVL4q4PRI+7YpxvT1NyDOyTvw5u8TpzZXKp67nFfFtlbX8BvY9f1FVlgzcPwQvxzYWeJy5j9Cv0xoJ56dXmUueau39rhB/CBpKfhymLq91H1nh+F175gPPt5KZA5cfZg7fWlshSRjozK3Z53WpNGrpQtCIjhxblJ5Z3mxAPGcyYYOXoG/iv/IDwMvhkswL2Cqb6/ww6osSP2EJQIjWsS+CoYjynw+g7e++29qN8QiRLOqOuges85TiZ2vxP5lkvs8V3oAF+k4OsPOGPKzibXNDl5PyGwhVU=#012np0005546421.localdomain,192.168.122.108,np0005546421* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFiUBcxudmUyqsqFoMU+JV8/mQ+WLu/s/QM5WUg+IPfZ#012np0005546421.localdomain,192.168.122.108,np0005546421* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJh/II40mXscwERhviKJwwzVT152Sof000TH1wr5DjK1TZmtkBC4tSetb4Sv3lusV4r4bPFittXVnUkY+bspkL8=#012 create=True mode=0644 path=/tmp/ansible.wjt3lzko state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:24:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63900 DF PROTO=TCP SPT=43862 DPT=9101 SEQ=917599165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC448E450000000001030307) Dec 5 04:24:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25079 DF PROTO=TCP SPT=57762 DPT=9882 SEQ=1333273070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4490450000000001030307) Dec 5 04:24:24 localhost python3.9[142086]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.wjt3lzko' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None 
creates=None removes=None stdin=None Dec 5 04:24:25 localhost python3.9[142180]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.wjt3lzko state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:24:26 localhost systemd-logind[760]: Session 44 logged out. Waiting for processes to exit. Dec 5 04:24:26 localhost systemd[1]: session-44.scope: Deactivated successfully. Dec 5 04:24:26 localhost systemd[1]: session-44.scope: Consumed 4.296s CPU time. Dec 5 04:24:26 localhost systemd-logind[760]: Removed session 44. Dec 5 04:24:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22404 DF PROTO=TCP SPT=56210 DPT=9105 SEQ=361990037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC449D320000000001030307) Dec 5 04:24:33 localhost sshd[142195]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:24:33 localhost systemd-logind[760]: New session 45 of user zuul. Dec 5 04:24:33 localhost systemd[1]: Started Session 45 of User zuul. Dec 5 04:24:34 localhost python3.9[142288]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:24:35 localhost python3.9[142384]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 5 04:24:38 localhost python3.9[142478]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:24:40 localhost python3.9[142571]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:24:40 localhost python3.9[142664]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:24:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37922 DF PROTO=TCP SPT=33318 DPT=9102 SEQ=2892860188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC44D4160000000001030307) Dec 5 04:24:42 localhost python3.9[142758]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:24:42 localhost python3.9[142853]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:24:43 localhost systemd[1]: session-45.scope: Deactivated successfully. Dec 5 04:24:43 localhost systemd[1]: session-45.scope: Consumed 3.987s CPU time. Dec 5 04:24:43 localhost systemd-logind[760]: Session 45 logged out. Waiting for processes to exit. Dec 5 04:24:43 localhost systemd-logind[760]: Removed session 45. Dec 5 04:24:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44023 DF PROTO=TCP SPT=45544 DPT=9100 SEQ=2063201690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC44DFF60000000001030307) Dec 5 04:24:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44024 DF PROTO=TCP SPT=45544 DPT=9100 SEQ=2063201690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC44E4060000000001030307) Dec 5 04:24:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44025 DF PROTO=TCP SPT=45544 DPT=9100 SEQ=2063201690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC44EC050000000001030307) Dec 5 04:24:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56874 DF PROTO=TCP SPT=54910 DPT=9101 SEQ=353152530 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC44F3E40000000001030307) Dec 5 04:24:49 localhost sshd[142868]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:24:49 localhost systemd-logind[760]: New session 46 of user zuul. Dec 5 04:24:49 localhost systemd[1]: Started Session 46 of User zuul. 
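Sessions 43 and 45 above show the full EDPM nftables lifecycle: the include files (edpm-chains.nft, edpm-flushes.nft, edpm-rules.nft, edpm-update-jumps.nft, edpm-jumps.nft) are rendered, their concatenation is syntax-checked with nft -c -f -, the includes are persisted into /etc/sysconfig/nftables.conf via blockinfile, and only then are rules loaded into the kernel. Session 44, in between, rebuilds /etc/ssh/ssh_known_hosts by assembling the cluster host keys in a tempfile and cat-ing it over the real file. The recurring kernel DROPPING entries look like the ruleset's log-and-drop rule firing on SYN probes from 192.168.122.10 to metrics-style ports (9100-9105, 9882) that have no accept rule yet. A condensed shell equivalent of the apply phase, with the .changed-marker logic inferred from the stat and file tasks:

    # Validate the concatenated ruleset without applying it (the "nft -c -f -" task):
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # (Re)create the chains; idempotent, so safe on every run:
    nft -f /etc/nftables/edpm-chains.nft
    # Flush and reload the rules only when the render step left a marker:
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm /etc/nftables/edpm-rules.nft.changed
    fi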
Dec 5 04:24:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43531 DF PROTO=TCP SPT=35148 DPT=9882 SEQ=1128760805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC44F5DB0000000001030307) Dec 5 04:24:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56875 DF PROTO=TCP SPT=54910 DPT=9101 SEQ=353152530 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC44F8050000000001030307) Dec 5 04:24:50 localhost python3.9[142961]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:24:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43532 DF PROTO=TCP SPT=35148 DPT=9882 SEQ=1128760805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC44F9C50000000001030307) Dec 5 04:24:52 localhost python3.9[143057]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 5 04:24:53 localhost python3.9[143111]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 5 04:24:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56877 DF PROTO=TCP SPT=54910 DPT=9101 SEQ=353152530 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC450FC50000000001030307) Dec 5 04:24:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43534 DF PROTO=TCP SPT=35148 DPT=9882 SEQ=1128760805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4511850000000001030307) Dec 5 04:24:57 localhost python3.9[143203]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:24:58 localhost python3.9[143296]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:24:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44027 DF PROTO=TCP SPT=45544 DPT=9100 SEQ=2063201690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC451C450000000001030307) Dec 5 04:25:00 localhost python3.9[143388]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:01 localhost python3.9[143480]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012 * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:01 localhost python3.9[143570]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 5 04:25:02 localhost python3.9[143660]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:25:03 localhost python3.9[143752]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:25:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5368 DF PROTO=TCP SPT=53970 DPT=9105 SEQ=241934709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC452E460000000001030307) Dec 5 04:25:04 localhost systemd[1]: session-46.scope: Deactivated successfully. Dec 5 04:25:04 localhost systemd[1]: session-46.scope: Consumed 8.642s CPU time. Dec 5 04:25:04 localhost systemd-logind[760]: Session 46 logged out. Waiting for processes to exit. Dec 5 04:25:04 localhost systemd-logind[760]: Removed session 46. Dec 5 04:25:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43535 DF PROTO=TCP SPT=35148 DPT=9882 SEQ=1128760805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4532450000000001030307) Dec 5 04:25:09 localhost sshd[143767]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:25:10 localhost systemd-logind[760]: New session 47 of user zuul. Dec 5 04:25:10 localhost systemd[1]: Started Session 47 of User zuul. 
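Session 46 boils down to a reboot-pending check: yum-utils supplies needs-restarting, and its -r flag exits non-zero when core packages have been updated since boot (here systemd, per the lineinfile content). A rough shell sketch of what the play records, with the exit-code handling inferred and /tmp/needs_restarting.out a stand-in temp path:

    mkdir -p /var/lib/openstack/reboot_required
    if ! needs-restarting -r > /tmp/needs_restarting.out; then   # hypothetical temp path
        # Non-zero exit: a reboot is advised; keep the explanation for later runs.
        touch /var/lib/openstack/reboot_required/needs_restarting
        chmod 0600 /var/lib/openstack/reboot_required/needs_restarting
        cat /tmp/needs_restarting.out >> /var/lib/openstack/reboot_required/needs_restarting
    fi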
Dec 5 04:25:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4567 DF PROTO=TCP SPT=33008 DPT=9102 SEQ=3675261644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4549460000000001030307) Dec 5 04:25:11 localhost python3.9[143860]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:25:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4568 DF PROTO=TCP SPT=33008 DPT=9102 SEQ=3675261644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC454D460000000001030307) Dec 5 04:25:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45244 DF PROTO=TCP SPT=51698 DPT=9100 SEQ=2739543856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4555270000000001030307) Dec 5 04:25:14 localhost python3.9[143956]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:25:15 localhost python3.9[144048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:25:15 localhost python3.9[144121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926714.5689912-182-78959803697639/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:16 localhost python3.9[144213]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:25:16 localhost chronyd[134537]: Selected source 23.133.168.247 (pool.ntp.org) Dec 5 04:25:17 localhost python3.9[144305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:25:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 
ID=45246 DF PROTO=TCP SPT=51698 DPT=9100 SEQ=2739543856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4561460000000001030307) Dec 5 04:25:17 localhost python3.9[144378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926716.6178465-252-68838366020176/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:18 localhost python3.9[144470]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:25:18 localhost python3.9[144562]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:25:19 localhost python3.9[144635]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926718.424412-325-45539135647991/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:20 localhost python3.9[144776]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:25:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56808 DF PROTO=TCP SPT=35368 DPT=9101 SEQ=210285115 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC456D050000000001030307) Dec 5 04:25:20 localhost python3.9[144916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:25:21 localhost python3.9[145021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926720.3355644-400-147901932029660/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:22 localhost python3.9[145113]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:25:23 localhost python3.9[145205]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:25:24 localhost python3.9[145278]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926723.243895-476-249496425572410/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:24 localhost python3.9[145370]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:25:25 localhost python3.9[145462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:25:26 localhost python3.9[145535]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926725.0893457-548-32374562139615/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56810 DF PROTO=TCP SPT=35368 DPT=9101 SEQ=210285115 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4584C50000000001030307) Dec 5 04:25:26 localhost python3.9[145627]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None 
selevel=None attributes=None Dec 5 04:25:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4571 DF PROTO=TCP SPT=33008 DPT=9102 SEQ=3675261644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4586450000000001030307) Dec 5 04:25:27 localhost python3.9[145719]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:25:27 localhost python3.9[145792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926726.8560421-618-34993940206914/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:28 localhost python3.9[145884]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:25:29 localhost python3.9[145976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:25:29 localhost python3.9[146049]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926728.6881068-689-130938391430716/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ab45510108ed0812ae0b7276655988bfcde96505 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45248 DF PROTO=TCP SPT=51698 DPT=9100 SEQ=2739543856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4592450000000001030307) Dec 5 04:25:30 localhost systemd[1]: session-47.scope: Deactivated successfully. Dec 5 04:25:30 localhost systemd[1]: session-47.scope: Consumed 11.823s CPU time. Dec 5 04:25:30 localhost systemd-logind[760]: Session 47 logged out. Waiting for processes to exit. Dec 5 04:25:30 localhost systemd-logind[760]: Removed session 47. 
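Session 47 fans one CA bundle out to a directory per service: every copy task above carries the identical checksum ab45510108ed0812ae0b7276655988bfcde96505, and each directory is labeled setype=container_file_t so the per-service containers can bind-mount their copy. Roughly equivalent, assuming the bundle is staged locally as tls-ca-bundle.pem:

    for svc in telemetry neutron-sriov neutron-dhcp nova libvirt ovn bootstrap neutron-metadata; do
        install -d -m 0755 -o root -g root /var/lib/openstack/cacerts/$svc
        install -m 0644 -o root -g root tls-ca-bundle.pem /var/lib/openstack/cacerts/$svc/
        chcon -t container_file_t /var/lib/openstack/cacerts/$svc   # SELinux label for container mounts
    done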
Dec 5 04:25:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49484 DF PROTO=TCP SPT=57160 DPT=9105 SEQ=1565403269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC45A3450000000001030307) Dec 5 04:25:35 localhost sshd[146064]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:25:35 localhost systemd-logind[760]: New session 48 of user zuul. Dec 5 04:25:35 localhost systemd[1]: Started Session 48 of User zuul. Dec 5 04:25:36 localhost python3.9[146159]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:37 localhost python3.9[146251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:25:38 localhost python3.9[146324]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926737.3441195-62-161020482908710/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=56b574bbcbb2378bafed25b3f279b3c007056bbe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:39 localhost python3.9[146416]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:25:39 localhost python3.9[146489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926738.7053123-62-41545454615820/.source.conf _original_basename=ceph.conf follow=False checksum=9ddad61351f9bf53ca5a99a509b37f8f58fbf3e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:40 localhost systemd[1]: session-48.scope: Deactivated successfully. Dec 5 04:25:40 localhost systemd[1]: session-48.scope: Consumed 2.173s CPU time. Dec 5 04:25:40 localhost systemd-logind[760]: Session 48 logged out. Waiting for processes to exit. Dec 5 04:25:40 localhost systemd-logind[760]: Removed session 48. 
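Session 48 stages the Ceph client pieces the compute services will consume: the keyring stays private (0600) while ceph.conf is world-readable (0644). Roughly, assuming both files were fetched beside the play:

    install -d -m 0755 /var/lib/openstack/config/ceph
    install -m 0600 ceph.client.openstack.keyring /var/lib/openstack/config/ceph/
    install -m 0644 ceph.conf /var/lib/openstack/config/ceph/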
Dec 5 04:25:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5996 DF PROTO=TCP SPT=55764 DPT=9102 SEQ=3624867518 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC45BE760000000001030307) Dec 5 04:25:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5997 DF PROTO=TCP SPT=55764 DPT=9102 SEQ=3624867518 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC45C2850000000001030307) Dec 5 04:25:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49485 DF PROTO=TCP SPT=57160 DPT=9105 SEQ=1565403269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC45C4460000000001030307) Dec 5 04:25:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13252 DF PROTO=TCP SPT=32856 DPT=9100 SEQ=376621422 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC45CA570000000001030307) Dec 5 04:25:45 localhost sshd[146504]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:25:45 localhost systemd-logind[760]: New session 49 of user zuul. Dec 5 04:25:45 localhost systemd[1]: Started Session 49 of User zuul. Dec 5 04:25:46 localhost python3.9[146597]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:25:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13254 DF PROTO=TCP SPT=32856 DPT=9100 SEQ=376621422 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC45D6460000000001030307) Dec 5 04:25:47 localhost python3.9[146693]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:25:48 localhost python3.9[146785]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 5 04:25:49 localhost python3.9[146875]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:25:50 localhost python3.9[146967]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Dec 5 04:25:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=65219 DF PROTO=TCP SPT=34664 DPT=9101 SEQ=4104200904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC45E2450000000001030307) Dec 5 04:25:51 localhost python3.9[147059]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 5 04:25:52 localhost python3.9[147113]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:25:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56880 DF PROTO=TCP SPT=54910 DPT=9101 SEQ=353152530 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC45EE450000000001030307) Dec 5 04:25:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65221 DF PROTO=TCP SPT=34664 DPT=9101 SEQ=4104200904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC45FA050000000001030307) Dec 5 04:25:56 localhost python3.9[147207]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 5 04:25:57 localhost python3[147302]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012 rule:#012 proto: udp#012 dport: 4789#012- rule_name: 119 neutron geneve networks#012 rule:#012 proto: udp#012 dport: 6081#012 state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: OUTPUT#012 jump: NOTRACK#012 action: append#012 state: []#012- rule_name: 121 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: PREROUTING#012 jump: NOTRACK#012 action: append#012 state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present Dec 5 04:25:58 localhost python3.9[147394]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:25:59 localhost python3.9[147486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:25:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13256 DF PROTO=TCP SPT=32856 DPT=9100 
SEQ=376621422 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4606450000000001030307) Dec 5 04:25:59 localhost python3.9[147534]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:00 localhost python3.9[147626]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:00 localhost python3.9[147674]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.kz7bkd8o recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:01 localhost python3.9[147766]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:01 localhost python3.9[147814]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:02 localhost python3.9[147906]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:26:03 localhost python3[147999]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Dec 5 04:26:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32976 DF PROTO=TCP SPT=50672 DPT=9105 SEQ=505783636 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4618850000000001030307) Dec 5 04:26:05 localhost python3.9[148091]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7314 DF PROTO=TCP SPT=58734 DPT=9882 SEQ=818446740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC461C460000000001030307) Dec 5 04:26:06 
localhost python3.9[148167]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926764.555417-431-238427005044003/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:06 localhost python3.9[148259]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:07 localhost python3.9[148334]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926766.456725-476-98280080143858/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:08 localhost python3.9[148426]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:08 localhost python3.9[148501]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926767.7480233-521-1305700663720/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:09 localhost python3.9[148593]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:09 localhost python3.9[148668]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926769.0254388-566-16429393091968/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:10 localhost python3.9[148760]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53610 DF PROTO=TCP SPT=51210 DPT=9102 SEQ=1007458467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4633A60000000001030307) Dec 5 04:26:11 localhost python3.9[148835]: 
ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764926770.3364773-611-220171942918743/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:12 localhost python3.9[148927]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53611 DF PROTO=TCP SPT=51210 DPT=9102 SEQ=1007458467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4637C60000000001030307) Dec 5 04:26:12 localhost python3.9[149019]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:26:13 localhost python3.9[149114]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20193 DF PROTO=TCP SPT=57780 DPT=9100 SEQ=780361087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC463F860000000001030307) Dec 5 04:26:14 localhost python3.9[149206]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:26:15 localhost python3.9[149299]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:26:16 localhost python3.9[149393]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True 
stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:26:17 localhost python3.9[149488]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20195 DF PROTO=TCP SPT=57780 DPT=9100 SEQ=780361087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC464B850000000001030307) Dec 5 04:26:18 localhost python3.9[149578]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:26:19 localhost python3.9[149671]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005546419.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:12:08:8a:a9" external_ids:ovn-encap-ip=172.19.0.106 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:26:19 localhost ovs-vsctl[149672]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . 
external_ids:hostname=np0005546419.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:12:08:8a:a9 external_ids:ovn-encap-ip=172.19.0.106 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch Dec 5 04:26:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59022 DF PROTO=TCP SPT=39268 DPT=9101 SEQ=2916439837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4657850000000001030307) Dec 5 04:26:20 localhost python3.9[149764]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:26:21 localhost python3.9[149857]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:26:21 localhost python3.9[149966]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:26:22 localhost python3.9[150095]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:22 localhost python3.9[150154]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:26:23 localhost python3.9[150261]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63283 DF PROTO=TCP SPT=42964 DPT=9882 SEQ=1640987458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4664460000000001030307) Dec 5 04:26:23 localhost python3.9[150309]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container 
_original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:26:25 localhost python3.9[150401]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:25 localhost python3.9[150493]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:26 localhost python3.9[150541]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59024 DF PROTO=TCP SPT=39268 DPT=9101 SEQ=2916439837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC466F450000000001030307) Dec 5 04:26:27 localhost python3.9[150633]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:27 localhost python3.9[150681]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:28 localhost python3.9[150773]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:26:28 localhost systemd[1]: Reloading. Dec 5 04:26:28 localhost systemd-rc-local-generator[150797]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:26:28 localhost systemd-sysv-generator[150803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
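The nftables tasks logged above stage five include files, syntax-check them as one ruleset, and load them in a fixed order gated by a marker file. A hand-run equivalent, reconstructed from the ansible-ansible.legacy.command records (a sketch, not the role's actual code; run as root):

    set -o pipefail
    # Syntax-check the concatenated EDPM ruleset without loading it (-c = check only),
    # in the same order as the logged validation command.
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # Load the chain definitions unconditionally...
    nft -f /etc/nftables/edpm-chains.nft
    # ...then flush and reload the rules and jump updates only when the marker file
    # written earlier (edpm-rules.nft.changed) exists, clearing the marker afterwards.
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi

Persistence across reboots comes from the blockinfile task above: it leaves an ANSIBLE MANAGED BLOCK of include statements (iptables.nft, edpm-chains.nft, edpm-rules.nft, edpm-jumps.nft; #012 is syslog's escape for a newline) in /etc/sysconfig/nftables.conf, validated with nft -c -f %s before being written.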
Dec 5 04:26:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:26:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20197 DF PROTO=TCP SPT=57780 DPT=9100 SEQ=780361087 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC467C460000000001030307) Dec 5 04:26:30 localhost python3.9[150902]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:31 localhost python3.9[150950]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:31 localhost python3.9[151042]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:32 localhost python3.9[151090]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:33 localhost python3.9[151182]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:26:33 localhost systemd[1]: Reloading. Dec 5 04:26:33 localhost systemd-sysv-generator[151207]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:26:33 localhost systemd-rc-local-generator[151204]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:26:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:26:33 localhost systemd[1]: Starting Create netns directory... Dec 5 04:26:33 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 5 04:26:33 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 5 04:26:33 localhost systemd[1]: Finished Create netns directory. 
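Both edpm-container-shutdown and netns-placeholder follow the same pattern in the records above: copy a unit file, copy a 91-*.preset, then daemon-reload and enable/start. The ansible-ansible.builtin.systemd invocations (daemon_reload=True enabled=True state=started) correspond roughly to:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service
    systemctl enable --now netns-placeholder.service

The preset bodies are never logged; presumably each carries an "enable <unit>.service" policy line so the units stay enabled under systemctl preset, but that content is an assumption.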
Dec 5 04:26:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45530 DF PROTO=TCP SPT=34150 DPT=9105 SEQ=1099402855 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC468DC60000000001030307) Dec 5 04:26:34 localhost python3.9[151317]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:26:34 localhost python3.9[151409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23499 DF PROTO=TCP SPT=39824 DPT=9882 SEQ=816980471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4692450000000001030307) Dec 5 04:26:36 localhost python3.9[151482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926794.4739435-1343-167159701619129/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 5 04:26:37 localhost python3.9[151574]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:26:37 localhost python3.9[151666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:26:38 localhost python3.9[151741]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926797.363911-1418-144534635994096/.source.json _original_basename=.4oj3e8dr follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:38 localhost python3.9[151833]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:41 localhost python3.9[152090]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False Dec 5 04:26:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26762 DF PROTO=TCP SPT=40188 DPT=9102 SEQ=1710458675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC46A8EA0000000001030307) Dec 5 04:26:42 localhost python3.9[152182]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 5 04:26:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26763 DF PROTO=TCP SPT=40188 DPT=9102 SEQ=1710458675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC46AD050000000001030307) Dec 5 04:26:42 localhost python3.9[152274]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 5 04:26:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21918 DF PROTO=TCP SPT=53674 DPT=9100 SEQ=1630440400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC46B4B70000000001030307) Dec 5 04:26:47 localhost python3[152393]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 5 04:26:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21920 DF PROTO=TCP SPT=53674 DPT=9100 SEQ=1630440400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC46C0C50000000001030307) Dec 5 04:26:47 localhost python3[152393]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c",#012 "Digest": "sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:38:47.246477714Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 
"org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 345722821,#012 "VirtualSize": 345722821,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:ba9362d2aeb297e34b0679b2fc8168350c70a5b0ec414daf293bf2bc013e9088",#012 "sha256:aae3b8a85314314b9db80a043fdf3f3b1d0b69927faca0303c73969a23dddd0f"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 
"empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:05.672474685Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-l Dec 5 04:26:47 localhost podman[152442]: 2025-12-05 09:26:47.490881706 +0000 UTC m=+0.085036149 container remove f295b469f636726c71e118ddc12a487c80d6feab05bf010862f629eff63045fd (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12) Dec 5 04:26:47 localhost python3[152393]: 
ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller Dec 5 04:26:47 localhost podman[152456]: Dec 5 04:26:47 localhost podman[152456]: 2025-12-05 09:26:47.595394494 +0000 UTC m=+0.086544940 container create 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true) Dec 5 04:26:47 localhost podman[152456]: 2025-12-05 09:26:47.553645663 +0000 UTC m=+0.044796139 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Dec 5 04:26:47 localhost python3[152393]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Dec 5 04:26:49 localhost python3.9[152585]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in 
follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:26:49 localhost python3.9[152679]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:50 localhost python3.9[152725]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:26:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20880 DF PROTO=TCP SPT=35560 DPT=9101 SEQ=2264389101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC46CCC50000000001030307) Dec 5 04:26:50 localhost python3.9[152816]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764926810.2915795-1682-263312188676585/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:26:51 localhost python3.9[152862]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:26:51 localhost systemd[1]: Reloading. Dec 5 04:26:51 localhost systemd-rc-local-generator[152885]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:26:51 localhost systemd-sysv-generator[152895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:26:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:26:52 localhost python3.9[152944]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:26:52 localhost systemd[1]: Reloading. Dec 5 04:26:52 localhost systemd-rc-local-generator[152973]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:26:52 localhost systemd-sysv-generator[152979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:26:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:26:52 localhost systemd[1]: Starting ovn_controller container... Dec 5 04:26:52 localhost systemd[1]: Started libcrun container. 
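The unit management that follows the podman create above is the usual reload-enable-restart sequence; the two ansible-systemd calls amount to:

    systemctl daemon-reload                        # pick up the freshly copied unit file
    systemctl enable edpm_ovn_controller.service   # enabled=True
    systemctl restart edpm_ovn_controller.service  # state=restarted, which triggers the start logged next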
Dec 5 04:26:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c677d63b74ab86ab60d650ec6b49c9913e9ef3103249873d7aca3658787a3e74/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Dec 5 04:26:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:26:52 localhost podman[152986]: 2025-12-05 09:26:52.837880699 +0000 UTC m=+0.133224576 container init 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125) Dec 5 04:26:52 localhost ovn_controller[153000]: + sudo -E kolla_set_configs Dec 5 04:26:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. 
Dec 5 04:26:52 localhost podman[152986]: 2025-12-05 09:26:52.880797462 +0000 UTC m=+0.176141299 container start 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller) Dec 5 04:26:52 localhost edpm-start-podman-container[152986]: ovn_controller Dec 5 04:26:52 localhost systemd[1]: Created slice User Slice of UID 0. Dec 5 04:26:52 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Dec 5 04:26:52 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 5 04:26:52 localhost systemd[1]: Starting User Manager for UID 0... 
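The body of edpm_ovn_controller.service is never logged, but the edpm-start-podman-container[152986] record above shows the helper script being invoked with the container name. A plausible sketch of the copied unit, assuming the conventional shape; only the helper-script path and the argument "ovn_controller" are confirmed by the log:

    cat > /etc/systemd/system/edpm_ovn_controller.service <<'EOF'
    [Unit]
    Description=ovn_controller container

    [Service]
    Type=oneshot                 # assumed; the restart policy lives in podman ('restart': 'always')
    RemainAfterExit=yes
    ExecStart=/var/local/libexec/edpm-start-podman-container ovn_controller

    [Install]
    WantedBy=multi-user.target
    EOF

The "Creating additional drop-in dependency" record below suggests the helper also writes a drop-in tying the unit to the container-generated service.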
Dec 5 04:26:53 localhost podman[153008]: 2025-12-05 09:26:52.988400049 +0000 UTC m=+0.110697207 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 5 04:26:53 localhost edpm-start-podman-container[152985]: Creating additional drop-in dependency for "ovn_controller" (6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857) Dec 5 04:26:53 localhost podman[153008]: 2025-12-05 09:26:53.072864013 +0000 UTC m=+0.195161211 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true) Dec 5 04:26:53 localhost systemd[1]: Reloading. Dec 5 04:26:53 localhost podman[153008]: unhealthy Dec 5 04:26:53 localhost systemd[153027]: Queued start job for default target Main User Target. Dec 5 04:26:53 localhost systemd[153027]: Created slice User Application Slice. 
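The health_status=starting record above turns into an "unhealthy" verdict from the same runner moments later, which is why the transient healthcheck service exits 1 in the records that follow. The check can be re-run by hand against the live container (standard podman subcommands; the check command and mount come from the config_data logged above):

    # Exit status mirrors the health verdict (nonzero when unhealthy):
    podman healthcheck run ovn_controller && echo healthy || echo unhealthy
    # The configured test is the mounted script, so it can be exercised directly:
    podman exec ovn_controller /openstack/healthcheck
    # Quick status view:
    podman ps --filter name=ovn_controller --format '{{.Names}} {{.Status}}'

Early "unhealthy" results during startup are unsurprising here, since ovn-controller only finishes connecting to the southbound DB in the lines below.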
Dec 5 04:26:53 localhost systemd[153027]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 5 04:26:53 localhost systemd[153027]: Started Daily Cleanup of User's Temporary Directories. Dec 5 04:26:53 localhost systemd[153027]: Reached target Paths. Dec 5 04:26:53 localhost systemd[153027]: Reached target Timers. Dec 5 04:26:53 localhost systemd[153027]: Starting D-Bus User Message Bus Socket... Dec 5 04:26:53 localhost systemd[153027]: Starting Create User's Volatile Files and Directories... Dec 5 04:26:53 localhost systemd[153027]: Listening on D-Bus User Message Bus Socket. Dec 5 04:26:53 localhost systemd[153027]: Reached target Sockets. Dec 5 04:26:53 localhost systemd[153027]: Finished Create User's Volatile Files and Directories. Dec 5 04:26:53 localhost systemd[153027]: Reached target Basic System. Dec 5 04:26:53 localhost systemd[153027]: Reached target Main User Target. Dec 5 04:26:53 localhost systemd[153027]: Startup finished in 135ms. Dec 5 04:26:53 localhost systemd-rc-local-generator[153084]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:26:53 localhost systemd-sysv-generator[153091]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:26:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:26:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65224 DF PROTO=TCP SPT=34664 DPT=9101 SEQ=4104200904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC46D8460000000001030307) Dec 5 04:26:53 localhost systemd[1]: Started User Manager for UID 0. Dec 5 04:26:53 localhost systemd[1]: Started ovn_controller container. Dec 5 04:26:53 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:26:53 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Failed with result 'exit-code'. Dec 5 04:26:53 localhost systemd-journald[47252]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 75.7 (252 of 333 items), suggesting rotation. Dec 5 04:26:53 localhost systemd-journald[47252]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 5 04:26:53 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 04:26:53 localhost systemd[1]: Started Session c12 of User root. Dec 5 04:26:53 localhost rsyslogd[758]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 04:26:53 localhost ovn_controller[153000]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 5 04:26:53 localhost ovn_controller[153000]: INFO:__main__:Validating config file Dec 5 04:26:53 localhost ovn_controller[153000]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 5 04:26:53 localhost ovn_controller[153000]: INFO:__main__:Writing out command to execute Dec 5 04:26:53 localhost systemd[1]: session-c12.scope: Deactivated successfully. Dec 5 04:26:53 localhost ovn_controller[153000]: ++ cat /run_command Dec 5 04:26:53 localhost ovn_controller[153000]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Dec 5 04:26:53 localhost ovn_controller[153000]: + ARGS= Dec 5 04:26:53 localhost ovn_controller[153000]: + sudo kolla_copy_cacerts Dec 5 04:26:53 localhost systemd[1]: Started Session c13 of User root. Dec 5 04:26:53 localhost systemd[1]: session-c13.scope: Deactivated successfully. Dec 5 04:26:53 localhost ovn_controller[153000]: + [[ ! -n '' ]] Dec 5 04:26:53 localhost ovn_controller[153000]: + . kolla_extend_start Dec 5 04:26:53 localhost ovn_controller[153000]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\''' Dec 5 04:26:53 localhost ovn_controller[153000]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Dec 5 04:26:53 localhost ovn_controller[153000]: + umask 0022 Dec 5 04:26:53 localhost ovn_controller[153000]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8] Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00004|main|INFO|OVS IDL reconnected, force recompute. Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting... Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00006|main|INFO|OVNSB IDL reconnected, force recompute. Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00011|features|INFO|OVS Feature: ct_flush, state: supported Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00013|main|INFO|OVS feature set changed, force recompute. 
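kolla_set_configs reads /var/lib/kolla/config_files/config.json (the ovn_controller.json copied earlier, bind-mounted to that path per the volume list) and writes the service command into /run_command, which kolla_extend_start then execs. Only the command string is confirmed by the `cat /run_command` output above; the JSON wrapper shown here is the minimal assumed shape, with any config_files/permissions entries omitted:

    cat > /var/lib/kolla/config_files/ovn_controller.json <<'EOF'
    {
        "command": "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock"
    }
    EOF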
Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms) Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute. Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00021|main|INFO|OVS feature set changed, force recompute. Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00022|ovn_bfd|INFO|Disabled BFD on interface ovn-40c64e-0 Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00023|ovn_bfd|INFO|Disabled BFD on interface ovn-473cc8-0 Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00024|ovn_bfd|INFO|Disabled BFD on interface ovn-f5bb44-0 Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00025|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4 Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00026|binding|INFO|Claiming lport c2f95d81-2317-46b9-8146-596eac8f9acb for this chassis. Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00027|binding|INFO|c2f95d81-2317-46b9-8146-596eac8f9acb: Claiming fa:16:3e:04:e6:3a 192.168.0.214 Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00028|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00029|binding|INFO|Removing lport c2f95d81-2317-46b9-8146-596eac8f9acb ovn-installed in OVS Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
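With ovn-controller connected to both the local ovsdb and the southbound DB, the chassis registration and the lport claim/release churn logged here can be cross-checked by hand. A verification sketch (ovn-sbctl may live only inside the ovn_controller container rather than on the host; the SB address is the ovn-remote value set earlier):

    # Confirm where this chassis points:
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    # "tcp:ovsdbserver-sb.openstack.svc:6642"
    # List chassis, encaps (geneve, 172.19.0.106) and bound ports, e.g. the
    # claimed lport c2f95d81-2317-46b9-8146-596eac8f9acb:
    podman exec ovn_controller ovn-sbctl --db=tcp:ovsdbserver-sb.openstack.svc:6642 show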
Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00030|ovn_bfd|INFO|Enabled BFD on interface ovn-40c64e-0 Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00031|ovn_bfd|INFO|Enabled BFD on interface ovn-473cc8-0 Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00032|ovn_bfd|INFO|Enabled BFD on interface ovn-f5bb44-0 Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00033|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00034|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00035|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00036|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:26:53 localhost ovn_controller[153000]: 2025-12-05T09:26:53Z|00037|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:26:54 localhost ovn_controller[153000]: 2025-12-05T09:26:54Z|00038|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:26:54 localhost ovn_controller[153000]: 2025-12-05T09:26:54Z|00039|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:26:55 localhost python3.9[153203]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:26:55 localhost ovs-vsctl[153204]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload Dec 5 04:26:55 localhost ovn_controller[153000]: 2025-12-05T09:26:55Z|00040|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:26:55 localhost ovn_controller[153000]: 2025-12-05T09:26:55Z|00041|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:26:55 localhost python3.9[153296]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:26:55 localhost ovs-vsctl[153298]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." 
column external_ids Dec 5 04:26:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26766 DF PROTO=TCP SPT=40188 DPT=9102 SEQ=1710458675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC46E4460000000001030307) Dec 5 04:26:57 localhost python3.9[153391]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:26:57 localhost ovs-vsctl[153392]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options Dec 5 04:26:58 localhost systemd-logind[760]: Session 49 logged out. Waiting for processes to exit. Dec 5 04:26:58 localhost systemd[1]: session-49.scope: Deactivated successfully. Dec 5 04:26:58 localhost systemd[1]: session-49.scope: Consumed 40.177s CPU time. Dec 5 04:26:58 localhost systemd-logind[760]: Removed session 49. Dec 5 04:26:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21922 DF PROTO=TCP SPT=53674 DPT=9100 SEQ=1630440400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC46F0460000000001030307) Dec 5 04:27:01 localhost ovn_controller[153000]: 2025-12-05T09:27:01Z|00042|binding|INFO|Setting lport c2f95d81-2317-46b9-8146-596eac8f9acb ovn-installed in OVS Dec 5 04:27:01 localhost ovn_controller[153000]: 2025-12-05T09:27:01Z|00043|binding|INFO|Setting lport c2f95d81-2317-46b9-8146-596eac8f9acb up in Southbound Dec 5 04:27:03 localhost sshd[153407]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:27:03 localhost systemd-logind[760]: New session 51 of user zuul. Dec 5 04:27:03 localhost systemd[1]: Started Session 51 of User zuul. Dec 5 04:27:03 localhost systemd[1]: Stopping User Manager for UID 0... Dec 5 04:27:03 localhost systemd[153027]: Activating special unit Exit the Session... Dec 5 04:27:03 localhost systemd[153027]: Stopped target Main User Target. Dec 5 04:27:03 localhost systemd[153027]: Stopped target Basic System. Dec 5 04:27:03 localhost systemd[153027]: Stopped target Paths. Dec 5 04:27:03 localhost systemd[153027]: Stopped target Sockets. Dec 5 04:27:03 localhost systemd[153027]: Stopped target Timers. Dec 5 04:27:03 localhost systemd[153027]: Stopped Daily Cleanup of User's Temporary Directories. Dec 5 04:27:03 localhost systemd[153027]: Closed D-Bus User Message Bus Socket. Dec 5 04:27:03 localhost systemd[153027]: Stopped Create User's Volatile Files and Directories. Dec 5 04:27:03 localhost systemd[153027]: Removed slice User Application Slice. Dec 5 04:27:03 localhost systemd[153027]: Reached target Shutdown. Dec 5 04:27:03 localhost systemd[153027]: Finished Exit the Session. Dec 5 04:27:03 localhost systemd[153027]: Reached target Exit the Session. Dec 5 04:27:03 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 5 04:27:03 localhost systemd[1]: Stopped User Manager for UID 0. Dec 5 04:27:03 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 5 04:27:03 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 5 04:27:03 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. 
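The `db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids` failure above is expected when the key was never set, and the playbook pipes the value through `sed 's/\"//g'` to strip quoting. As an aside (not from the log; assumes `ovs-vsctl` on PATH): the same read/remove pair done idempotently. `--if-exists` turns the missing-key error into empty output, and `remove` of an absent key is already a no-op.

```python
import subprocess

def get_ovn_cms_options() -> str:
    """Read external_ids:ovn-cms-options, returning '' if unset.

    --if-exists suppresses the 'no key ... in Open_vSwitch record "."'
    error seen in the log when the key is absent.
    """
    out = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         "external_ids:ovn-cms-options"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out.strip('"')  # same effect as the play's sed 's/\"//g'

def clear_ovn_cms_options() -> None:
    # Removing a key that is not present does not fail, so no pre-check
    # (and no ERR log line) is needed.
    subprocess.run(
        ["ovs-vsctl", "remove", "Open_vSwitch", ".",
         "external_ids", "ovn-cms-options"],
        check=True,
    )
```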
Dec 5 04:27:03 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 5 04:27:03 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 5 04:27:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=197 DF PROTO=TCP SPT=50758 DPT=9105 SEQ=1388678147 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4703060000000001030307) Dec 5 04:27:04 localhost python3.9[153502]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:27:05 localhost python3.9[153598]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:06 localhost python3.9[153690]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:07 localhost python3.9[153782]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:08 localhost python3.9[153874]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:08 localhost python3.9[153966]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:09 localhost python3.9[154056]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:27:10 localhost python3.9[154148]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Dec 5 04:27:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=32304 DF PROTO=TCP SPT=46128 DPT=9102 SEQ=1727409604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC471E060000000001030307) Dec 5 04:27:11 localhost python3.9[154238]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:11 localhost python3.9[154312]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926830.753649-218-119138749343843/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32305 DF PROTO=TCP SPT=46128 DPT=9102 SEQ=1727409604 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4722050000000001030307) Dec 5 04:27:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=198 DF PROTO=TCP SPT=50758 DPT=9105 SEQ=1388678147 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4724450000000001030307) Dec 5 04:27:13 localhost python3.9[154402]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:14 localhost python3.9[154475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926833.097452-263-116274982002279/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29350 DF PROTO=TCP SPT=34608 DPT=9100 SEQ=1572312803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4729E70000000001030307) Dec 5 04:27:15 localhost python3.9[154567]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 5 04:27:16 localhost python3.9[154621]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None 
disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:27:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29352 DF PROTO=TCP SPT=34608 DPT=9100 SEQ=1572312803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4736050000000001030307) Dec 5 04:27:17 localhost ovn_controller[153000]: 2025-12-05T09:27:17Z|00044|memory|INFO|18760 kB peak resident set size after 24.1 seconds Dec 5 04:27:17 localhost ovn_controller[153000]: 2025-12-05T09:27:17Z|00045|memory|INFO|idl-cells-OVN_Southbound:4028 idl-cells-Open_vSwitch:1045 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:76 lflow-cache-entries-cache-matches:195 lflow-cache-size-KB:289 local_datapath_usage-KB:1 ofctrl_desired_flow_usage-KB:154 ofctrl_installed_flow_usage-KB:111 ofctrl_sb_flow_ref_usage-KB:67 Dec 5 04:27:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52284 DF PROTO=TCP SPT=56440 DPT=9101 SEQ=1682976214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4741C60000000001030307) Dec 5 04:27:20 localhost python3.9[154715]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 5 04:27:22 localhost python3.9[154808]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:22 localhost python3.9[154879]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926841.8091605-374-39923151383266/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:23 localhost python3.9[154982]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59027 DF PROTO=TCP SPT=39268 DPT=9101 SEQ=2916439837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC474E450000000001030307) Dec 5 04:27:23 localhost python3.9[155070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926842.866489-374-255939194655989/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 
checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:27:23 localhost podman[155128]: 2025-12-05 09:27:23.893485289 +0000 UTC m=+0.074201302 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:27:23 localhost podman[155128]: 2025-12-05 09:27:23.971857808 +0000 UTC m=+0.152573871 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 5 04:27:23 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
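The `Started /usr/bin/podman healthcheck run <id>` / `exec_died` / `Deactivated successfully` triplets above are systemd transient units probing the ovn_controller container (`health_status=starting`, later `healthy`). The same probe can be run by hand; a sketch assuming a local podman and the container name shown in the log:

```python
import subprocess

def healthcheck(container: str = "ovn_controller") -> bool:
    """Run the container's configured healthcheck once, as systemd does.

    `podman healthcheck run` executes the command configured for the
    container (here the mounted /openstack/healthcheck script) and exits
    0 when healthy, non-zero otherwise.
    """
    return subprocess.run(
        ["podman", "healthcheck", "run", container],
        capture_output=True,
    ).returncode == 0
```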
Dec 5 04:27:24 localhost podman[155182]: 2025-12-05 09:27:24.155611152 +0000 UTC m=+0.095680889 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, release=1763362218, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True) Dec 5 04:27:24 localhost podman[155182]: 2025-12-05 09:27:24.26271579 +0000 UTC m=+0.202785527 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.41.4, version=7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z) Dec 5 04:27:24 localhost python3.9[155352]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:25 localhost python3.9[155443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926844.4828677-506-119273308189840/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:25 localhost python3.9[155556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:26 localhost python3.9[155633]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926845.4639409-506-1784317562311/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52286 DF PROTO=TCP SPT=56440 DPT=9101 SEQ=1682976214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4759850000000001030307) Dec 5 04:27:27 localhost python3.9[155723]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:27:27 localhost python3.9[155817]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:28 localhost python3.9[155909]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:29 localhost python3.9[155957]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:29 localhost python3.9[156049]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29354 DF PROTO=TCP SPT=34608 DPT=9100 SEQ=1572312803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4766460000000001030307) Dec 5 
04:27:30 localhost python3.9[156097]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:31 localhost ovn_controller[153000]: 2025-12-05T09:27:31Z|00046|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory Dec 5 04:27:32 localhost python3.9[156189]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:27:33 localhost python3.9[156281]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:34 localhost python3.9[156329]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:27:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28977 DF PROTO=TCP SPT=46808 DPT=9105 SEQ=4100050080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4778050000000001030307) Dec 5 04:27:34 localhost python3.9[156421]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:35 localhost python3.9[156469]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:27:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11903 DF PROTO=TCP SPT=50856 DPT=9882 SEQ=2270017955 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC477C450000000001030307) Dec 5 04:27:36 localhost python3.9[156561]: 
ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:27:36 localhost systemd[1]: Reloading. Dec 5 04:27:36 localhost systemd-rc-local-generator[156586]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:27:36 localhost systemd-sysv-generator[156590]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:27:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:27:37 localhost python3.9[156691]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:37 localhost python3.9[156739]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:27:38 localhost python3.9[156831]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:38 localhost python3.9[156879]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:27:39 localhost python3.9[156971]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:27:39 localhost systemd[1]: Reloading. Dec 5 04:27:39 localhost systemd-rc-local-generator[156995]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:27:39 localhost systemd-sysv-generator[157001]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:27:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:27:39 localhost systemd[1]: Starting Create netns directory... Dec 5 04:27:39 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
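The recurring `kernel: DROPPING:` records throughout this capture are netfilter LOG output from a drop rule on br-ex: one flat run of KEY=VALUE pairs plus bare flags such as DF and SYN. For eyeballing which scrape ports (9100-9105, 9882) are being rejected, a tolerant parser sketch, with field names taken from the records themselves:

```python
import re

PAIR = re.compile(r"([A-Z]+)=(\S*)")

def parse_drop(record: str) -> dict:
    """Turn one 'DROPPING: ...' netfilter LOG record into a dict.

    KEY=VALUE tokens become entries (OUT= yields an empty string);
    bare uppercase tokens such as DF and SYN are kept as flags.
    """
    _, _, rest = record.partition("DROPPING:")
    fields = dict(PAIR.findall(rest))
    flags = [t for t in rest.split() if "=" not in t and t.isupper()]
    return {**fields, "FLAGS": flags}

sample = ("DROPPING: IN=br-ex OUT= SRC=192.168.122.10 "
          "DST=192.168.122.106 LEN=60 TTL=62 ID=28977 DF "
          "PROTO=TCP SPT=46808 DPT=9105 WINDOW=32640 SYN URGP=0")
rec = parse_drop(sample)
print(f"{rec['SRC']} -> {rec['DST']}:{rec['DPT']} flags={rec['FLAGS']}")
```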
Dec 5 04:27:39 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 5 04:27:39 localhost systemd[1]: Finished Create netns directory. Dec 5 04:27:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19674 DF PROTO=TCP SPT=32840 DPT=9102 SEQ=2884322088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4793360000000001030307) Dec 5 04:27:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19675 DF PROTO=TCP SPT=32840 DPT=9102 SEQ=2884322088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4797450000000001030307) Dec 5 04:27:42 localhost python3.9[157106]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:43 localhost python3.9[157198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19642 DF PROTO=TCP SPT=51046 DPT=9100 SEQ=3722036336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC479F170000000001030307) Dec 5 04:27:44 localhost python3.9[157271]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764926862.9221456-959-20074338444882/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:45 localhost python3.9[157363]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:27:46 localhost python3.9[157455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:27:47 localhost python3.9[157530]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764926866.2111545-1034-172206344774193/.source.json _original_basename=.7zaucez8 follow=False 
checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:27:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19644 DF PROTO=TCP SPT=51046 DPT=9100 SEQ=3722036336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC47AB050000000001030307) Dec 5 04:27:47 localhost python3.9[157622]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:27:50 localhost python3.9[157879]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False Dec 5 04:27:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19580 DF PROTO=TCP SPT=57666 DPT=9101 SEQ=1071223569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC47B7060000000001030307) Dec 5 04:27:51 localhost python3.9[157971]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 5 04:27:52 localhost python3.9[158063]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 5 04:27:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20885 DF PROTO=TCP SPT=35560 DPT=9101 SEQ=2264389101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC47C2460000000001030307) Dec 5 04:27:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:27:54 localhost systemd[1]: tmp-crun.ubWLfi.mount: Deactivated successfully. 
Dec 5 04:27:54 localhost podman[158105]: 2025-12-05 09:27:54.208552265 +0000 UTC m=+0.092951710 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 04:27:54 localhost podman[158105]: 2025-12-05 09:27:54.293684999 +0000 UTC m=+0.178084474 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 5 04:27:54 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
Dec 5 04:27:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19582 DF PROTO=TCP SPT=57666 DPT=9101 SEQ=1071223569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC47CEC50000000001030307) Dec 5 04:27:56 localhost python3[158207]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 5 04:27:57 localhost python3[158207]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9",#012 "Digest": "sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:29:20.327314945Z",#012 "Config": {#012 "User": "neutron",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 784141054,#012 "VirtualSize": 784141054,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53/diff:/var/lib/containers/storage/overlay/2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:75abaaa40a93c0e2bba524b6f8d4eb5f1c4c9a33db70c892c7582ec5b0827e5e",#012 "sha256:01f43f620d1ea2a9e584abe0cc14c336bedcf55765127c000d743f536dd36f25",#012 
"sha256:0bf5bd378602f28be423f5e84abddff3b103396fae3c167031b6e3fcfcf6f120"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "neutron",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf Dec 5 04:27:57 localhost podman[158257]: 2025-12-05 09:27:57.16424 +0000 UTC m=+0.094120645 container remove 0a8784d7ead485e7868b230c1ba307d2ce1fec3ca878ffbafde7cfc6af884f9e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, 
konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'a79d47e79d9c2e42edb251b1a5fb6c64'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 5 04:27:57 localhost python3[158207]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent Dec 5 04:27:57 localhost podman[158271]: Dec 5 04:27:57 localhost podman[158271]: 2025-12-05 09:27:57.279745793 +0000 UTC m=+0.094697192 container create 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 5 04:27:57 localhost podman[158271]: 2025-12-05 09:27:57.236741818 +0000 UTC m=+0.051693247 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 5 04:27:57 localhost python3[158207]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume 
/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 5 04:27:58 localhost python3.9[158401]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:27:59 localhost python3.9[158495]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:27:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19646 DF PROTO=TCP SPT=51046 DPT=9100 SEQ=3722036336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC47DA450000000001030307) Dec 5 04:27:59 localhost python3.9[158541]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:28:00 localhost python3.9[158632]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764926879.5985265-1298-114567327189843/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:00 localhost python3.9[158678]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:28:00 localhost systemd[1]: Reloading. Dec 5 04:28:00 localhost systemd-rc-local-generator[158704]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:28:00 localhost systemd-sysv-generator[158707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:28:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:28:01 localhost python3.9[158759]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:28:01 localhost systemd[1]: Reloading. Dec 5 04:28:01 localhost systemd-rc-local-generator[158788]: /etc/rc.d/rc.local is not marked executable, skipping. 
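The large PODMAN-CONTAINER-DEBUG image-inspect dump a few entries above is a multi-line JSON document flattened by the syslog transport: rsyslog escapes control characters as octal, so every newline shows up as `#012` (octal 012 = LF). A small sketch for restoring and parsing such payloads; it applies to the inspect-style entries, while plain command echoes like `podman rm --force ovn_metadata_agent` are not JSON and will raise.

```python
import json

def unescape_syslog(text: str) -> str:
    """Undo rsyslog control-character escaping:
    #012 is octal for LF, #011 for TAB."""
    return text.replace("#012", "\n").replace("#011", "\t")

def inspect_payload(log_line: str):
    """Extract and parse the JSON after 'PODMAN-CONTAINER-DEBUG:'.

    Raises json.JSONDecodeError if the entry was truncated in the log
    or is a plain command echo rather than inspect output.
    """
    _, _, payload = log_line.partition("PODMAN-CONTAINER-DEBUG:")
    return json.loads(unescape_syslog(payload).strip())
```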
Dec 5 04:28:01 localhost systemd-sysv-generator[158792]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:28:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:28:01 localhost systemd[1]: Starting ovn_metadata_agent container... Dec 5 04:28:02 localhost systemd[1]: Started libcrun container. Dec 5 04:28:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0e41b70143e443b07a9ffcdb99e037176e9a9e60f4e2f81d8310e36cf1167e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 5 04:28:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0e41b70143e443b07a9ffcdb99e037176e9a9e60f4e2f81d8310e36cf1167e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 04:28:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:28:02 localhost podman[158801]: 2025-12-05 09:28:02.127453127 +0000 UTC m=+0.152408028 container init 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: + sudo -E kolla_set_configs Dec 5 04:28:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. 
Dec 5 04:28:02 localhost podman[158801]: 2025-12-05 09:28:02.182105811 +0000 UTC m=+0.207060652 container start 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:28:02 localhost edpm-start-podman-container[158801]: ovn_metadata_agent Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Validating config file Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Copying service configuration files Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Writing out command to execute Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /var/lib/neutron Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for 
/var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.pid.haproxy Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.conf Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: ++ cat /run_command Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: + CMD=neutron-ovn-metadata-agent Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: + ARGS= Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: + sudo kolla_copy_cacerts Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: + [[ ! -n '' ]] Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: + . kolla_extend_start Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: Running command: 'neutron-ovn-metadata-agent' Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\''' Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: + umask 0022 Dec 5 04:28:02 localhost ovn_metadata_agent[158815]: + exec neutron-ovn-metadata-agent Dec 5 04:28:02 localhost podman[158824]: 2025-12-05 09:28:02.279844033 +0000 UTC m=+0.091355053 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:28:02 localhost podman[158824]: 2025-12-05 09:28:02.364181344 +0000 UTC m=+0.175692344 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 5 04:28:02 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:28:02 localhost edpm-start-podman-container[158800]: Creating additional drop-in dependency for "ovn_metadata_agent" (1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465) Dec 5 04:28:02 localhost systemd[1]: Reloading. Dec 5 04:28:02 localhost systemd-sysv-generator[158894]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:28:02 localhost systemd-rc-local-generator[158891]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:28:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:28:02 localhost systemd[1]: tmp-crun.EAJ0yL.mount: Deactivated successfully. Dec 5 04:28:02 localhost systemd[1]: Started ovn_metadata_agent container. Dec 5 04:28:03 localhost systemd[1]: session-51.scope: Deactivated successfully. Dec 5 04:28:03 localhost systemd[1]: session-51.scope: Consumed 31.759s CPU time. 
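At this point the container is up (init, start, the first health_status probe, and that probe's exec_died are all logged) and Ansible session 51 has closed; everything that follows is neutron-ovn-metadata-agent dumping its effective configuration through oslo.config's log_opt_values, one "option = value" DEBUG entry per journal line. When triaging such a dump it can help to fold it back into a dict. A minimal sketch, assuming unwrapped journal lines as input (the regex and function name are illustrative, not part of neutron or oslo.config):

import re

# Matches oslo.config dump entries of the form:
#   ... DEBUG neutron.agent.ovn.metadata_agent [-] <option> = <value> log_opt_values /usr/lib/.../cfg.py:NNNN
# Separator/banner entries without "<option> =" are simply skipped.
OPT_RE = re.compile(r"\[-\] (?P<key>[\w.]+) =(?P<val>.*?)\s*log_opt_values ")

def parse_opt_dump(journal_lines):
    opts = {}
    for line in journal_lines:
        match = OPT_RE.search(line)
        if match:
            opts[match.group("key")] = match.group("val").strip()
    return opts

Applied to the entries below, this would yield, for example, opts["metadata_workers"] == "1" and opts["nova_metadata_host"] == "nova-metadata-internal.openstack.svc"; masked values such as transport_url and metadata_proxy_shared_secret come back as "****", exactly as logged.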
Dec 5 04:28:03 localhost systemd-logind[760]: Session 51 logged out. Waiting for processes to exit. Dec 5 04:28:03 localhost systemd-logind[760]: Removed session 51. Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.816 158820 INFO neutron.common.config [-] Logging enabled!#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.816 158820 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.816 158820 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.818 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.818 158820 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.818 158820 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.818 158820 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.818 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.819 158820 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.819 158820 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.819 158820 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.819 158820 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.819 158820 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.820 158820 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.820 158820 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy = 
keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.820 158820 DEBUG neutron.agent.ovn.metadata_agent [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.820 158820 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.820 158820 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.820 158820 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.821 158820 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.821 158820 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.821 158820 DEBUG neutron.agent.ovn.metadata_agent [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.821 158820 DEBUG neutron.agent.ovn.metadata_agent [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.821 158820 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.822 158820 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.822 158820 DEBUG neutron.agent.ovn.metadata_agent [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.822 158820 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.822 158820 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 
'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.822 158820 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.822 158820 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.823 158820 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.823 158820 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.823 158820 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.823 158820 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.823 158820 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.823 158820 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.824 158820 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.824 158820 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.824 158820 DEBUG neutron.agent.ovn.metadata_agent [-] host = np0005546419.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.824 158820 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.824 158820 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.825 158820 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 
2025-12-05 09:28:03.825 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.825 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.825 158820 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.825 158820 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.825 158820 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.825 158820 DEBUG neutron.agent.ovn.metadata_agent [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.826 158820 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.826 158820 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.826 158820 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.826 158820 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.826 158820 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.826 158820 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.827 158820 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.827 158820 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.827 158820 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.827 158820 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.827 158820 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.828 158820 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.828 158820 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.828 158820 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.828 158820 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.828 158820 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.829 158820 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.829 158820 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.829 158820 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.829 158820 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.829 158820 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.829 158820 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.830 158820 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = 
True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.830 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.830 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.830 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.830 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.830 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.831 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.831 158820 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.831 158820 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.831 158820 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.831 158820 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.831 158820 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.832 158820 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.832 158820 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.832 158820 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.832 158820 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step = 20 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.832 158820 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.833 158820 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.833 158820 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.833 158820 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.833 158820 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.833 158820 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.833 158820 DEBUG neutron.agent.ovn.metadata_agent [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.834 158820 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.834 158820 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.834 158820 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.834 158820 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.834 158820 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.834 158820 DEBUG neutron.agent.ovn.metadata_agent [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.835 158820 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.835 158820 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost 
ovn_metadata_agent[158815]: 2025-12-05 09:28:03.835 158820 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.835 158820 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.835 158820 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.835 158820 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.835 158820 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.836 158820 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.836 158820 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.836 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.836 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.836 158820 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.837 158820 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.837 158820 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.837 158820 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.837 158820 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.837 158820 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.837 158820 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.838 158820 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.838 158820 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.838 158820 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.838 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.838 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.839 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.839 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.839 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.839 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.839 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.839 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.840 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.840 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.840 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.840 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.840 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.840 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.841 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.841 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.841 158820 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.841 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.841 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.842 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.842 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.842 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.842 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.842 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.842 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.843 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.843 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.843 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.843 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.843 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.844 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.844 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.844 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.844 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.844 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.845 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.845 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.845 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost 
ovn_metadata_agent[158815]: 2025-12-05 09:28:03.845 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.845 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.846 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.846 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.846 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.846 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.846 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.846 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.847 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.847 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.847 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.847 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.847 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.848 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.848 158820 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.848 158820 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.848 158820 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.848 158820 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.848 158820 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.849 158820 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.849 158820 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.849 158820 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.849 158820 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.849 158820 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.849 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.850 158820 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.850 158820 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.850 158820 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.850 158820 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port = 500 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.850 158820 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.851 158820 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.851 158820 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.851 158820 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.851 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.851 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.851 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.852 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.852 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.852 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.852 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.852 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.852 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.853 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.853 158820 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.853 158820 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.853 158820 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.853 158820 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.853 158820 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.854 158820 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.854 158820 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.854 158820 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.854 158820 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.854 158820 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.855 158820 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.855 158820 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.855 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.855 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.855 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.855 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.856 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.856 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.856 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.856 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.856 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.856 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.857 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.857 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.857 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.857 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.857 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.857 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.858 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.858 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.858 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.858 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.858 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.858 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.858 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.858 158820 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.858 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.859 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.859 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.859 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.859 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.859 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.859 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.860 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.860 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.860 158820 DEBUG neutron.agent.ovn.metadata_agent [-] 
ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.860 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.860 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.861 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.861 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.861 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.861 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.861 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.861 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.862 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.862 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.862 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.862 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.862 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.863 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.863 158820 DEBUG 
neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.863 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.863 158820 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.863 158820 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.864 158820 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.864 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.864 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.864 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.864 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.864 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.865 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.865 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.865 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.865 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.865 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.866 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.866 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.866 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.866 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.866 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.867 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.867 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.867 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.867 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.867 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.868 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.868 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.868 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.868 158820 
DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.868 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.868 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.869 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.869 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.869 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.869 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.869 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.870 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.870 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.870 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.870 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.870 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.871 158820 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.871 158820 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
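The long run of "group.option = value" lines above is oslo.config's standard startup listing: with debug = True, the service calls ConfigOpts.log_opt_values(), which emits one DEBUG line per registered option from cfg.py:2609 and closes the dump with the asterisk separator logged at cfg.py:2613. Options registered as secret, such as oslo_messaging_notifications.transport_url, are printed as ****. A minimal sketch of the mechanism, assuming a standalone script with made-up options rather than the agent's real option set:

    # Sketch only: reproduces the kind of dump seen above with invented options.
    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.IntOpt('thread_pool_size', default=8),
         cfg.StrOpt('user')],  # defaults to None, printed as "user = None"
        group='privsep_link')
    # secret=True is what makes transport_url appear as **** above.
    CONF.register_opts([cfg.StrOpt('transport_url', secret=True)])

    logging.basicConfig(level=logging.DEBUG)
    CONF([])  # parse an (empty) command line
    # Emits one DEBUG line per option, bracketed by ****... separators.
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
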
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.912 158820 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.913 158820 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.913 158820 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.913 158820 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.913 158820 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.929 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 22ecc443-b9ab-4c88-a730-5598bd07d403 (UUID: 22ecc443-b9ab-4c88-a730-5598bd07d403) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.948 158820 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.948 158820 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.949 158820 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.949 158820 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.951 158820 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.957 158820 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.964 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: PortBindingCreateWithChassis(events=('create',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:e6:3a 192.168.0.214'], port_security=['fa:16:3e:04:e6:3a 192.168.0.214'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.214/24', 'neutron:device_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'np0005546419.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1af67ae0-d372-40b9-b93c-60c041b7465b 80b85888-ac92-454c-bb81-84292c7ac789', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=961a9491-8d79-4baf-950b-57a666c30c22, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=c2f95d81-2317-46b9-8146-596eac8f9acb) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
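Two things happen in the span above. First, the agent wires up its OVSDB clients: ovsdbapp builds in-memory indexes for the tables it queries (Bridge, Port and Interface on the local Open vSwitch at tcp:127.0.0.1:6640; Chassis, Datapath_Binding and Chassis_Private on the OVN southbound at tcp:ovsdbserver-sb.openstack.svc:6642), then loads its chassis UUID and integration bridge from the local DB. Second, table changes are delivered through registered row events: the "Matched CREATE: PortBindingCreateWithChassis(...)" entry is ovsdbapp reporting that a freshly inserted Port_Binding row, whose options carry requested-chassis np0005546419.localdomain, matched one of the agent's events. A sketch of that row-event pattern, where only the RowEvent base class and its run() signature come from ovsdbapp and the rest is illustrative:

    # Sketch of the ovsdbapp row-event pattern seen in the log above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingCreateWithChassis(row_event.RowEvent):
        """Fires when a Port_Binding row is inserted."""

        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            # Watch for INSERTs into Port_Binding, unconditionally;
            # filtering happens in run().
            super().__init__((self.ROW_CREATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # The agent only provisions metadata when the new port was
            # requested on this chassis, as in the matched row above.
            if row.options.get('requested-chassis') == self.chassis_name:
                print('port %s bound to our chassis' % row.logical_port)
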
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.965 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '22ecc443-b9ab-4c88-a730-5598bd07d403'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[], external_ids={'neutron:ovn-metadata-id': '0a0f5bc9-989f-5083-861a-a56d04c0036b', 'neutron:ovn-metadata-sb-cfg': '1'}, name=22ecc443-b9ab-4c88-a730-5598bd07d403, nb_cfg_timestamp=1764926821652, nb_cfg=3) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.966 158820 INFO neutron.agent.ovn.metadata.agent [-] Port c2f95d81-2317-46b9-8146-596eac8f9acb in datapath 86f5c13f-3cf8-4808-86c3-060f6b38ab5b bound to our chassis on insert#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.967 158820 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.967 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.967 158820 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.968 158820 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.968 158820 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.970 158820 DEBUG oslo_service.service [-] Started child 158921 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.973 158820 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 86f5c13f-3cf8-4808-86c3-060f6b38ab5b#033[00m
Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.974 158820 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo',
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpy3e3qjr3/privsep.sock']#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.975 158921 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-961693'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.994 158921 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.995 158921 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.995 158921 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.997 158921 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Dec 5 04:28:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:03.999 158921 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Dec 5 04:28:04 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:04.011 158921 INFO eventlet.wsgi.server [-] (158921) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m Dec 5 04:28:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58576 DF PROTO=TCP SPT=32864 DPT=9105 SEQ=1847266939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC47ED460000000001030307) Dec 5 04:28:04 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:04.623 158820 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Dec 5 04:28:04 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:04.624 158820 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpy3e3qjr3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Dec 5 04:28:04 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:04.495 158926 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 5 04:28:04 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:04.502 158926 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 5 04:28:04 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:04.505 158926 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Dec 5 04:28:04 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:04.506 158926 INFO oslo.privsep.daemon [-] privsep daemon running as pid 158926#033[00m Dec 5 04:28:04 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:04.629 158926 DEBUG oslo.privsep.daemon [-] 
privsep: reply[20404baa-7895-425a-8b35-fd5d5ea3bd98]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:28:05 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:05.038 158926 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:28:05 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:05.038 158926 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:28:05 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:05.038 158926 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:28:05 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:05.489 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[25db94ae-262d-41c1-b9cc-6300f5d5f89a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:28:05 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:05.491 158820 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp4e8dycm7/privsep.sock']#033[00m Dec 5 04:28:06 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:06.076 158820 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Dec 5 04:28:06 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:06.077 158820 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp4e8dycm7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Dec 5 04:28:06 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:05.973 158937 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 5 04:28:06 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:05.976 158937 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 5 04:28:06 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:05.979 158937 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Dec 5 04:28:06 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:05.979 158937 INFO oslo.privsep.daemon [-] privsep daemon running as pid 158937#033[00m Dec 5 04:28:06 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:06.080 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e8edd8-c56f-4d04-b81a-406d144813cc]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:28:06 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:06.502 158937 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:28:06 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:06.502 158937 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 
0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:28:06 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:06.502 158937 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.007 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[5e21ea20-0bea-4c63-a24a-efb1ed23c3f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.011 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[a689271c-6adb-4cf5-9435-8978142971ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.026 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8819e3-dc84-4f2f-a4e2-494fa7379697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.043 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[4ccf39e6-af18-4e42-b77e-034e06557c39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86f5c13f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0c:1c:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 104, 'tx_packets': 68, 'rx_bytes': 8926, 'tx_bytes': 7143, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 104, 'tx_packets': 68, 'rx_bytes': 8926, 'tx_bytes': 7143, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 
'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483664], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685362, 'reachable_time': 29640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 17, 'outoctets': 1164, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 17, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 1164, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 17, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 158947, 'error': None, 'target': 'ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.059 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf9e752-ed90-4a51-b5e3-e146acc6738e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap86f5c13f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685372, 'tstamp': 685372}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 158948, 'error': None, 'target': 'ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap86f5c13f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685374, 'tstamp': 685374}]], 'header': {'length': 96, 'type': 20, 'flags': 
2, 'sequence_number': 255, 'pid': 158948, 'error': None, 'target': 'ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 10, 'prefixlen': 64, 'flags': 128, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::a9fe:a9fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685369, 'tstamp': 685369}], ['IFA_FLAGS', 128]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 158948, 'error': None, 'target': 'ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 10, 'prefixlen': 64, 'flags': 128, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:1cd5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685362, 'tstamp': 685362}], ['IFA_FLAGS', 128]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 158948, 'error': None, 'target': 'ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.111 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[bddf83cf-8814-4344-9d16-3b6b59b3a08a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.113 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86f5c13f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.116 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86f5c13f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.117 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.117 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap86f5c13f-30, col_values=(('external_ids', {'iface-id': '6eec4798-2413-4eda-86b7-a390f3150ec8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.118 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.122 158820 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpgsh5wrgu/privsep.sock']#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.716 158820 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
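With the ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b namespace prepared (the netlink replies above show tap86f5c13f-31 up inside it, carrying 169.254.169.254/32 and 192.168.0.2/24), the agent plugs the OVS end of the veth pair into the integration bridge: it removes tap86f5c13f-30 from br-ex if present, adds it to br-int, and sets external_ids:iface-id to the OVN logical port UUID so ovn-controller will bind the interface. The two "Transaction caused no change" entries show these commands are idempotent; the database already held the desired state. Roughly equivalent ovsdbapp calls are sketched below, with the caveats that the log shows three single-command transactions (grouped into one here for brevity) and that helper names follow recent ovsdbapp releases:

    # Sketch: re-create the OVS transactions above with ovsdbapp's
    # Open_vSwitch schema API (connection details taken from the log).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    PORT = 'tap86f5c13f-30'
    IFACE_ID = '6eec4798-2413-4eda-86b7-a390f3150ec8'  # OVN logical port UUID

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port(PORT, bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', PORT, may_exist=True))
        # iface-id is how ovn-controller maps this OVS interface to the OVN
        # logical port; when the values already match, the commit is a no-op
        # ("Transaction caused no change" in the log).
        txn.add(ovs.db_set('Interface', PORT,
                           ('external_ids', {'iface-id': IFACE_ID})))
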
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.717 158820 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgsh5wrgu/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.607 158957 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.613 158957 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.617 158957 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.617 158957 INFO oslo.privsep.daemon [-] privsep daemon running as pid 158957#033[00m
Dec 5 04:28:07 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:07.720 158957 DEBUG oslo.privsep.daemon [-] privsep: reply[36425125-696a-4c3c-bad8-5ee9e1fe6eda]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 04:28:08 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:08.176 158957 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:28:08 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:08.176 158957 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:28:08 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:08.177 158957 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:28:08 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:08.710 158957 DEBUG oslo.privsep.daemon [-] privsep: reply[6d88a4e4-8a03-4f23-afba-6116b15182e1]: (4, ['ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b']) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 04:28:08 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:08.714 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, column=external_ids, values=({'neutron:ovn-metadata-id': '0a0f5bc9-989f-5083-861a-a56d04c0036b'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 5 04:28:08 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:08.714 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 5 04:28:08 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:08.715 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
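The last two transactions are the agent's self-registration: DbAddCommand merges neutron:ovn-metadata-id (the stable agent UUID 0a0f5bc9-989f-5083-861a-a56d04c0036b) into its Chassis_Private external_ids, and DbSetCommand records neutron:ovn-bridge = br-int. neutron-server reads these keys, along with the nb_cfg / nb_cfg_timestamp heartbeat visible in the ChassisPrivateCreateEvent row earlier, to list and health-check the OVN metadata agent. The same bookkeeping against the southbound DB can be sketched as follows, under the caveat that the schema class name and db_set/db_add signatures are taken from recent ovsdbapp releases and should be treated as illustrative:

    # Sketch: the Chassis_Private external_ids updates above, via ovsdbapp's
    # OVN_Southbound schema API.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl as sb_idl

    idl = connection.OvsdbIdl.from_server(
        'tcp:ovsdbserver-sb.openstack.svc:6642', 'OVN_Southbound')
    sb = sb_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=180))

    CHASSIS = '22ecc443-b9ab-4c88-a730-5598bd07d403'
    # db_add merges a key into the map column instead of replacing the map.
    sb.db_add('Chassis_Private', CHASSIS, 'external_ids',
              {'neutron:ovn-metadata-id':
               '0a0f5bc9-989f-5083-861a-a56d04c0036b'}).execute(check_error=True)
    sb.db_set('Chassis_Private', CHASSIS,
              ('external_ids', {'neutron:ovn-bridge': 'br-int'}),
              if_exists=True).execute(check_error=True)
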
Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.312 158820 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.312 158820 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.312 158820 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.312 158820 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.313 158820 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.313 158820 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.313 158820 DEBUG oslo_service.service [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.313 158820 DEBUG oslo_service.service [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.314 158820 DEBUG oslo_service.service [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.314 158820 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.314 158820 DEBUG oslo_service.service [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.314 158820 DEBUG oslo_service.service [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.315 158820 DEBUG oslo_service.service [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.315 158820 DEBUG oslo_service.service [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.315 158820 DEBUG oslo_service.service [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.316 158820 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.316 158820 DEBUG oslo_service.service [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.316 158820 DEBUG oslo_service.service [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.316 158820 DEBUG oslo_service.service [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.316 158820 DEBUG oslo_service.service [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.317 158820 DEBUG oslo_service.service [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.317 158820 DEBUG oslo_service.service [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.317 158820 DEBUG oslo_service.service [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.317 158820 DEBUG oslo_service.service [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.318 158820 DEBUG oslo_service.service [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.318 158820 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.318 158820 DEBUG oslo_service.service [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.319 158820 DEBUG oslo_service.service [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.319 158820 DEBUG oslo_service.service [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 
04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.320 158820 DEBUG oslo_service.service [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.320 158820 DEBUG oslo_service.service [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.320 158820 DEBUG oslo_service.service [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.320 158820 DEBUG oslo_service.service [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.320 158820 DEBUG oslo_service.service [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.321 158820 DEBUG oslo_service.service [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.321 158820 DEBUG oslo_service.service [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.321 158820 DEBUG oslo_service.service [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.321 158820 DEBUG oslo_service.service [-] host = np0005546419.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.322 158820 DEBUG oslo_service.service [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.322 158820 DEBUG oslo_service.service [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.322 158820 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.322 158820 DEBUG oslo_service.service [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.323 158820 DEBUG oslo_service.service [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.323 158820 DEBUG oslo_service.service [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.323 158820 DEBUG oslo_service.service [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.323 158820 DEBUG oslo_service.service [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.324 158820 DEBUG oslo_service.service [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.324 158820 DEBUG oslo_service.service [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.324 158820 DEBUG oslo_service.service [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.324 158820 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.325 158820 DEBUG oslo_service.service [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.325 158820 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.325 158820 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.325 158820 DEBUG oslo_service.service [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.325 158820 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.326 158820 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.326 158820 DEBUG oslo_service.service [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.326 158820 DEBUG oslo_service.service [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.326 158820 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.326 158820 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.327 158820 DEBUG oslo_service.service [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.327 158820 DEBUG oslo_service.service [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.327 158820 DEBUG oslo_service.service [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.327 158820 DEBUG oslo_service.service [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.328 158820 DEBUG oslo_service.service [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.328 158820 DEBUG oslo_service.service [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.328 158820 DEBUG oslo_service.service [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.328 158820 DEBUG oslo_service.service [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.328 158820 DEBUG oslo_service.service [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.329 158820 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.329 158820 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.329 158820 DEBUG oslo_service.service [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.329 158820 DEBUG oslo_service.service [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.330 158820 DEBUG oslo_service.service [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.330 158820 
DEBUG oslo_service.service [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.330 158820 DEBUG oslo_service.service [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.330 158820 DEBUG oslo_service.service [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.330 158820 DEBUG oslo_service.service [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.331 158820 DEBUG oslo_service.service [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.331 158820 DEBUG oslo_service.service [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.331 158820 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.331 158820 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.332 158820 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.332 158820 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.332 158820 DEBUG oslo_service.service [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.332 158820 DEBUG oslo_service.service [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.332 158820 DEBUG oslo_service.service [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.333 158820 DEBUG oslo_service.service [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.333 158820 DEBUG oslo_service.service [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.333 158820 DEBUG oslo_service.service [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.333 158820 DEBUG 
oslo_service.service [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.334 158820 DEBUG oslo_service.service [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.334 158820 DEBUG oslo_service.service [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.334 158820 DEBUG oslo_service.service [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.334 158820 DEBUG oslo_service.service [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.334 158820 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.335 158820 DEBUG oslo_service.service [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.335 158820 DEBUG oslo_service.service [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.335 158820 DEBUG oslo_service.service [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.335 158820 DEBUG oslo_service.service [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.335 158820 DEBUG oslo_service.service [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.336 158820 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.336 158820 DEBUG oslo_service.service [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.336 158820 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.336 158820 DEBUG oslo_service.service [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.336 158820 DEBUG oslo_service.service [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.337 158820 DEBUG oslo_service.service [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: 
%(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.337 158820 DEBUG oslo_service.service [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.337 158820 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.338 158820 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.338 158820 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.339 158820 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.339 158820 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.339 158820 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.339 158820 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.340 158820 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.340 158820 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.340 158820 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.340 158820 DEBUG oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.341 158820 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.341 158820 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.341 158820 DEBUG oslo_service.service [-] oslo_policy.enforce_scope = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.341 158820 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.342 158820 DEBUG oslo_service.service [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.342 158820 DEBUG oslo_service.service [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.342 158820 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.342 158820 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.342 158820 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.343 158820 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.343 158820 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.343 158820 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.343 158820 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.344 158820 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.344 158820 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.344 158820 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.344 158820 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 
2025-12-05 09:28:09.345 158820 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.345 158820 DEBUG oslo_service.service [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.345 158820 DEBUG oslo_service.service [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.345 158820 DEBUG oslo_service.service [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.345 158820 DEBUG oslo_service.service [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.346 158820 DEBUG oslo_service.service [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.346 158820 DEBUG oslo_service.service [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.346 158820 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.346 158820 DEBUG oslo_service.service [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.348 158820 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.349 158820 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.349 158820 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.350 158820 DEBUG oslo_service.service [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.350 158820 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.350 158820 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.350 158820 DEBUG oslo_service.service [-] 
privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.351 158820 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.351 158820 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.351 158820 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.351 158820 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.352 158820 DEBUG oslo_service.service [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.352 158820 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.352 158820 DEBUG oslo_service.service [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.352 158820 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.352 158820 DEBUG oslo_service.service [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.353 158820 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.353 158820 DEBUG oslo_service.service [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.353 158820 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.353 158820 DEBUG oslo_service.service [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.354 158820 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.354 158820 DEBUG oslo_service.service [-] privsep_conntrack.user = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.354 158820 DEBUG oslo_service.service [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.354 158820 DEBUG oslo_service.service [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.355 158820 DEBUG oslo_service.service [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.355 158820 DEBUG oslo_service.service [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.355 158820 DEBUG oslo_service.service [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.355 158820 DEBUG oslo_service.service [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.355 158820 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.356 158820 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.356 158820 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.356 158820 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.356 158820 DEBUG oslo_service.service [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.357 158820 DEBUG oslo_service.service [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.357 158820 DEBUG oslo_service.service [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.357 158820 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.357 158820 DEBUG oslo_service.service [-] AGENT.use_random_fully = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.357 158820 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.358 158820 DEBUG oslo_service.service [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.358 158820 DEBUG oslo_service.service [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.358 158820 DEBUG oslo_service.service [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.358 158820 DEBUG oslo_service.service [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.359 158820 DEBUG oslo_service.service [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.359 158820 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.359 158820 DEBUG oslo_service.service [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.359 158820 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.360 158820 DEBUG oslo_service.service [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.360 158820 DEBUG oslo_service.service [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.360 158820 DEBUG oslo_service.service [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.360 158820 DEBUG oslo_service.service [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.360 158820 DEBUG oslo_service.service [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.361 158820 DEBUG oslo_service.service [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.361 
158820 DEBUG oslo_service.service [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.361 158820 DEBUG oslo_service.service [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.361 158820 DEBUG oslo_service.service [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.362 158820 DEBUG oslo_service.service [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.362 158820 DEBUG oslo_service.service [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.362 158820 DEBUG oslo_service.service [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.362 158820 DEBUG oslo_service.service [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.362 158820 DEBUG oslo_service.service [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.363 158820 DEBUG oslo_service.service [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.363 158820 DEBUG oslo_service.service [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.363 158820 DEBUG oslo_service.service [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.363 158820 DEBUG oslo_service.service [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.363 158820 DEBUG oslo_service.service [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.364 158820 DEBUG oslo_service.service [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.364 158820 DEBUG oslo_service.service [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.364 158820 DEBUG oslo_service.service [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.364 158820 
DEBUG oslo_service.service [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.365 158820 DEBUG oslo_service.service [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.365 158820 DEBUG oslo_service.service [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.365 158820 DEBUG oslo_service.service [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.365 158820 DEBUG oslo_service.service [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.365 158820 DEBUG oslo_service.service [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.366 158820 DEBUG oslo_service.service [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.366 158820 DEBUG oslo_service.service [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.366 158820 DEBUG oslo_service.service [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.366 158820 DEBUG oslo_service.service [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.366 158820 DEBUG oslo_service.service [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.367 158820 DEBUG oslo_service.service [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.367 158820 DEBUG oslo_service.service [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.367 158820 DEBUG oslo_service.service [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.367 158820 DEBUG oslo_service.service [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.368 158820 DEBUG oslo_service.service [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.368 158820 
DEBUG oslo_service.service [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.368 158820 DEBUG oslo_service.service [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.368 158820 DEBUG oslo_service.service [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.369 158820 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.369 158820 DEBUG oslo_service.service [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.369 158820 DEBUG oslo_service.service [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.370 158820 DEBUG oslo_service.service [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.370 158820 DEBUG oslo_service.service [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.370 158820 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.371 158820 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.371 158820 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.371 158820 DEBUG oslo_service.service [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.372 158820 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.372 158820 DEBUG oslo_service.service [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.372 158820 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.372 158820 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
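The "Full set of CONF" block running through these entries is oslo.config's standard option dump: oslo.service logs every registered option at DEBUG on startup whenever log_options = True (visible earlier in this dump), and the banner rows of asterisks and equals signs come from log_opt_values() in oslo_config/cfg.py, the file cited on every line. Options registered as secret, such as metadata_proxy_shared_secret and transport_url, are masked as ****. A minimal sketch that reproduces this style of dump with stock oslo.config (the two option names here are illustrative stand-ins, not the agent's full set):

    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.IntOpt('agent_down_time', default=75),
        cfg.StrOpt('metadata_proxy_shared_secret', secret=True),  # logged as ****
    ])
    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('oslo_service.service')

    CONF([])                                 # parse an (empty) command line
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits banner + "opt = value" lines

The privsep_*.capabilities lists earlier in the dump ([21, 12, 1, 2, 19], [21, 12], [21], [12], [12, 21]) carry each privsep context's allowed Linux capabilities as raw numbers from linux/capability.h; decoding them shows what each privileged helper may do. A quick lookup using the standard kernel numbering:

    # Standard capability numbers from linux/capability.h.
    CAP_NAMES = {
        1: 'CAP_DAC_OVERRIDE', 2: 'CAP_DAC_READ_SEARCH', 12: 'CAP_NET_ADMIN',
        19: 'CAP_SYS_PTRACE', 21: 'CAP_SYS_ADMIN',
    }
    print([CAP_NAMES[n] for n in [21, 12, 1, 2, 19]])
    # ['CAP_SYS_ADMIN', 'CAP_NET_ADMIN', 'CAP_DAC_OVERRIDE',
    #  'CAP_DAC_READ_SEARCH', 'CAP_SYS_PTRACE']
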
Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.372 158820 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.373 158820 DEBUG oslo_service.service [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.373 158820 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.373 158820 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.373 158820 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.374 158820 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.374 158820 DEBUG oslo_service.service [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.374 158820 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.374 158820 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.374 158820 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.375 158820 DEBUG oslo_service.service [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.375 158820 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.375 158820 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.375 158820 DEBUG oslo_service.service [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.376 158820 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.376 158820 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval = 180 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.376 158820 DEBUG oslo_service.service [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.376 158820 DEBUG oslo_service.service [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.376 158820 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.377 158820 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.377 158820 DEBUG oslo_service.service [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.377 158820 DEBUG oslo_service.service [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.377 158820 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.378 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.378 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.378 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.378 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.379 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.379 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.379 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.379 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.379 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.380 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.380 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.380 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.380 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.381 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.381 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.381 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.381 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.382 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.382 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.382 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.382 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.382 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 
localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.383 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.383 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.383 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.383 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.384 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.384 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.439 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.440 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.440 158820 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.440 158820 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.440 158820 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.441 158820 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.441 158820 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:28:09 localhost ovn_metadata_agent[158815]: 2025-12-05 09:28:09.441 158820 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Dec 5 04:28:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=53610 DF PROTO=TCP SPT=60156 DPT=9102 SEQ=27875839 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4808670000000001030307) Dec 5 04:28:12 localhost sshd[158962]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:28:12 localhost systemd-logind[760]: New session 52 of user zuul. Dec 5 04:28:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53611 DF PROTO=TCP SPT=60156 DPT=9102 SEQ=27875839 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC480C850000000001030307) Dec 5 04:28:12 localhost systemd[1]: Started Session 52 of User zuul. Dec 5 04:28:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58577 DF PROTO=TCP SPT=32864 DPT=9105 SEQ=1847266939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC480E450000000001030307) Dec 5 04:28:13 localhost python3.9[159055]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:28:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29930 DF PROTO=TCP SPT=56438 DPT=9100 SEQ=3316998205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4814460000000001030307) Dec 5 04:28:14 localhost python3.9[159151]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:28:15 localhost python3.9[159256]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:28:15 localhost systemd[1]: libpod-1a596f8c49019553f32c27917d42be1ea03fccfe5f5987301eefb7fd673fef8f.scope: Deactivated successfully. 
Dec 5 04:28:15 localhost podman[159257]: 2025-12-05 09:28:15.643615232 +0000 UTC m=+0.078835708 container died 1a596f8c49019553f32c27917d42be1ea03fccfe5f5987301eefb7fd673fef8f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-type=git) Dec 5 04:28:15 localhost systemd[1]: tmp-crun.RBA0ZY.mount: Deactivated successfully. Dec 5 04:28:15 localhost systemd[1]: tmp-crun.9OXRlp.mount: Deactivated successfully. Dec 5 04:28:15 localhost podman[159257]: 2025-12-05 09:28:15.682538486 +0000 UTC m=+0.117758902 container cleanup 1a596f8c49019553f32c27917d42be1ea03fccfe5f5987301eefb7fd673fef8f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team) Dec 5 04:28:15 localhost podman[159270]: 2025-12-05 09:28:15.725920023 +0000 UTC m=+0.069359505 container remove 1a596f8c49019553f32c27917d42be1ea03fccfe5f5987301eefb7fd673fef8f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red 
Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1) Dec 5 04:28:15 localhost systemd[1]: libpod-conmon-1a596f8c49019553f32c27917d42be1ea03fccfe5f5987301eefb7fd673fef8f.scope: Deactivated successfully. Dec 5 04:28:16 localhost systemd[1]: var-lib-containers-storage-overlay-1470ee8f543baf54d5cf653d97a785841a6023aa58707b8ab81be849108da325-merged.mount: Deactivated successfully. Dec 5 04:28:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a596f8c49019553f32c27917d42be1ea03fccfe5f5987301eefb7fd673fef8f-userdata-shm.mount: Deactivated successfully. Dec 5 04:28:16 localhost python3.9[159377]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:28:17 localhost systemd[1]: Reloading. Dec 5 04:28:17 localhost systemd-rc-local-generator[159400]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:28:17 localhost systemd-sysv-generator[159407]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:28:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:28:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29932 DF PROTO=TCP SPT=56438 DPT=9100 SEQ=3316998205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4820460000000001030307) Dec 5 04:28:18 localhost python3.9[159503]: ansible-ansible.builtin.service_facts Invoked Dec 5 04:28:18 localhost network[159520]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 5 04:28:18 localhost network[159521]: 'network-scripts' will be removed from distribution in near future. Dec 5 04:28:18 localhost network[159522]: It is advised to switch to 'NetworkManager' instead for network management. Dec 5 04:28:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 5 04:28:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30156 DF PROTO=TCP SPT=42442 DPT=9101 SEQ=2770703816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC482C460000000001030307) Dec 5 04:28:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52289 DF PROTO=TCP SPT=56440 DPT=9101 SEQ=1682976214 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4838460000000001030307) Dec 5 04:28:24 localhost python3.9[159723]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:28:24 localhost systemd[1]: Reloading. Dec 5 04:28:24 localhost systemd-rc-local-generator[159752]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:28:24 localhost systemd-sysv-generator[159755]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:28:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:28:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:28:24 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target. 
Dec 5 04:28:24 localhost podman[159762]: 2025-12-05 09:28:24.49718687 +0000 UTC m=+0.096194367 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 04:28:24 localhost podman[159762]: 2025-12-05 09:28:24.569732089 +0000 UTC m=+0.168739596 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:28:24 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
Dec 5 04:28:25 localhost python3.9[159881]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:28:25 localhost python3.9[159974]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:28:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30158 DF PROTO=TCP SPT=42442 DPT=9101 SEQ=2770703816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4844060000000001030307) Dec 5 04:28:26 localhost python3.9[160109]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:28:27 localhost python3.9[160221]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:28:28 localhost python3.9[160329]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:28:28 localhost python3.9[160422]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:28:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29934 DF PROTO=TCP SPT=56438 DPT=9100 SEQ=3316998205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4850450000000001030307) Dec 5 04:28:30 localhost python3.9[160515]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:31 localhost python3.9[160607]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:32 localhost python3.9[160699]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:28:32 localhost systemd[1]: tmp-crun.PWpgQt.mount: Deactivated successfully. Dec 5 04:28:32 localhost podman[160792]: 2025-12-05 09:28:32.948452295 +0000 UTC m=+0.092058593 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 5 04:28:32 localhost podman[160792]: 2025-12-05 09:28:32.97852929 +0000 UTC m=+0.122135568 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Dec 5 04:28:32 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:28:33 localhost python3.9[160791]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:34 localhost python3.9[160901]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43347 DF PROTO=TCP SPT=56836 DPT=9105 SEQ=405353550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC48628A0000000001030307) Dec 5 04:28:34 localhost python3.9[160993]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:35 localhost python3.9[161085]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3710 DF PROTO=TCP SPT=47132 DPT=9882 SEQ=1371336188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4866450000000001030307) Dec 5 04:28:36 localhost python3.9[161177]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:36 localhost python3.9[161269]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:37 localhost python3.9[161361]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:37 localhost python3.9[161453]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:38 localhost python3.9[161545]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:39 localhost python3.9[161637]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:39 localhost python3.9[161729]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:28:40 localhost python3.9[161821]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:28:41 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50973 DF PROTO=TCP SPT=45670 DPT=9102 SEQ=844715248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC487D960000000001030307) Dec 5 04:28:41 localhost python3.9[161913]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 5 04:28:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50974 DF PROTO=TCP SPT=45670 DPT=9102 SEQ=844715248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4881850000000001030307) Dec 5 04:28:42 localhost python3.9[162005]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:28:42 localhost systemd[1]: Reloading. Dec 5 04:28:42 localhost systemd-rc-local-generator[162029]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:28:42 localhost systemd-sysv-generator[162034]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:28:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 5 04:28:43 localhost python3.9[162132]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:28:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64718 DF PROTO=TCP SPT=59232 DPT=9100 SEQ=1780174426 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4889760000000001030307) Dec 5 04:28:44 localhost python3.9[162225]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:28:44 localhost python3.9[162318]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:28:45 localhost python3.9[162411]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:28:46 localhost python3.9[162504]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:28:46 localhost python3.9[162597]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:28:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64720 DF PROTO=TCP SPT=59232 DPT=9100 SEQ=1780174426 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4895850000000001030307) Dec 5 04:28:47 localhost python3.9[162690]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:28:50 localhost python3.9[162783]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None Dec 5 04:28:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27182 DF PROTO=TCP SPT=35024 DPT=9101 SEQ=1605471281 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC48A1860000000001030307) Dec 5 04:28:51 localhost 
python3.9[162876]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Dec 5 04:28:53 localhost python3.9[162974]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546419.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Dec 5 04:28:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10337 DF PROTO=TCP SPT=40938 DPT=9882 SEQ=1552361085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC48AE460000000001030307) Dec 5 04:28:54 localhost python3.9[163074]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 5 04:28:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:28:55 localhost podman[163110]: 2025-12-05 09:28:55.203008034 +0000 UTC m=+0.086953576 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125) Dec 5 04:28:55 localhost podman[163110]: 2025-12-05 09:28:55.244752328 +0000 UTC m=+0.128697860 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 5 04:28:55 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:28:55 localhost python3.9[163142]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:28:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27184 DF PROTO=TCP SPT=35024 DPT=9101 SEQ=1605471281 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC48B9450000000001030307) Dec 5 04:28:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64722 DF PROTO=TCP SPT=59232 DPT=9100 SEQ=1780174426 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC48C6450000000001030307) Dec 5 04:29:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:29:03 localhost systemd[1]: tmp-crun.FSmn5t.mount: Deactivated successfully. 
Dec 5 04:29:03 localhost podman[163222]: 2025-12-05 09:29:03.207578115 +0000 UTC m=+0.090898137 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 5 04:29:03 localhost podman[163222]: 2025-12-05 09:29:03.218420519 +0000 UTC m=+0.101740541 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 04:29:03 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:29:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:29:03.873 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404 Dec 5 04:29:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:29:03.874 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409 Dec 5 04:29:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:29:03.876 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423 Dec 5 04:29:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62651 DF PROTO=TCP SPT=35746 DPT=9105 SEQ=1941891938 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC48D7C50000000001030307) Dec 5 04:29:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57438 DF PROTO=TCP SPT=52416 DPT=9882 SEQ=3211098219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC48DC460000000001030307) Dec 5 04:29:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29891 DF PROTO=TCP SPT=45988 DPT=9102 SEQ=4047765736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC48F2C70000000001030307) Dec 5 04:29:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29892 DF PROTO=TCP SPT=45988 DPT=9102 SEQ=4047765736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC48F6C50000000001030307) Dec 5 04:29:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64500 DF PROTO=TCP SPT=44698 DPT=9100 SEQ=2778800234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC48FEA60000000001030307) Dec 5 04:29:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64502 DF PROTO=TCP SPT=44698 DPT=9100 SEQ=2778800234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC490AC50000000001030307) Dec 5 04:29:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16021 DF PROTO=TCP SPT=43896 DPT=9101 SEQ=3478240861 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4916850000000001030307) Dec 5 04:29:20 localhost kernel: SELinux: Converting 2759 SID table entries... Dec 5 04:29:20 localhost kernel: SELinux: Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped). Dec 5 04:29:20 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 04:29:20 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 04:29:20 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 04:29:20 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 04:29:20 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 04:29:20 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 04:29:20 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 04:29:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30161 DF PROTO=TCP SPT=42442 DPT=9101 SEQ=2770703816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4922460000000001030307) Dec 5 04:29:26 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=19 res=1 Dec 5 04:29:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:29:26 localhost podman[164290]: 2025-12-05 09:29:26.217828 +0000 UTC m=+0.098289135 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:29:26 localhost podman[164290]: 2025-12-05 09:29:26.257681055 +0000 UTC m=+0.138142170 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2) Dec 5 04:29:26 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:29:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16023 DF PROTO=TCP SPT=43896 DPT=9101 SEQ=3478240861 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC492E450000000001030307) Dec 5 04:29:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64504 DF PROTO=TCP SPT=44698 DPT=9100 SEQ=2778800234 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC493A460000000001030307) Dec 5 04:29:30 localhost kernel: SELinux: Converting 2762 SID table entries... Dec 5 04:29:30 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 04:29:30 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 04:29:30 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 04:29:30 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 04:29:30 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 04:29:30 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 04:29:30 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 04:29:34 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=20 res=1 Dec 5 04:29:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:29:34 localhost systemd[1]: tmp-crun.uCLIkk.mount: Deactivated successfully. 
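[Annotation] The kernel "DROPPING:" records above are netfilter-style packet logs (the IN=/OUT=/MACSRC=/SRC=/DST=/PROTO= key=value layout matches the kernel LOG target's output), evidently from a drop rule on br-ex configured with the log prefix "DROPPING: "; the probed destination ports (9100-9105, 9882) look like metrics-exporter endpoints. A minimal Python sketch for pulling the useful fields out of such lines; the regex and field ordering are assumptions taken only from the samples in this log:

    import re

    # Field layout assumed from the "DROPPING:" samples in this log: netfilter
    # LOG-style key=value pairs. The leading spaces in " SRC=" and " PROTO="
    # matter, since MACSRC= and MACPROTO= would otherwise match first.
    DROP_RE = re.compile(
        r"DROPPING: IN=(?P<in_if>\S*) OUT=(?P<out_if>\S*)"
        r".*? SRC=(?P<src>\S+) DST=(?P<dst>\S+)"
        r".*? PROTO=(?P<proto>\S+) SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)"
    )

    def parse_drops(lines):
        """Yield (src, dst, proto, spt, dpt) for each DROPPING record."""
        for line in lines:
            m = DROP_RE.search(line)
            if m:
                yield m["src"], m["dst"], m["proto"], int(m["spt"]), int(m["dpt"])

    if __name__ == "__main__":
        import collections, sys
        # Example use: count dropped connection attempts per destination port.
        by_port = collections.Counter(dpt for *_, dpt in parse_drops(sys.stdin))
        for port, n in by_port.most_common():
            print(port, n)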
Dec 5 04:29:34 localhost podman[164408]: 2025-12-05 09:29:34.221910966 +0000 UTC m=+0.100882675 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 5 04:29:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41792 DF PROTO=TCP SPT=34900 DPT=9105 SEQ=384643460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC494CC50000000001030307) Dec 5 04:29:34 localhost podman[164408]: 2025-12-05 09:29:34.257928104 +0000 UTC m=+0.136899773 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:29:34 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:29:39 localhost kernel: SELinux: Converting 2762 SID table entries... Dec 5 04:29:39 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 04:29:39 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 04:29:39 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 04:29:39 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 04:29:39 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 04:29:39 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 04:29:39 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 04:29:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6683 DF PROTO=TCP SPT=32832 DPT=9102 SEQ=793162148 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4967F60000000001030307) Dec 5 04:29:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6684 DF PROTO=TCP SPT=32832 DPT=9102 SEQ=793162148 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC496C050000000001030307) Dec 5 04:29:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41793 DF PROTO=TCP SPT=34900 DPT=9105 SEQ=384643460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC496C450000000001030307) Dec 5 04:29:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63259 DF PROTO=TCP SPT=52096 DPT=9100 SEQ=3531588322 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4973D70000000001030307) Dec 5 04:29:47 localhost kernel: SELinux: Converting 2762 SID table entries... 
Dec 5 04:29:47 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 04:29:47 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 04:29:47 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 04:29:47 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 04:29:47 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 04:29:47 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 04:29:47 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 04:29:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63261 DF PROTO=TCP SPT=52096 DPT=9100 SEQ=3531588322 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC497FC50000000001030307) Dec 5 04:29:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52477 DF PROTO=TCP SPT=43238 DPT=9101 SEQ=906056324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC498BC50000000001030307) Dec 5 04:29:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27187 DF PROTO=TCP SPT=35024 DPT=9101 SEQ=1605471281 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4998460000000001030307) Dec 5 04:29:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52479 DF PROTO=TCP SPT=43238 DPT=9101 SEQ=906056324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC49A3850000000001030307) Dec 5 04:29:57 localhost kernel: SELinux: Converting 2762 SID table entries... Dec 5 04:29:57 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 04:29:57 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 04:29:57 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 04:29:57 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 04:29:57 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 04:29:57 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 04:29:57 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 04:29:57 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=23 res=1 Dec 5 04:29:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:29:57 localhost systemd[1]: tmp-crun.zgNyAG.mount: Deactivated successfully. 
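[Annotation] Each dbus-broker-launch "avc: op=load_policy" line with an incrementing seqno is paired with a kernel "SELinux: Converting N SID table entries..." burst and the same seven policy-capability lines, so the SELinux policy is being reloaded repeatedly here (seqno 19 through 25 in under two minutes); why it reloads so often is not visible in this log. A small sketch, assuming the standard syslog timestamp prefix in the first 15 columns, to pull those reload markers out for timing:

    import re, sys

    # Heuristic markers for one policy-reload cycle, assumed from this log:
    # a dbus-broker AVC "op=load_policy" line and the kernel SID-conversion line.
    LOAD = re.compile(r"op=load_policy .*?seqno=(\d+)")
    CONV = re.compile(r"SELinux: Converting (\d+) SID table entries")

    def reload_events(lines):
        for line in lines:
            ts = line[:15].strip()  # syslog "Dec  5 04:29:20" stamp; width may vary
            if (m := LOAD.search(line)):
                yield ts, "load_policy", int(m.group(1))
            elif (m := CONV.search(line)):
                yield ts, "sid_convert", int(m.group(1))

    for ts, kind, n in reload_events(sys.stdin):
        print(f"{ts}  {kind}  {n}")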
Dec 5 04:29:57 localhost podman[164454]: 2025-12-05 09:29:57.226289974 +0000 UTC m=+0.107260617 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 5 04:29:57 localhost podman[164454]: 2025-12-05 09:29:57.28944657 +0000 UTC m=+0.170417183 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 5 04:29:57 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
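[Annotation] The podman healthcheck entries carry the container name and result inline (health_status=healthy in every sample above), buried in the label dump. A throwaway extractor, with the field ordering assumed from these samples only:

    import re, sys

    # Assumed shape of the podman journal lines above:
    # "... container health_status <64-hex id> (image=..., name=<name>, ...,
    #  health_status=<status>, ...)". name= precedes health_status= in every
    # sample here; adjust if your podman orders labels differently.
    HEALTH = re.compile(
        r"container health_status (?P<cid>[0-9a-f]{64}) "
        r"\(.*?name=(?P<name>[^,]+), .*?health_status=(?P<status>[^,)]+)"
    )

    for line in sys.stdin:
        if (m := HEALTH.search(line)):
            print(m["name"], m["status"], m["cid"][:12])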
Dec 5 04:29:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63263 DF PROTO=TCP SPT=52096 DPT=9100 SEQ=3531588322 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC49B0450000000001030307) Dec 5 04:30:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:30:03.874 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:30:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:30:03.875 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:30:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:30:03.876 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:30:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47712 DF PROTO=TCP SPT=33170 DPT=9105 SEQ=1786807494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC49C2060000000001030307) Dec 5 04:30:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. 
Dec 5 04:30:05 localhost podman[164482]: 2025-12-05 09:30:05.194335227 +0000 UTC m=+0.082370788 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125) Dec 5 04:30:05 localhost podman[164482]: 2025-12-05 09:30:05.224507832 +0000 UTC m=+0.112543373 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 5 04:30:05 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:30:05 localhost kernel: SELinux: Converting 2762 SID table entries... Dec 5 04:30:05 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 04:30:05 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 04:30:05 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 04:30:05 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 04:30:05 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 04:30:05 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 04:30:05 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 04:30:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52245 DF PROTO=TCP SPT=45080 DPT=9882 SEQ=3476367487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC49C6460000000001030307) Dec 5 04:30:05 localhost systemd[1]: Reloading. Dec 5 04:30:05 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=24 res=1 Dec 5 04:30:05 localhost systemd-sysv-generator[164535]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:30:05 localhost systemd-rc-local-generator[164529]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:30:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:30:06 localhost systemd[1]: Reloading. Dec 5 04:30:06 localhost systemd-rc-local-generator[164573]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:30:06 localhost systemd-sysv-generator[164577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:30:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 5 04:30:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2337 DF PROTO=TCP SPT=54254 DPT=9102 SEQ=1581171462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC49DD270000000001030307) Dec 5 04:30:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2338 DF PROTO=TCP SPT=54254 DPT=9102 SEQ=1581171462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC49E1450000000001030307) Dec 5 04:30:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21755 DF PROTO=TCP SPT=34462 DPT=9100 SEQ=344899196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC49E9070000000001030307) Dec 5 04:30:15 localhost kernel: SELinux: Converting 2763 SID table entries... Dec 5 04:30:15 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 5 04:30:15 localhost kernel: SELinux: policy capability open_perms=1 Dec 5 04:30:15 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 5 04:30:15 localhost kernel: SELinux: policy capability always_check_network=0 Dec 5 04:30:15 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 5 04:30:15 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 5 04:30:15 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 5 04:30:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21757 DF PROTO=TCP SPT=34462 DPT=9100 SEQ=344899196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC49F5060000000001030307) Dec 5 04:30:19 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Dec 5 04:30:19 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=25 res=1 Dec 5 04:30:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22157 DF PROTO=TCP SPT=47048 DPT=9101 SEQ=835615991 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A01050000000001030307) Dec 5 04:30:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16026 DF PROTO=TCP SPT=43896 DPT=9101 SEQ=3478240861 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A0C460000000001030307) Dec 5 04:30:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22159 DF PROTO=TCP SPT=47048 DPT=9101 SEQ=835615991 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A18C60000000001030307) Dec 5 04:30:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:30:28 localhost systemd[1]: tmp-crun.aPaK9Q.mount: Deactivated successfully. 
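[Annotation] Many of the DROPPING records repeat the same source port and SEQ with a rising IP ID and roughly doubling spacing (e.g. SPT=34462 DPT=9100 SEQ=344899196 at 04:30:14, 04:30:17 and 04:30:29, IDs 21755/21757/21759): those are TCP SYN retransmissions of a single blocked connection attempt, i.e. the client at 192.168.122.10 keeps retrying while the firewall keeps dropping. A sketch that groups the records accordingly (regex assumed from the samples):

    import collections, re, sys

    # Group DROPPING records by (SRC, SPT, DPT, SEQ): repeats of the same SEQ
    # from the same source port are SYN retransmissions of one blocked attempt.
    KEY = re.compile(r" SRC=(\S+) .*? SPT=(\d+) DPT=(\d+) SEQ=(\d+)")

    attempts = collections.defaultdict(list)
    for line in sys.stdin:
        if "DROPPING:" in line and (m := KEY.search(line)):
            attempts[m.groups()].append(line[:15].strip())  # syslog stamp

    for (src, spt, dpt, seq), stamps in sorted(attempts.items()):
        print(f"{src}:{spt} -> :{dpt} seq={seq} tries={len(stamps)} at {stamps}")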
Dec 5 04:30:28 localhost podman[164658]: 2025-12-05 09:30:28.256411951 +0000 UTC m=+0.130962829 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:30:28 localhost podman[164658]: 2025-12-05 09:30:28.338772827 +0000 UTC m=+0.213323705 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 5 04:30:28 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
Dec 5 04:30:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21759 DF PROTO=TCP SPT=34462 DPT=9100 SEQ=344899196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A24450000000001030307) Dec 5 04:30:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28503 DF PROTO=TCP SPT=40288 DPT=9105 SEQ=1371919009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A37450000000001030307) Dec 5 04:30:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:30:36 localhost podman[167566]: 2025-12-05 09:30:36.24831955 +0000 UTC m=+0.122825483 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 5 04:30:36 localhost podman[167566]: 2025-12-05 09:30:36.279690153 +0000 UTC m=+0.154196136 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Dec 5 04:30:36 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:30:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18436 DF PROTO=TCP SPT=60354 DPT=9102 SEQ=940763093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A52570000000001030307) Dec 5 04:30:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18437 DF PROTO=TCP SPT=60354 DPT=9102 SEQ=940763093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A56450000000001030307) Dec 5 04:30:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28504 DF PROTO=TCP SPT=40288 DPT=9105 SEQ=1371919009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A58460000000001030307) Dec 5 04:30:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47570 DF PROTO=TCP SPT=60372 DPT=9100 SEQ=1753264544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A5E360000000001030307) Dec 5 04:30:44 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 5 04:30:45 localhost rhsm-service[6599]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. 
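[Annotation] The ovn_metadata_agent lockutils trio that recurs about once a minute above (Acquiring / acquired :: waited 0.001s / released :: held 0.002s around ProcessMonitor._check_child_processes) is routine lock instrumentation: the monitor takes a named lock, checks its child processes, and reports wait and hold times. A rough plain-threading analog of that acquire/measure/release logging pattern, purely for illustration; this is not oslo.concurrency's actual implementation:

    import threading, time

    _locks = {}

    def synchronized(name):
        """Log wait time on acquire and hold time on release, like the
        lockutils DEBUG lines in this log (an analog, not oslo's code)."""
        lock = _locks.setdefault(name, threading.Lock())
        def deco(fn):
            def wrapper(*args, **kwargs):
                t0 = time.monotonic()
                with lock:
                    waited = time.monotonic() - t0
                    print(f'Lock "{name}" acquired :: waited {waited:.3f}s')
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        held = time.monotonic() - t1
                        print(f'Lock "{name}" released :: held {held:.3f}s')
            return wrapper
        return deco

    @synchronized("_check_child_processes")
    def check_child_processes():
        time.sleep(0.002)  # stand-in for the real child-process liveness check

    check_child_processes()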
Dec 5 04:30:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47572 DF PROTO=TCP SPT=60372 DPT=9100 SEQ=1753264544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A6A450000000001030307) Dec 5 04:30:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14508 DF PROTO=TCP SPT=38718 DPT=9101 SEQ=2335178084 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A76460000000001030307) Dec 5 04:30:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52482 DF PROTO=TCP SPT=43238 DPT=9101 SEQ=906056324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A82460000000001030307) Dec 5 04:30:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14510 DF PROTO=TCP SPT=38718 DPT=9101 SEQ=2335178084 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A8E050000000001030307) Dec 5 04:30:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:30:58 localhost podman[181922]: 2025-12-05 09:30:58.680606322 +0000 UTC m=+0.211796498 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 04:30:58 localhost podman[181922]: 2025-12-05 09:30:58.767738454 +0000 UTC m=+0.298928710 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true) Dec 5 04:30:58 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:30:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47574 DF PROTO=TCP SPT=60372 DPT=9100 SEQ=1753264544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4A9A450000000001030307) Dec 5 04:31:02 localhost systemd[1]: Stopping OpenSSH server daemon... Dec 5 04:31:02 localhost systemd[1]: sshd.service: Deactivated successfully. Dec 5 04:31:02 localhost systemd[1]: Stopped OpenSSH server daemon. Dec 5 04:31:02 localhost systemd[1]: sshd.service: Consumed 1.103s CPU time, read 32.0K from disk, written 0B to disk. Dec 5 04:31:02 localhost systemd[1]: Stopped target sshd-keygen.target. Dec 5 04:31:02 localhost systemd[1]: Stopping sshd-keygen.target... Dec 5 04:31:02 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 5 04:31:02 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 5 04:31:02 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 5 04:31:02 localhost systemd[1]: Reached target sshd-keygen.target. Dec 5 04:31:02 localhost systemd[1]: Starting OpenSSH server daemon... Dec 5 04:31:02 localhost sshd[182627]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:31:02 localhost systemd[1]: Started OpenSSH server daemon. 
Dec 5 04:31:02 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:02 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:02 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:02 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:02 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:02 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:02 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:03 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:03 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:03 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:03 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:03 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:03 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:03 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:03 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:03 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:31:03.875 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:31:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:31:03.876 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:31:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:31:03.878 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:31:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60353 DF PROTO=TCP SPT=52212 DPT=9105 SEQ=3158783793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AEC4AAC850000000001030307) Dec 5 04:31:04 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 5 04:31:04 localhost systemd[1]: Starting man-db-cache-update.service... Dec 5 04:31:04 localhost systemd[1]: Reloading. Dec 5 04:31:04 localhost systemd-sysv-generator[182901]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:31:04 localhost systemd-rc-local-generator[182896]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:31:04 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:04 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:31:04 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:04 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:04 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:04 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:04 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:04 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:04 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:05 localhost systemd[1]: Queuing reload/restart jobs for marked units… Dec 5 04:31:05 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 5 04:31:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42492 DF PROTO=TCP SPT=52040 DPT=9882 SEQ=1652298570 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4AB0450000000001030307) Dec 5 04:31:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. 
Dec 5 04:31:07 localhost podman[186455]: 2025-12-05 09:31:07.294326353 +0000 UTC m=+0.174590756 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 5 04:31:07 localhost podman[186455]: 2025-12-05 09:31:07.328810577 +0000 UTC m=+0.209074950 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:31:07 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:31:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40874 DF PROTO=TCP SPT=41878 DPT=9102 SEQ=4027144770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4AC7860000000001030307) Dec 5 04:31:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40875 DF PROTO=TCP SPT=41878 DPT=9102 SEQ=4027144770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4ACB850000000001030307) Dec 5 04:31:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23081 DF PROTO=TCP SPT=35036 DPT=9100 SEQ=3972689963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4AD3670000000001030307) Dec 5 04:31:17 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 5 04:31:17 localhost systemd[1]: Finished man-db-cache-update.service. Dec 5 04:31:17 localhost systemd[1]: man-db-cache-update.service: Consumed 15.260s CPU time. Dec 5 04:31:17 localhost systemd[1]: run-rf23d64839dd84d73a6a1ce40160b64c8.service: Deactivated successfully. Dec 5 04:31:17 localhost systemd[1]: run-rcaa59a66978c4480a3b28f2d56acaffe.service: Deactivated successfully. Dec 5 04:31:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23083 DF PROTO=TCP SPT=35036 DPT=9100 SEQ=3972689963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4ADF850000000001030307) Dec 5 04:31:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19053 DF PROTO=TCP SPT=39578 DPT=9101 SEQ=3526038609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4AEB450000000001030307) Dec 5 04:31:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40411 DF PROTO=TCP SPT=35716 DPT=9882 SEQ=33099040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4AF8450000000001030307) Dec 5 04:31:24 localhost python3.9[191667]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 5 04:31:24 localhost systemd[1]: Reloading. Dec 5 04:31:24 localhost systemd-rc-local-generator[191696]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:31:24 localhost systemd-sysv-generator[191700]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:31:24 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:31:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:24 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:24 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:24 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:25 localhost python3.9[191815]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 5 04:31:25 localhost systemd[1]: Reloading. Dec 5 04:31:25 localhost systemd-rc-local-generator[191839]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:31:25 localhost systemd-sysv-generator[191843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:31:25 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
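[Annotation] The recurring warning about /usr/lib/systemd/system/insights-client-boot.service:24 means that unit still sets the deprecated cgroup-v1 directive MemoryLimit= instead of MemoryMax=, and systemd repeats the warning on every daemon reload. A quick sketch to find all shipped units still using it; the scan path and .service-only glob are assumptions:

    from pathlib import Path

    # Flag unit files still using the deprecated MemoryLimit= directive
    # (MemoryMax= is the replacement systemd asks for in the warning above).
    for unit in Path("/usr/lib/systemd/system").glob("*.service"):
        try:
            for lineno, text in enumerate(unit.read_text().splitlines(), 1):
                if text.strip().startswith("MemoryLimit="):
                    print(f"{unit}:{lineno}: {text.strip()} -> use MemoryMax=")
        except OSError:
            pass  # unreadable unit; skip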
Dec 5 04:31:25 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:25 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:25 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:25 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:25 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:25 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:25 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19055 DF PROTO=TCP SPT=39578 DPT=9101 SEQ=3526038609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B03050000000001030307) Dec 5 04:31:26 localhost python3.9[191964]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 5 04:31:27 localhost systemd[1]: Reloading. Dec 5 04:31:27 localhost systemd-rc-local-generator[191991]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:31:27 localhost systemd-sysv-generator[191996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:31:27 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
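The recurring "Failed to parse service type, ignoring: notify-reload" warnings mean the packaged libvirt units declare Type=notify-reload, a service type this systemd release does not yet understand; systemd ignores the setting and falls back to a default type, so the units still start and the warning is cosmetic. A small sketch to list which installed units carry the unsupported type:

    from pathlib import Path

    # Sketch: report unit files that declare Type=notify-reload, which the
    # running systemd logs as "Failed to parse service type" and ignores.
    UNIT_DIR = Path("/usr/lib/systemd/system")

    for unit in sorted(UNIT_DIR.glob("virt*.service")):
        for line in unit.read_text().splitlines():
            if line.strip() == "Type=notify-reload":
                print(f"{unit.name}: uses Type=notify-reload")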
Dec 5 04:31:27 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:27 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:27 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:27 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:27 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:27 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:27 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:28 localhost python3.9[192113]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 5 04:31:28 localhost systemd[1]: Reloading. Dec 5 04:31:28 localhost systemd-rc-local-generator[192137]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:31:28 localhost systemd-sysv-generator[192145]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:31:29 localhost systemd[1]: tmp-crun.rnalUb.mount: Deactivated successfully. 
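The ansible-ansible.builtin.systemd entries in this stretch record the deployment stopping and masking the monolithic libvirtd service and its TCP/TLS sockets before the modular virt*d daemons are enabled below. A rough shell-level equivalent of those tasks (unit names taken from the log; the exact systemctl sequence the module performs internally is an assumption):

    import subprocess

    # Sketch: approximate what ansible.builtin.systemd does for these units
    # when called with enabled=False masked=True state=stopped.
    UNITS = ["libvirtd.service", "libvirtd-tcp.socket",
             "libvirtd-tls.socket", "virtproxyd-tcp.socket"]

    for unit in UNITS:
        for verb in ("stop", "disable", "mask"):
            subprocess.run(["systemctl", verb, unit], check=False)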
Dec 5 04:31:29 localhost podman[192247]: 2025-12-05 09:31:29.220740713 +0000 UTC m=+0.091728001 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:31:29 localhost podman[192247]: 2025-12-05 09:31:29.296853831 +0000 UTC m=+0.167841089 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 04:31:29 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:31:29 localhost python3.9[192273]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:29 localhost systemd[1]: Reloading. Dec 5 04:31:29 localhost systemd-sysv-generator[192319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:31:29 localhost systemd-rc-local-generator[192313]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:31:29 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:31:29 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:29 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:29 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:29 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:29 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:29 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:29 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23085 DF PROTO=TCP SPT=35036 DPT=9100 SEQ=3972689963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B10460000000001030307) Dec 5 04:31:30 localhost python3.9[192435]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:30 localhost systemd[1]: Reloading. Dec 5 04:31:30 localhost systemd-rc-local-generator[192499]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:31:30 localhost systemd-sysv-generator[192504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:31:30 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:30 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
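The insights-client-boot.service warning is a deprecation notice: MemoryLimit= is the legacy cgroup-v1 spelling and should become MemoryMax=. The usual fix is a drop-in override rather than editing the packaged unit; a sketch follows (the 512M value is a placeholder, not taken from this log):

    from pathlib import Path

    # Sketch: supersede a deprecated MemoryLimit= with MemoryMax= via a
    # drop-in, leaving the packaged unit file untouched.
    dropin = Path("/etc/systemd/system/insights-client-boot.service.d")
    dropin.mkdir(parents=True, exist_ok=True)
    (dropin / "memorymax.conf").write_text(
        "[Service]\n"
        "MemoryLimit=\n"      # clear the deprecated setting
        "MemoryMax=512M\n"    # placeholder value; copy the real limit
    )
    # followed by: systemctl daemon-reload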
Dec 5 04:31:30 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:30 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:30 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:30 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:30 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:30 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:31 localhost python3.9[192652]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:31 localhost systemd[1]: Reloading. Dec 5 04:31:31 localhost systemd-sysv-generator[192701]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:31:31 localhost systemd-rc-local-generator[192696]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:31:31 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:31 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:31 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:31:31 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:31 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:31 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:31 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:31 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:32 localhost python3.9[192819]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:33 localhost python3.9[192932]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:33 localhost systemd[1]: Reloading. Dec 5 04:31:33 localhost systemd-rc-local-generator[192961]: /etc/rc.d/rc.local is not marked executable, skipping. 
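Having masked libvirtd, the play enables the modular daemons one service at a time (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd); their matching .socket, -ro.socket, and -admin.socket units follow a little further down. A compact equivalent of that loop, with unit names taken from the log:

    import subprocess

    # Sketch: unmask and enable the modular libvirt services recorded in
    # this log; the socket units are enabled the same way afterwards.
    SERVICES = ["virtlogd", "virtnodedevd", "virtproxyd",
                "virtqemud", "virtsecretd"]

    for svc in SERVICES:
        subprocess.run(["systemctl", "unmask", f"{svc}.service"], check=False)
        subprocess.run(["systemctl", "enable", f"{svc}.service"], check=False)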
Dec 5 04:31:33 localhost systemd-sysv-generator[192965]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:31:33 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:33 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:33 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:33 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:31:33 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:33 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:33 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:33 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8576 DF PROTO=TCP SPT=45850 DPT=9105 SEQ=875386499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B21850000000001030307) Dec 5 04:31:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26817 DF PROTO=TCP SPT=41268 DPT=9882 SEQ=2038722676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B26450000000001030307) Dec 5 04:31:37 localhost python3.9[193080]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 5 04:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:31:37 localhost systemd[1]: Reloading. 
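Every ansible systemd task that changes enablement triggers a daemon-reload ("Reloading."), which re-runs the unit generators, so the same sysv-generator and parse warnings repeat after each task rather than indicating new problems. A quick sketch for tallying the repeats when triaging a log like this one (log path taken from argv):

    import re
    import sys
    from collections import Counter

    # Sketch: tally the per-unit parse warnings that systemd re-emits on
    # every daemon-reload in this log.
    counts = Counter()
    for line in open(sys.argv[1]):
        m = re.search(r"systemd\[1\]: (/usr/lib/systemd/system/[^:]+):\d+: ", line)
        if m:
            counts[m.group(1)] += 1
    for unit, n in counts.most_common():
        print(f"{n:4d}  {unit}")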
Dec 5 04:31:37 localhost podman[193082]: 2025-12-05 09:31:37.642562124 +0000 UTC m=+0.098627732 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:31:37 localhost podman[193082]: 2025-12-05 09:31:37.679080474 +0000 UTC m=+0.135146092 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 04:31:37 localhost systemd-rc-local-generator[193126]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:31:37 localhost systemd-sysv-generator[193131]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:37 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:31:37 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
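Each podman healthcheck cycle in this log is three events: systemd starts a transient "/usr/bin/podman healthcheck run <id>" unit, podman logs health_status and exec_died for the container, and the transient unit deactivates. The check itself is the container's configured test ('/openstack/healthcheck' in the config_data above). Triggering one cycle by hand, sketched with the container name from the log:

    import subprocess

    # Sketch: run one healthcheck cycle manually, as the transient
    # "podman healthcheck run <id>" units in this log do on a timer.
    NAME = "ovn_controller"  # container name from the log

    run = subprocess.run(["podman", "healthcheck", "run", NAME])
    # Exit status 0 -> healthy (matching health_status=healthy above);
    # non-zero -> unhealthy.
    print("healthy" if run.returncode == 0 else "unhealthy")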
Dec 5 04:31:38 localhost python3.9[193246]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:39 localhost python3.9[193359]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:40 localhost python3.9[193472]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9608 DF PROTO=TCP SPT=38082 DPT=9102 SEQ=786115157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B3CB60000000001030307) Dec 5 04:31:41 localhost python3.9[193585]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9609 DF PROTO=TCP SPT=38082 DPT=9102 SEQ=786115157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B40C50000000001030307) Dec 5 04:31:42 localhost python3.9[193698]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:43 localhost python3.9[193811]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20093 DF PROTO=TCP SPT=40728 DPT=9100 SEQ=2142035352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B48970000000001030307) Dec 5 04:31:45 localhost python3.9[193924]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:46 localhost python3.9[194037]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20095 DF PROTO=TCP SPT=40728 DPT=9100 SEQ=2142035352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B54850000000001030307) Dec 5 04:31:47 localhost python3.9[194150]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:48 localhost python3.9[194263]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket 
daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:49 localhost python3.9[194376]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:50 localhost python3.9[194489]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22946 DF PROTO=TCP SPT=44376 DPT=9101 SEQ=3922465184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B60850000000001030307) Dec 5 04:31:50 localhost python3.9[194602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:52 localhost python3.9[194715]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 5 04:31:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14513 DF PROTO=TCP SPT=38718 DPT=9101 SEQ=2335178084 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B6C450000000001030307) Dec 5 04:31:54 localhost python3.9[194828]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 5 04:31:55 localhost python3.9[194938]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 5 04:31:56 localhost python3.9[195048]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:31:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9612 DF PROTO=TCP SPT=38082 DPT=9102 SEQ=786115157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B78450000000001030307) Dec 5 04:31:56 localhost python3.9[195158]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private 
setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:31:57 localhost python3.9[195268]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:31:58 localhost python3.9[195378]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 5 04:31:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20097 DF PROTO=TCP SPT=40728 DPT=9100 SEQ=2142035352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B84460000000001030307) Dec 5 04:32:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:32:00 localhost podman[195488]: 2025-12-05 09:32:00.173281739 +0000 UTC m=+0.088500000 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:32:00 localhost podman[195488]: 2025-12-05 09:32:00.22066567 +0000 UTC m=+0.135883921 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:32:00 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:32:00 localhost python3.9[195489]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:00 localhost python3.9[195603]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927118.8000896-1643-16915000839073/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:01 localhost python3.9[195713]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:02 localhost python3.9[195803]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927121.1207883-1643-75277677103604/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:02 localhost python3.9[195913]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:03 localhost python3.9[196003]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927122.2822382-1643-218101798133732/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:32:03.876 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:32:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:32:03.876 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:32:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:32:03.878 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:32:03 localhost python3.9[196113]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=253 DF PROTO=TCP SPT=48114 DPT=9105 SEQ=2900346503 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4B96C50000000001030307) Dec 5 04:32:04 localhost python3.9[196203]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927123.465846-1643-106030836993361/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:05 localhost python3.9[196313]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 04:32:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 4751 writes, 21K keys, 4751 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4751 writes, 573 syncs, 8.29 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55692b2d82d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 
0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55692b2d82d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Dec 5 04:32:05 localhost python3.9[196403]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927124.639239-1643-138324518705794/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:06 localhost python3.9[196513]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 
get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:06 localhost python3.9[196603]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927125.8558629-1643-107328758966782/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:07 localhost python3.9[196713]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:32:08 localhost python3.9[196801]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927127.0943341-1643-76391031848265/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:08 localhost systemd[1]: tmp-crun.0nFvtL.mount: Deactivated successfully. Dec 5 04:32:08 localhost podman[196802]: 2025-12-05 09:32:08.207968994 +0000 UTC m=+0.092917145 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 5 
04:32:08 localhost podman[196802]: 2025-12-05 09:32:08.239634703 +0000 UTC m=+0.124582904 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Dec 5 04:32:08 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
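The ceph-osd "DUMPING STATS" entries just above and below are rocksdb's periodic multi-line reports flattened by the syslog transport: the syslog daemon escapes control characters as octal #NNN, so each embedded newline appears as #012 (and tabs would appear as #011). A small sketch to restore the original layout when reading them:

    import re

    # Sketch: undo syslog's control-character escaping (#NNN, octal) so
    # the flattened rocksdb stats blocks read as the multi-line tables
    # ceph-osd actually emitted.
    def unescape_syslog(msg: str) -> str:
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), msg)

    sample = "** DB Stats **#012Uptime(secs): 6000.2 total, 600.0 interval"
    print(unescape_syslog(sample))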
Dec 5 04:32:08 localhost python3.9[196930]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 04:32:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.2 total, 600.0 interval#012Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5843 writes, 832 syncs, 7.02 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c76342a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) 
Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c76342a2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 
0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Dec 5 04:32:09 localhost python3.9[197020]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764927128.2553854-1643-97248572402479/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:10 localhost python3.9[197130]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11205 DF PROTO=TCP SPT=58548 DPT=9102 SEQ=1985420972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4BB1E60000000001030307) Dec 5 04:32:11 localhost python3.9[197240]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11206 DF PROTO=TCP SPT=58548 DPT=9102 SEQ=1985420972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4BB6050000000001030307) Dec 5 04:32:12 localhost python3.9[197350]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=254 DF PROTO=TCP SPT=48114 DPT=9105 SEQ=2900346503 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4BB6460000000001030307) Dec 5 04:32:12 localhost python3.9[197460]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root 
path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:13 localhost python3.9[197570]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:14 localhost python3.9[197680]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37332 DF PROTO=TCP SPT=46744 DPT=9100 SEQ=1528185783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4BBDC70000000001030307) Dec 5 04:32:14 localhost python3.9[197790]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:15 localhost python3.9[197900]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:15 localhost python3.9[198010]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:16 localhost python3.9[198120]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:16 localhost python3.9[198230]: ansible-ansible.builtin.file 
Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37334 DF PROTO=TCP SPT=46744 DPT=9100 SEQ=1528185783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4BC9C50000000001030307) Dec 5 04:32:17 localhost python3.9[198340]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:18 localhost python3.9[198450]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:18 localhost python3.9[198560]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:19 localhost python3.9[198670]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9150 DF PROTO=TCP SPT=58438 DPT=9101 SEQ=3844620467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4BD5C50000000001030307) Dec 5 04:32:21 localhost python3.9[198780]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:21 localhost python3.9[198868]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927140.6461387-2306-247243424778087/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 
checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:22 localhost python3.9[198978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:23 localhost python3.9[199066]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927142.2383864-2306-206637941697689/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19058 DF PROTO=TCP SPT=39578 DPT=9101 SEQ=3526038609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4BE2460000000001030307) Dec 5 04:32:23 localhost python3.9[199176]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:24 localhost python3.9[199264]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927143.329289-2306-79565119569455/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:24 localhost python3.9[199374]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:25 localhost python3.9[199462]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927144.4670696-2306-87635886826815/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:26 localhost python3.9[199572]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b 
MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9152 DF PROTO=TCP SPT=58438 DPT=9101 SEQ=3844620467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4BED850000000001030307) Dec 5 04:32:26 localhost python3.9[199660]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927145.804781-2306-88795655676394/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:27 localhost python3.9[199770]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:28 localhost python3.9[199858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927146.9768143-2306-60929099285794/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:28 localhost python3.9[199968]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:29 localhost python3.9[200056]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927148.1985245-2306-163366577519402/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37336 DF PROTO=TCP SPT=46744 DPT=9100 SEQ=1528185783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4BFA460000000001030307) Dec 5 04:32:29 localhost python3.9[200166]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:30 localhost python3.9[200254]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927149.3544443-2306-248252169016355/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 
backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:32:31 localhost podman[200310]: 2025-12-05 09:32:31.197817929 +0000 UTC m=+0.084689553 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 5 04:32:31 localhost podman[200310]: 2025-12-05 09:32:31.30071626 +0000 UTC m=+0.187587884 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 5 04:32:31 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
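Note: the recurring kernel DROPPING: entries are emitted by a log-then-drop firewall rule on br-ex; the probes from 192.168.122.10 target metrics/healthcheck ports (9100-9105, 9882) that the ruleset does not admit. A hypothetical nft rule of the shape that would produce such messages (the real table and chain names live in /etc/nftables/iptables.nft and are not shown in this log):

    nft add rule inet filter input iifname "br-ex" tcp dport { 9100-9105, 9882 } log prefix "DROPPING: " counter drop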
Dec 5 04:32:31 localhost python3.9[200388]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:32 localhost python3.9[200476]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927150.6447237-2306-160892084793193/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:32 localhost python3.9[200622]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:33 localhost python3.9[200743]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927152.3403628-2306-124678598977616/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22963 DF PROTO=TCP SPT=54290 DPT=9105 SEQ=3342430987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4C0C050000000001030307) Dec 5 04:32:34 localhost python3.9[200871]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:34 localhost python3.9[200959]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927153.9126892-2306-62344641296113/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55586 DF PROTO=TCP SPT=54304 DPT=9882 SEQ=4099149276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4C10460000000001030307) Dec 5 04:32:35 localhost python3.9[201069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:36 localhost 
python3.9[201157]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927155.0079958-2306-150025265226327/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:36 localhost python3.9[201267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:37 localhost python3.9[201355]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927156.166261-2306-251897546324205/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:37 localhost python3.9[201465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:38 localhost python3.9[201553]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927157.274775-2306-178040744711294/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:38 localhost python3.9[201661]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:32:39 localhost systemd[1]: tmp-crun.h8xH9P.mount: Deactivated successfully. 
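Note: the ls -lRZ task above appears to check whether anything under /run/libvirt is still labeled with a container_*_t SELinux type (a leftover from containerized libvirt); rerun by hand it is simply:

    set -o pipefail
    ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
    # a non-zero grep exit means no container-labeled files remain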
Dec 5 04:32:39 localhost podman[201698]: 2025-12-05 09:32:39.197910025 +0000 UTC m=+0.082421603 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:32:39 localhost podman[201698]: 2025-12-05 09:32:39.230547985 +0000 UTC m=+0.115059503 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 5 04:32:39 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:32:39 localhost python3.9[201793]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Dec 5 04:32:40 localhost python3.9[201903]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:32:40 localhost systemd[1]: Reloading. Dec 5 04:32:40 localhost systemd-rc-local-generator[201926]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:32:40 localhost systemd-sysv-generator[201929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:32:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:32:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:41 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58420 DF PROTO=TCP SPT=60490 DPT=9102 SEQ=3367393122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4C27160000000001030307) Dec 5 04:32:41 localhost systemd[1]: Starting libvirt logging daemon socket... Dec 5 04:32:41 localhost systemd[1]: Listening on libvirt logging daemon socket. Dec 5 04:32:41 localhost systemd[1]: Starting libvirt logging daemon admin socket... Dec 5 04:32:41 localhost systemd[1]: Listening on libvirt logging daemon admin socket. Dec 5 04:32:41 localhost systemd[1]: Starting libvirt logging daemon... Dec 5 04:32:41 localhost systemd[1]: Started libvirt logging daemon. 
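Note: each virt*d socket unit gets a drop-in rendered from libvirt-socket.unit.j2; its payload is masked in the log (content=NOT_LOGGING_PARAMETER). The "Failed to parse service type, ignoring: notify-reload" warnings mean the packaged libvirt units declare Type=notify-reload, which this host's systemd predates (the type arrived in systemd 253), so the directive is ignored and the units fall back to plain notify behavior. A hypothetical drop-in of the kind installed here, with assumed values:

    mkdir -p /etc/systemd/system/virtlogd.socket.d
    cat > /etc/systemd/system/virtlogd.socket.d/override.conf <<'EOF'
    [Socket]
    # assumption: the rendered settings are not recorded in this log
    SocketMode=0666
    EOF
    systemctl daemon-reload && systemctl restart virtlogd.service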
Dec 5 04:32:42 localhost python3.9[202055]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:32:42 localhost systemd[1]: Reloading. Dec 5 04:32:42 localhost systemd-rc-local-generator[202081]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:32:42 localhost systemd-sysv-generator[202086]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:32:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:32:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58421 DF PROTO=TCP SPT=60490 DPT=9102 SEQ=3367393122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4C2B060000000001030307) Dec 5 04:32:42 localhost systemd[1]: Starting libvirt nodedev daemon socket... Dec 5 04:32:42 localhost systemd[1]: Listening on libvirt nodedev daemon socket. Dec 5 04:32:42 localhost systemd[1]: Starting libvirt nodedev daemon admin socket... Dec 5 04:32:42 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket... Dec 5 04:32:42 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket. Dec 5 04:32:42 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket. Dec 5 04:32:42 localhost systemd[1]: Started libvirt nodedev daemon. Dec 5 04:32:43 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Dec 5 04:32:43 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Dec 5 04:32:43 localhost python3.9[202232]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:32:43 localhost setroubleshoot[202206]: Deleting alert 3c8dc1e6-9e80-435a-812f-bbc12a97dc29, it is allowed in current policy Dec 5 04:32:43 localhost systemd[1]: Reloading. 
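Note: the ansible.posix.seboolean task a few entries back persistently enables the os_enable_vtpm boolean, which permits vTPM (swtpm) use by SELinux-confined libvirt guests; the equivalent manual commands are:

    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm    # expect: os_enable_vtpm --> on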
Dec 5 04:32:43 localhost systemd-rc-local-generator[202259]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:32:43 localhost systemd-sysv-generator[202263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:32:43 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:43 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:43 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:43 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:32:44 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:44 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:44 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:44 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:44 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service. Dec 5 04:32:44 localhost systemd[1]: Starting libvirt proxy daemon socket... Dec 5 04:32:44 localhost systemd[1]: Listening on libvirt proxy daemon socket. Dec 5 04:32:44 localhost systemd[1]: Starting libvirt proxy daemon admin socket... Dec 5 04:32:44 localhost systemd[1]: Starting libvirt proxy daemon read-only socket... Dec 5 04:32:44 localhost systemd[1]: Listening on libvirt proxy daemon admin socket. Dec 5 04:32:44 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket. Dec 5 04:32:44 localhost systemd[1]: Started libvirt proxy daemon. Dec 5 04:32:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28846 DF PROTO=TCP SPT=35644 DPT=9100 SEQ=2538408496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4C32F60000000001030307) Dec 5 04:32:44 localhost python3.9[202412]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:32:44 localhost systemd[1]: Reloading. Dec 5 04:32:45 localhost systemd-sysv-generator[202439]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:32:45 localhost systemd-rc-local-generator[202436]: /etc/rc.d/rc.local is not marked executable, skipping. 
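Note: the pattern above repeats for every modular libvirt daemon: daemon-reload, then the main, admin, and (where defined) read-only sockets come up before the service itself, so socket activation can queue client connections across restarts. The listening set can be inspected with:

    systemctl list-sockets 'virt*'
    systemctl status virtqemud.socket virtqemud-ro.socket virtqemud-admin.socket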
Dec 5 04:32:45 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:45 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:32:45 localhost setroubleshoot[202206]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l d04e38bc-c529-4160-bbee-3f243ce38ed3 Dec 5 04:32:45 localhost setroubleshoot[202206]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Dec 5 04:32:45 localhost setroubleshoot[202206]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l d04e38bc-c529-4160-bbee-3f243ce38ed3 Dec 5 04:32:45 localhost setroubleshoot[202206]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. 
Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Dec 5 04:32:45 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:45 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:45 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:45 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:45 localhost systemd[1]: Listening on libvirt locking daemon socket. Dec 5 04:32:45 localhost systemd[1]: Starting libvirt QEMU daemon socket... Dec 5 04:32:45 localhost systemd[1]: Listening on libvirt QEMU daemon socket. Dec 5 04:32:45 localhost systemd[1]: Starting libvirt QEMU daemon admin socket... Dec 5 04:32:45 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket... Dec 5 04:32:45 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket. Dec 5 04:32:45 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket. Dec 5 04:32:45 localhost systemd[1]: Started libvirt QEMU daemon. Dec 5 04:32:46 localhost python3.9[202595]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:32:46 localhost systemd[1]: Reloading. Dec 5 04:32:46 localhost systemd-sysv-generator[202629]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:32:46 localhost systemd-rc-local-generator[202626]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:32:46 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:46 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:46 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:46 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
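Note: decoded from the #012-escaped setroubleshoot entries above, the suggested triage for the virtlogd dac_read_search denial is, verbatim:

    sealert -l d04e38bc-c529-4160-bbee-3f243ce38ed3   # full SELinux message
    # plugin dac_override (91.4 confidence): turn on full auditing, recreate the AVC
    auditctl -w /etc/shadow -p w
    ausearch -m avc -ts recent
    # plugin catchall (9.59 confidence): allow the access for now via a local module
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp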
Dec 5 04:32:46 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:46 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:46 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:46 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:32:46 localhost systemd[1]: Starting libvirt secret daemon socket... Dec 5 04:32:46 localhost systemd[1]: Listening on libvirt secret daemon socket. Dec 5 04:32:46 localhost systemd[1]: Starting libvirt secret daemon admin socket... Dec 5 04:32:46 localhost systemd[1]: Starting libvirt secret daemon read-only socket... Dec 5 04:32:46 localhost systemd[1]: Listening on libvirt secret daemon admin socket. Dec 5 04:32:46 localhost systemd[1]: Listening on libvirt secret daemon read-only socket. Dec 5 04:32:46 localhost systemd[1]: Started libvirt secret daemon. Dec 5 04:32:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28848 DF PROTO=TCP SPT=35644 DPT=9100 SEQ=2538408496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4C3F060000000001030307) Dec 5 04:32:47 localhost python3.9[202777]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:48 localhost python3.9[202887]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 5 04:32:48 localhost python3.9[202997]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:32:49 localhost python3.9[203109]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 5 04:32:50 localhost python3.9[203217]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=42616 DF PROTO=TCP SPT=35462 DPT=9101 SEQ=1003624266 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4C4B050000000001030307) Dec 5 04:32:50 localhost python3.9[203303]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927169.977147-3170-144866998541249/.source.xml follow=False _original_basename=secret.xml.j2 checksum=70808ffe10e7f01d2f96ff948de5899db3cbf084 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:51 localhost python3.9[203413]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 79feddb1-4bfc-557f-83b9-0d57c9f66c1b#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:32:52 localhost python3.9[203533]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22951 DF PROTO=TCP SPT=44376 DPT=9101 SEQ=3922465184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4C56450000000001030307) Dec 5 04:32:55 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully. Dec 5 04:32:55 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. 
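Note: the Ceph/libvirt secret handoff above reduces to four steps: read the cluster fsid out of ceph.conf, render /tmp/secret.xml (mode 0600; contents masked in the log), replace the libvirt secret, and remove the XML. Condensed, using only values present in the log:

    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
    virsh secret-undefine 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
    virsh secret-define --file /tmp/secret.xml
    rm -f /tmp/secret.xml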
Dec 5 04:32:55 localhost python3.9[203870]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:56 localhost python3.9[203980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58424 DF PROTO=TCP SPT=60490 DPT=9102 SEQ=3367393122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4C62450000000001030307) Dec 5 04:32:56 localhost python3.9[204068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927175.9006288-3335-264268595048607/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:57 localhost python3.9[204178]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:58 localhost python3.9[204288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:58 localhost python3.9[204345]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:32:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28850 DF PROTO=TCP SPT=35644 DPT=9100 SEQ=2538408496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4C6E460000000001030307) Dec 5 04:32:59 localhost python3.9[204455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:32:59 localhost python3.9[204512]: 
ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.jschar20 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:00 localhost python3.9[204622]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:33:01 localhost python3.9[204679]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:33:01 localhost systemd[1]: tmp-crun.5mfouf.mount: Deactivated successfully. Dec 5 04:33:01 localhost podman[204790]: 2025-12-05 09:33:01.898938545 +0000 UTC m=+0.111224683 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller) Dec 5 04:33:01 localhost python3.9[204789]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:33:01 localhost podman[204790]: 2025-12-05 09:33:01.982736748 +0000 UTC m=+0.195022896 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, 
io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Dec 5 04:33:01 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:33:02 localhost python3[204923]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Dec 5 04:33:03 localhost python3.9[205033]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:33:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:33:03.877 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:33:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:33:03.877 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:33:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:33:03.878 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:33:04 localhost python3.9[205090]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24343 DF PROTO=TCP SPT=41210 DPT=9105 SEQ=2068616348 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4C81450000000001030307) Dec 5 04:33:05 localhost python3.9[205200]: ansible-ansible.legacy.stat Invoked with 
path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:33:05 localhost python3.9[205257]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:06 localhost python3.9[205367]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:33:06 localhost python3.9[205424]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:07 localhost python3.9[205534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:33:07 localhost python3.9[205591]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:08 localhost python3.9[205701]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:33:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. 
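The stat/copy pairs above lay the EDPM ruleset down as separate snippets under /etc/nftables (chains, flushes, rules, update-jumps, jumps), which the tasks that follow validate as one concatenation, load, and persist through /etc/sysconfig/nftables.conf. A condensed sketch of that sequence, using the exact commands and file ordering logged a few entries below:

#!/bin/sh
# Validate, apply, and persist the EDPM nftables snippets
# (same commands and ordering as the ansible tasks in this log).
set -eu
cd /etc/nftables

# 1. Syntax-check the full concatenation without touching the kernel.
cat edpm-chains.nft edpm-flushes.nft edpm-rules.nft \
    edpm-update-jumps.nft edpm-jumps.nft | nft -c -f -

# 2. Create the chains first, then stream flushes + rules + jumps,
#    so rules always land in chains that already exist.
nft -f edpm-chains.nft
cat edpm-flushes.nft edpm-rules.nft edpm-update-jumps.nft | nft -f -

# 3. Persist across reboots by including the snippets from the stock
#    nftables.service config (the blockinfile task below does this
#    idempotently and validates with `nft -c -f %s`; the plain append
#    here is a simplification).
cat >> /etc/sysconfig/nftables.conf <<'EOF'
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
EOF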
Dec 5 04:33:09 localhost podman[205791]: 2025-12-05 09:33:09.424565653 +0000 UTC m=+0.087177054 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent) Dec 5 04:33:09 localhost podman[205791]: 2025-12-05 09:33:09.459557285 +0000 UTC m=+0.122168666 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent) Dec 5 04:33:09 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:33:09 localhost python3.9[205792]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927188.2719023-3710-24905302799633/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:10 localhost python3.9[205919]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:10 localhost python3.9[206029]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:33:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25107 DF PROTO=TCP SPT=49000 DPT=9102 SEQ=2447328793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4C9C460000000001030307) Dec 5 04:33:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25108 DF PROTO=TCP SPT=49000 DPT=9102 SEQ=2447328793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4CA0450000000001030307) Dec 5 04:33:12 localhost python3.9[206142]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24344 DF PROTO=TCP SPT=41210 DPT=9105 SEQ=2068616348 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4CA2450000000001030307) Dec 5 04:33:13 localhost python3.9[206252]: 
ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:33:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45593 DF PROTO=TCP SPT=34480 DPT=9100 SEQ=2896501481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4CA8270000000001030307) Dec 5 04:33:14 localhost python3.9[206363]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:33:15 localhost python3.9[206475]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:33:16 localhost python3.9[206588]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:17 localhost python3.9[206698]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:33:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45595 DF PROTO=TCP SPT=34480 DPT=9100 SEQ=2896501481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4CB4450000000001030307) Dec 5 04:33:17 localhost python3.9[206786]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927196.7267473-3926-77848119353906/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:18 localhost python3.9[206896]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:33:18 localhost python3.9[206984]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927197.9415703-3971-38321758335063/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:19 localhost python3.9[207094]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:33:20 localhost python3.9[207182]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927199.1316583-4016-249685727091505/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=196 DF PROTO=TCP SPT=33370 DPT=9101 SEQ=4013125185 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4CC0050000000001030307) Dec 5 04:33:20 localhost python3.9[207292]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:33:20 localhost systemd[1]: Reloading. Dec 5 04:33:21 localhost systemd-rc-local-generator[207317]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:33:21 localhost systemd-sysv-generator[207321]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:33:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:33:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:21 localhost systemd[1]: Reached target edpm_libvirt.target. 
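The edpm_libvirt.target installed above is a plain systemd target used to group the modular libvirt daemons. Only its path, mode 0644, and the enable/daemon-reload/restart handling appear in the log, so the unit body in this sketch is an assumed minimal example, not the shipped file:

#!/bin/sh
# Install and activate a grouping target for the libvirt daemons.
# The [Unit] body below is hypothetical; the log only shows the file
# being copied with mode 0644, then enabled and restarted.
cat > /etc/systemd/system/edpm_libvirt.target <<'EOF'
[Unit]
Description=EDPM libvirt services
Wants=virtqemud.service virtsecretd.service virtnodedevd.service

[Install]
WantedBy=multi-user.target
EOF
chmod 0644 /etc/systemd/system/edpm_libvirt.target
systemctl daemon-reload
systemctl enable --now edpm_libvirt.target

Once active, systemd logs "Reached target edpm_libvirt.target", as seen above.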
Dec 5 04:33:22 localhost python3.9[207441]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 5 04:33:22 localhost systemd[1]: Reloading. Dec 5 04:33:22 localhost systemd-sysv-generator[207467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:33:22 localhost systemd-rc-local-generator[207464]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: Reloading. Dec 5 04:33:22 localhost systemd-sysv-generator[207509]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:33:22 localhost systemd-rc-local-generator[207506]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
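The repeated "Failed to parse service type, ignoring: notify-reload" lines are cosmetic: the installed libvirt units declare Type=notify-reload, which this systemd does not understand yet (the directive reportedly arrived in systemd v253; treat that version as an assumption to verify against systemd's NEWS file). An unknown Type= line is ignored, so the unit falls back to systemd's default service type. A quick check:

#!/bin/sh
# Confirm whether this systemd understands Type=notify-reload.
# v253 as the threshold is an assumption to verify upstream; older
# versions ignore the line and use the default service type.
ver=$(systemctl --version | awk 'NR==1 {print $2}')
grep -n '^Type=' /usr/lib/systemd/system/virtqemud.service
if [ "$ver" -ge 253 ]; then
    echo "systemd $ver: Type=notify-reload is supported"
else
    echo "systemd $ver: Type=notify-reload ignored (warning is cosmetic)"
fi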
Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:23 localhost systemd[1]: session-52.scope: Deactivated successfully. Dec 5 04:33:23 localhost systemd[1]: session-52.scope: Consumed 3min 40.332s CPU time. Dec 5 04:33:23 localhost systemd-logind[760]: Session 52 logged out. Waiting for processes to exit. Dec 5 04:33:23 localhost systemd-logind[760]: Removed session 52. Dec 5 04:33:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9155 DF PROTO=TCP SPT=58438 DPT=9101 SEQ=3844620467 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4CCC460000000001030307) Dec 5 04:33:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=198 DF PROTO=TCP SPT=33370 DPT=9101 SEQ=4013125185 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4CD7C60000000001030307) Dec 5 04:33:28 localhost sshd[207533]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:33:28 localhost systemd-logind[760]: New session 53 of user zuul. Dec 5 04:33:28 localhost systemd[1]: Started Session 53 of User zuul. Dec 5 04:33:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45597 DF PROTO=TCP SPT=34480 DPT=9100 SEQ=2896501481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4CE4460000000001030307) Dec 5 04:33:29 localhost python3.9[207644]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:33:31 localhost python3.9[207756]: ansible-ansible.builtin.service_facts Invoked Dec 5 04:33:31 localhost network[207773]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 5 04:33:31 localhost network[207774]: 'network-scripts' will be removed from distribution in near future. Dec 5 04:33:31 localhost network[207775]: It is advised to switch to 'NetworkManager' instead for network management. Dec 5 04:33:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. 
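The sshd line "ssh-rsa algorithm is disabled" reflects the system-wide crypto policy: the DEFAULT policy on RHEL 9 removes SHA-1 signatures, which covers the legacy ssh-rsa signature algorithm (RSA keys still work via rsa-sha2-256/512). A sketch for confirming what the running sshd actually accepts:

#!/bin/sh
# Show the active crypto policy and the algorithm lists sshd derives
# from it. `sshd -T` needs root and a valid config to dump settings.
update-crypto-policies --show
sshd -T 2>/dev/null | grep -iE 'hostkeyalgorithms|pubkeyacceptedalgorithms'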
Dec 5 04:33:32 localhost podman[207790]: 2025-12-05 09:33:32.130291626 +0000 UTC m=+0.070888863 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:33:32 localhost podman[207790]: 2025-12-05 09:33:32.21851244 +0000 UTC m=+0.159109677 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:33:32 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:33:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
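The recurring insights-client warning is the deprecated cgroup-v1 directive MemoryLimit=; systemd asks for MemoryMax=. Until the package ships a fixed unit, a drop-in can replace the directive. A sketch; the 1G value below is illustrative, mirror whatever limit the shipped unit sets:

#!/bin/sh
# Override the deprecated MemoryLimit= with MemoryMax= via a drop-in.
# The drop-in mechanism is standard systemd; the limit value is
# illustrative -- copy the value from the shipped unit instead.
mkdir -p /etc/systemd/system/insights-client-boot.service.d
cat > /etc/systemd/system/insights-client-boot.service.d/memorymax.conf <<'EOF'
[Service]
MemoryLimit=
MemoryMax=1G
EOF
systemctl daemon-reload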
Dec 5 04:33:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35990 DF PROTO=TCP SPT=56436 DPT=9105 SEQ=1364007836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4CF6450000000001030307) Dec 5 04:33:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4980 DF PROTO=TCP SPT=60700 DPT=9882 SEQ=1735154065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4CFA460000000001030307) Dec 5 04:33:35 localhost ovn_controller[153000]: 2025-12-05T09:33:35Z|00047|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:33:36 localhost ovn_controller[153000]: 2025-12-05T09:33:36Z|00048|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:33:37 localhost ovn_controller[153000]: 2025-12-05T09:33:37Z|00049|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:33:39 localhost python3.9[208120]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 5 04:33:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:33:40 localhost podman[208184]: 2025-12-05 09:33:40.204617566 +0000 UTC m=+0.100891558 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:33:40 localhost podman[208184]: 2025-12-05 09:33:40.213525989 +0000 UTC m=+0.109800021 container 
exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:33:40 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
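The health_status/exec_died pairs above come from transient systemd units that periodically run `podman healthcheck run <id>`; per the config_data, each container bind-mounts /var/lib/openstack/healthchecks/<name> at /openstack and the check is simply /openstack/healthcheck. The same check can be exercised by hand:

#!/bin/sh
# Manually run the healthcheck systemd keeps triggering and read back
# the recorded status. Container ID is taken from the log above.
CID=1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465
podman healthcheck run "$CID" && echo healthy
# The inspect key is .State.Health on current podman; older releases
# used .State.Healthcheck, hence the fallback.
podman inspect "$CID" --format '{{.State.Health.Status}}' \
    || podman inspect "$CID" --format '{{.State.Healthcheck.Status}}'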
Dec 5 04:33:40 localhost python3.9[208183]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:33:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7483 DF PROTO=TCP SPT=43522 DPT=9102 SEQ=961384155 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D11760000000001030307) Dec 5 04:33:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7484 DF PROTO=TCP SPT=43522 DPT=9102 SEQ=961384155 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D15850000000001030307) Dec 5 04:33:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5103 DF PROTO=TCP SPT=49364 DPT=9100 SEQ=3313727746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D1D580000000001030307) Dec 5 04:33:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5105 DF PROTO=TCP SPT=49364 DPT=9100 SEQ=3313727746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D29450000000001030307) Dec 5 04:33:49 localhost python3.9[208312]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:33:50 localhost python3.9[208424]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12765 DF PROTO=TCP SPT=56054 DPT=9101 SEQ=1654475273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D35450000000001030307) Dec 5 04:33:50 localhost python3.9[208534]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:33:51 localhost python3.9[208645]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi 
_uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:33:52 localhost python3.9[208756]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:33:53 localhost python3.9[208867]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:33:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8304 DF PROTO=TCP SPT=59664 DPT=9882 SEQ=1249391801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D42450000000001030307) Dec 5 04:33:54 localhost python3.9[208979]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:33:56 localhost python3.9[209089]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:33:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12767 DF PROTO=TCP SPT=56054 DPT=9101 SEQ=1654475273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D4D050000000001030307) Dec 5 04:33:57 localhost systemd[1]: Listening on Open-iSCSI iscsid Socket. Dec 5 04:33:58 localhost python3.9[209203]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:33:58 localhost systemd[1]: Reloading. Dec 5 04:33:58 localhost systemd-rc-local-generator[209228]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:33:58 localhost systemd-sysv-generator[209233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
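The block above adopts the iscsid configuration from the old puppet-generated bind-mount tree onto the host: stat the source, copy it into /etc/iscsi preserving modes, rename the source out of the way, then replay SELinux labels (first -nvr as a dry run, then -rF to enforce). The same sequence as plain shell, with all paths taken from the log:

#!/bin/sh
# Adopt iscsid config from the puppet-generated tree onto the host
# (mirrors the ansible tasks above).
set -eu
SRC=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi
if [ -d "$SRC" ]; then
    mkdir -p /etc/iscsi
    cp -a "$SRC"/. /etc/iscsi/          # mode=preserve, remote_src=True
    mv "$SRC" "$SRC.adopted"            # keep the original for rollback
fi
restorecon -nvr /etc/iscsi /var/lib/iscsi   # dry run: report label changes
restorecon -rF  /etc/iscsi /var/lib/iscsi   # enforce the file contexts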
Dec 5 04:33:58 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:58 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:58 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:58 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:33:58 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:58 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:58 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:58 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:33:58 localhost systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi). Dec 5 04:33:58 localhost systemd[1]: Starting Open-iSCSI... Dec 5 04:33:58 localhost iscsid[209244]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 5 04:33:58 localhost iscsid[209244]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 5 04:33:58 localhost iscsid[209244]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 5 04:33:58 localhost iscsid[209244]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 5 04:33:58 localhost iscsid[209244]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 5 04:33:58 localhost iscsid[209244]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf Dec 5 04:33:58 localhost systemd[1]: Started Open-iSCSI. Dec 5 04:33:58 localhost systemd[1]: Starting Logout of all iSCSI sessions on shutdown... Dec 5 04:33:58 localhost systemd[1]: Finished Logout of all iSCSI sessions on shutdown. Dec 5 04:33:59 localhost python3.9[209356]: ansible-ansible.builtin.service_facts Invoked Dec 5 04:33:59 localhost network[209373]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 5 04:33:59 localhost network[209374]: 'network-scripts' will be removed from distribution in near future. Dec 5 04:33:59 localhost network[209375]: It is advised to switch to 'NetworkManager' instead for network management.
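Note the apparent contradiction above: iscsi.service's skipped ConditionPathExists=! implies /etc/iscsi/initiatorname.iscsi exists, yet iscsid "can't open" it and iscsid.conf. The SELinux denials that follow (iscsid denied search on the iscsi directory) are the likely cause, pointing at a stale label on the adopted tree rather than a missing file. Two hedged checks:

#!/bin/sh
# Compare the actual labels on the adopted /etc/iscsi tree against
# what policy expects; a mismatch explains the "can't open" errors.
ls -Zd /etc/iscsi /etc/iscsi/iscsid.conf
matchpathcon /etc/iscsi /etc/iscsi/iscsid.conf   # expected contexts

# If initiatorname.iscsi really were missing, iscsi-iname (shipped in
# iscsi-initiator-utils, installed above) generates a valid IQN:
[ -s /etc/iscsi/initiatorname.iscsi ] || \
    echo "InitiatorName=$(/usr/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi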
Dec 5 04:33:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5107 DF PROTO=TCP SPT=49364 DPT=9100 SEQ=3313727746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D5A460000000001030307) Dec 5 04:34:00 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Dec 5 04:34:00 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Dec 5 04:34:01 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@2.service. Dec 5 04:34:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:34:01 localhost setroubleshoot[209392]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 6c12f870-71a8-490f-9314-b61ab0c0900c Dec 5 04:34:01 localhost setroubleshoot[209392]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Dec 5 04:34:01 localhost setroubleshoot[209392]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 6c12f870-71a8-490f-9314-b61ab0c0900c Dec 5 04:34:01 localhost setroubleshoot[209392]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Dec 5 04:34:01 localhost setroubleshoot[209392]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 6c12f870-71a8-490f-9314-b61ab0c0900c Dec 5 04:34:01 localhost setroubleshoot[209392]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Dec 5 04:34:01 localhost setroubleshoot[209392]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. 
For complete SELinux messages run: sealert -l 6c12f870-71a8-490f-9314-b61ab0c0900c Dec 5 04:34:01 localhost setroubleshoot[209392]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Dec 5 04:34:02 localhost setroubleshoot[209392]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 6c12f870-71a8-490f-9314-b61ab0c0900c Dec 5 04:34:02 localhost setroubleshoot[209392]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Dec 5 04:34:02 localhost setroubleshoot[209392]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 6c12f870-71a8-490f-9314-b61ab0c0900c Dec 5 04:34:02 localhost setroubleshoot[209392]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Dec 5 04:34:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:34:02 localhost systemd[1]: tmp-crun.UizDwS.mount: Deactivated successfully. 
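setroubleshoot's catchall suggestion above is the standard local-module workaround, reproduced here as one runnable sequence (same commands as in the alert, plus keeping the raw AVCs for review). If the denial is really the labeling problem noted earlier, fixing the file context with restorecon is the better fix; a local module only papers over it:

#!/bin/sh
# Generate and load a local SELinux policy module for the iscsid
# denial, exactly as the setroubleshoot alert suggests.
ausearch -c 'iscsid' --raw | tee /tmp/iscsid-avc.log | audit2allow -M my-iscsid
semodule -X 300 -i my-iscsid.pp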
Dec 5 04:34:02 localhost podman[209488]: 2025-12-05 09:34:02.358422803 +0000 UTC m=+0.092352352 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 04:34:02 localhost podman[209488]: 2025-12-05 09:34:02.438670552 +0000 UTC m=+0.172600141 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3) Dec 5 04:34:02 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
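The recurring kernel "DROPPING:" lines throughout this section record SYN packets from 192.168.122.10 to the Prometheus-style exporter ports (9100-9105, 9882) arriving on br-ex and being logged by the freshly loaded ruleset; the prefix names the log rule, not the chain. A sketch for locating that rule and, if the scrape ports should be reachable, the shape of an accept rule to add. The table/chain names in the commented rule are assumptions; read the real ones off the ruleset first:

#!/bin/sh
# Find which nftables rule emits the "DROPPING: " log prefix and show
# its chain, with rule handles for editing.
nft -a list ruleset | grep -n -B2 'DROPPING'

# Example accept for the exporter ports, assuming an inet-family table
# named "filter" with an "input" chain -- substitute the table/chain
# printed by the command above before running.
# nft insert rule inet filter input tcp dport { 9100-9105, 9882 } ct state new accept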
Dec 5 04:34:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:34:03.878 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 04:34:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:34:03.878 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 04:34:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:34:03.880 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 04:34:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34747 DF PROTO=TCP SPT=35384 DPT=9105 SEQ=2861644346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D6B850000000001030307)
Dec 5 04:34:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38868 DF PROTO=TCP SPT=54858 DPT=9882 SEQ=687556444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D70450000000001030307)
Dec 5 04:34:06 localhost python3.9[209649]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 5 04:34:07 localhost python3.9[209759]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 5 04:34:08 localhost python3.9[209873]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:34:09 localhost python3.9[209961]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927247.8657675-455-218454769476066/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:34:09 localhost python3.9[210071]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:34:10 localhost systemd[1]: Started
/usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:34:10 localhost podman[210182]: 2025-12-05 09:34:10.625607181 +0000 UTC m=+0.066609350 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent) Dec 5 04:34:10 localhost podman[210182]: 2025-12-05 09:34:10.635496265 +0000 UTC m=+0.076498434 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:34:10 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:34:10 localhost python3.9[210181]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:34:10 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 5 04:34:10 localhost systemd[1]: Stopped Load Kernel Modules. Dec 5 04:34:10 localhost systemd[1]: Stopping Load Kernel Modules... Dec 5 04:34:10 localhost systemd[1]: Starting Load Kernel Modules... Dec 5 04:34:10 localhost systemd-modules-load[210203]: Module 'msr' is built in Dec 5 04:34:10 localhost systemd[1]: Finished Load Kernel Modules. Dec 5 04:34:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52498 DF PROTO=TCP SPT=41756 DPT=9102 SEQ=3879930318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D86A70000000001030307) Dec 5 04:34:11 localhost python3.9[210313]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:34:12 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@2.service: Deactivated successfully. Dec 5 04:34:12 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. 
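The tasks above load dm-multipath now (community.general.modprobe) and persist it across reboots via /etc/modules-load.d, then restart systemd-modules-load. Done by hand, the equivalent is roughly (module name and paths from the log):

    modprobe dm-multipath
    printf 'dm-multipath\n' > /etc/modules-load.d/dm-multipath.conf
    systemctl restart systemd-modules-load.service
    lsmod | grep dm_multipath    # lsmod reports the name with an underscore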
Dec 5 04:34:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52499 DF PROTO=TCP SPT=41756 DPT=9102 SEQ=3879930318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D8AC60000000001030307) Dec 5 04:34:12 localhost python3.9[210423]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:34:13 localhost python3.9[210533]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:34:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58474 DF PROTO=TCP SPT=35706 DPT=9100 SEQ=3721108906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D92870000000001030307) Dec 5 04:34:14 localhost python3.9[210643]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:34:15 localhost python3.9[210731]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927254.250438-629-176055280276447/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:15 localhost python3.9[210841]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:34:16 localhost python3.9[210952]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58476 DF PROTO=TCP SPT=35706 DPT=9100 SEQ=3721108906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4D9E850000000001030307) Dec 5 04:34:17 localhost python3.9[211062]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:18 localhost python3.9[211172]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None 
validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:19 localhost python3.9[211282]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:19 localhost python3.9[211392]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9321 DF PROTO=TCP SPT=49978 DPT=9101 SEQ=1531763053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4DAA850000000001030307) Dec 5 04:34:20 localhost python3.9[211502]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:21 localhost python3.9[211612]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:21 localhost python3.9[211722]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:34:22 localhost python3.9[211834]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:23 localhost python3.9[211944]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:34:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=201 DF PROTO=TCP SPT=33370 DPT=9101 SEQ=4013125185 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4DB6460000000001030307) Dec 5 04:34:24 localhost python3.9[212054]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:34:24 localhost python3.9[212111]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:34:25 localhost python3.9[212221]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:34:26 localhost python3.9[212278]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:34:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9323 DF PROTO=TCP SPT=49978 DPT=9101 SEQ=1531763053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4DC2450000000001030307) Dec 5 04:34:26 localhost python3.9[212388]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:28 localhost python3.9[212498]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:34:28 localhost python3.9[212555]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:29 localhost python3.9[212665]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False 
get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:34:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58478 DF PROTO=TCP SPT=35706 DPT=9100 SEQ=3721108906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4DCE460000000001030307) Dec 5 04:34:29 localhost python3.9[212722]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:30 localhost python3.9[212832]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:34:30 localhost systemd[1]: Reloading. Dec 5 04:34:30 localhost systemd-rc-local-generator[212853]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:34:30 localhost systemd-sysv-generator[212856]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:34:30 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:30 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:30 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:30 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 5 04:34:30 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:30 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:30 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:30 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:31 localhost python3.9[212980]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:34:31 localhost python3.9[213037]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:32 localhost python3.9[213147]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:34:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. 
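The systemd task above installs the edpm-container-shutdown unit and its preset, reloads systemd, and starts the service; the generator warnings are routine output of each reload. A shell approximation of the same step:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service
    # /etc/systemd/system-preset/91-edpm-container-shutdown.preset supplies the
    # default that "systemctl preset" would apply to this unit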
Dec 5 04:34:33 localhost podman[213205]: 2025-12-05 09:34:33.008367081 +0000 UTC m=+0.091035911 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125) Dec 5 04:34:33 localhost podman[213205]: 2025-12-05 09:34:33.075650401 +0000 UTC m=+0.158319211 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 04:34:33 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
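Stepping back to the /etc/multipath.conf edits earlier in this run: the lineinfile and replace tasks created an empty blacklist stanza and pinned four defaults. A sketch of verifying the result (option names are taken from the logged tasks; the file's exact layout is not shown in the log):

    grep -nE '^[[:space:]]*(find_multipaths|recheck_wwid|skip_kpartx|user_friendly_names)' /etc/multipath.conf
    # expected values, per the playbook:
    #   find_multipaths yes
    #   recheck_wwid yes
    #   skip_kpartx yes
    #   user_friendly_names no
    grep -n '^blacklist' /etc/multipath.conf    # the emptied blacklist stanza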
Dec 5 04:34:33 localhost python3.9[213204]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:33 localhost python3.9[213339]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:34:33 localhost systemd[1]: Reloading. Dec 5 04:34:33 localhost systemd-sysv-generator[213367]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:34:33 localhost systemd-rc-local-generator[213364]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:34:34 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:34 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:34 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:34 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:34:34 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:34 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:34 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:34 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:34 localhost systemd[1]: Starting Create netns directory... Dec 5 04:34:34 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 5 04:34:34 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 5 04:34:34 localhost systemd[1]: Finished Create netns directory. 
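netns-placeholder follows the same install-preset-enable pattern, and its "Create netns directory" run plus the run-netns-placeholder.mount event suggest it pins a placeholder network namespace under /run/netns. A diagnostic sketch (the mount path is inferred from the unit name, not stated in the log):

    systemctl status netns-placeholder.service --no-pager
    ls /run/netns    # placeholder entry, if the unit left one behind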
Dec 5 04:34:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53373 DF PROTO=TCP SPT=36018 DPT=9105 SEQ=4197391221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4DE0C50000000001030307) Dec 5 04:34:35 localhost python3.9[213491]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:34:36 localhost python3.9[213601]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:34:36 localhost python3.9[213689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927275.7500534-1250-49303264512281/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 5 04:34:37 localhost python3.9[213850]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:34:38 localhost python3.9[213977]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:34:39 localhost python3.9[214079]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927278.1309614-1325-91191229067671/.source.json _original_basename=.y1463lra follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:39 localhost python3.9[214193]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. 
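The recurring kernel DROPPING: entries are SYNs from 192.168.122.10 to what look like metrics-exporter ports (9100-9102, 9105, 9882) being logged and dropped on br-ex. The firewall rule itself never appears in this log; a hypothetical nftables rule of the right shape, for illustration only (table and chain names are assumptions):

    nft add rule inet filter input iifname "br-ex" tcp dport '{ 9100-9105, 9882 }' log prefix '"DROPPING: "' drop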
Dec 5 04:34:41 localhost systemd[1]: tmp-crun.WLJ574.mount: Deactivated successfully. Dec 5 04:34:41 localhost podman[214392]: 2025-12-05 09:34:41.034018177 +0000 UTC m=+0.096839110 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:34:41 localhost podman[214392]: 2025-12-05 09:34:41.066624251 +0000 UTC m=+0.129445214 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:34:41 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:34:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35142 DF PROTO=TCP SPT=35960 DPT=9102 SEQ=836858159 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4DFBD60000000001030307) Dec 5 04:34:42 localhost python3.9[214519]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Dec 5 04:34:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35143 DF PROTO=TCP SPT=35960 DPT=9102 SEQ=836858159 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4DFFC50000000001030307) Dec 5 04:34:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53374 DF PROTO=TCP SPT=36018 DPT=9105 SEQ=4197391221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E00450000000001030307) Dec 5 04:34:42 localhost systemd[1]: virtnodedevd.service: Deactivated successfully. Dec 5 04:34:43 localhost python3.9[214630]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 5 04:34:43 localhost python3.9[214740]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 5 04:34:44 localhost systemd[1]: virtproxyd.service: Deactivated successfully. 
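container_config_data, container_config_hash, and podman_container_info above feed edpm_container_manage, which matches running containers to the generated startup configs through labels. A way to list those containers on the host (label values taken from the log):

    podman ps -a --filter label=managed_by=edpm_ansible --format '{{.Names}}  {{.Image}}'
    # the config a container was started from:
    podman inspect --format '{{index .Config.Labels "config_id"}}' ovn_metadata_agent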
Dec 5 04:34:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10943 DF PROTO=TCP SPT=50074 DPT=9100 SEQ=3572747658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E07B70000000001030307) Dec 5 04:34:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10945 DF PROTO=TCP SPT=50074 DPT=9100 SEQ=3572747658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E13C50000000001030307) Dec 5 04:34:48 localhost python3[214878]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 5 04:34:50 localhost podman[214892]: 2025-12-05 09:34:48.795869576 +0000 UTC m=+0.056135608 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Dec 5 04:34:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50782 DF PROTO=TCP SPT=47934 DPT=9101 SEQ=3284089656 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E1FC60000000001030307) Dec 5 04:34:50 localhost podman[214942]: Dec 5 04:34:50 localhost podman[214942]: 2025-12-05 09:34:50.574805379 +0000 UTC m=+0.083562972 container create 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 5 04:34:50 localhost podman[214942]: 2025-12-05 09:34:50.537575714 +0000 UTC m=+0.046333317 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Dec 5 04:34:50 localhost python3[214878]: 
ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Dec 5 04:34:51 localhost python3.9[215089]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:34:52 localhost python3.9[215201]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:52 localhost python3.9[215256]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:34:53 localhost python3.9[215365]: ansible-copy Invoked with 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764927292.9494693-1589-208110726934496/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:34:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12770 DF PROTO=TCP SPT=56054 DPT=9101 SEQ=1654475273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E2C450000000001030307) Dec 5 04:34:54 localhost python3.9[215420]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:34:54 localhost systemd[1]: Reloading. Dec 5 04:34:54 localhost systemd-rc-local-generator[215443]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:34:54 localhost systemd-sysv-generator[215446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:34:54 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:54 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:54 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:54 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:34:54 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:54 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:54 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:54 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:54 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Dec 5 04:34:55 localhost python3.9[215511]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:34:56 localhost systemd[1]: Reloading. Dec 5 04:34:56 localhost systemd-rc-local-generator[215541]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:34:56 localhost systemd-sysv-generator[215545]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
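Each of these reloads replays the same generator warnings, and the next entries repeat the virt* "Failed to parse service type, ignoring: notify-reload" lines: the libvirt units declare Type=notify-reload, which systemd only understands from version 253 onward, so this host's systemd ignores the setting and the units still run. A quick check:

    systemctl --version | head -n1    # anything below 253 lacks Type=notify-reload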
Dec 5 04:34:56 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:56 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:56 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:56 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:34:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:56 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:56 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:34:56 localhost systemd[1]: Starting multipathd container... Dec 5 04:34:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50784 DF PROTO=TCP SPT=47934 DPT=9101 SEQ=3284089656 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E37860000000001030307) Dec 5 04:34:56 localhost systemd[1]: tmp-crun.Hkda3c.mount: Deactivated successfully. Dec 5 04:34:56 localhost systemd[1]: Started libcrun container. Dec 5 04:34:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063aa84c90e19261efcd79a83ca39435b018733231b35f5b2ac8d7439cb621c3/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 5 04:34:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063aa84c90e19261efcd79a83ca39435b018733231b35f5b2ac8d7439cb621c3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 5 04:34:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
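When the multipathd container starts in the next entries, kolla_set_configs reads /var/lib/kolla/config_files/config.json, the bind mount of the multipathd.json written earlier. The file's contents are never printed in this log; judging from the command the container ends up executing, a minimal hypothetical shape is:

    cat /var/lib/kolla/config_files/multipathd.json
    # hypothetical minimal content; only the command is confirmed by the log:
    # {"command": "/usr/sbin/multipathd -d"}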
Dec 5 04:34:56 localhost podman[215553]: 2025-12-05 09:34:56.611159209 +0000 UTC m=+0.147237011 container init 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 5 04:34:56 localhost multipathd[215565]: + sudo -E kolla_set_configs Dec 5 04:34:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
Dec 5 04:34:56 localhost podman[215553]: 2025-12-05 09:34:56.655754251 +0000 UTC m=+0.191832043 container start 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec 5 04:34:56 localhost podman[215553]: multipathd
Dec 5 04:34:56 localhost systemd[1]: Started multipathd container.
Dec 5 04:34:56 localhost multipathd[215565]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 5 04:34:56 localhost multipathd[215565]: INFO:__main__:Validating config file
Dec 5 04:34:56 localhost multipathd[215565]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 5 04:34:56 localhost multipathd[215565]: INFO:__main__:Writing out command to execute
Dec 5 04:34:56 localhost multipathd[215565]: ++ cat /run_command
Dec 5 04:34:56 localhost multipathd[215565]: + CMD='/usr/sbin/multipathd -d'
Dec 5 04:34:56 localhost multipathd[215565]: + ARGS=
Dec 5 04:34:56 localhost multipathd[215565]: + sudo kolla_copy_cacerts
Dec 5 04:34:56 localhost multipathd[215565]: + [[ ! -n '' ]]
Dec 5 04:34:56 localhost multipathd[215565]: + . kolla_extend_start
Dec 5 04:34:56 localhost multipathd[215565]: Running command: '/usr/sbin/multipathd -d'
Dec 5 04:34:56 localhost multipathd[215565]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 5 04:34:56 localhost multipathd[215565]: + umask 0022
Dec 5 04:34:56 localhost multipathd[215565]: + exec /usr/sbin/multipathd -d
Dec 5 04:34:56 localhost multipathd[215565]: 10411.916996 | --------start up--------
Dec 5 04:34:56 localhost multipathd[215565]: 10411.917010 | read /etc/multipath.conf
Dec 5 04:34:56 localhost multipathd[215565]: 10411.921032 | path checkers start up
Dec 5 04:34:56 localhost podman[215574]: 2025-12-05 09:34:56.752399204 +0000 UTC m=+0.090882566 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 5 04:34:56 localhost podman[215574]: 2025-12-05 09:34:56.760298068 +0000 UTC m=+0.098781390 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 04:34:56 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 04:34:57 localhost python3.9[215708]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 5 04:34:58 localhost python3.9[215820]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:34:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10947 DF PROTO=TCP SPT=50074 DPT=9100 SEQ=3572747658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E44450000000001030307)
Dec 5 04:34:59 localhost python3.9[215943]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 5 04:34:59 localhost systemd[1]: Stopping multipathd container...
Dec 5 04:34:59 localhost multipathd[215565]: 10415.070423 | exit (signal)
Dec 5 04:34:59 localhost multipathd[215565]: 10415.071385 | --------shut down-------
Dec 5 04:34:59 localhost systemd[1]: libpod-8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.scope: Deactivated successfully.
Dec 5 04:34:59 localhost podman[215947]: 2025-12-05 09:34:59.926963296 +0000 UTC m=+0.096755529 container died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 5 04:34:59 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.timer: Deactivated successfully.
Dec 5 04:34:59 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173-userdata-shm.mount: Deactivated successfully.
Dec 5 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-063aa84c90e19261efcd79a83ca39435b018733231b35f5b2ac8d7439cb621c3-merged.mount: Deactivated successfully.
Dec 5 04:35:00 localhost podman[215947]: 2025-12-05 09:35:00.126887472 +0000 UTC m=+0.296679705 container cleanup 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 5 04:35:00 localhost podman[215947]: multipathd
Dec 5 04:35:00 localhost podman[215974]: 2025-12-05 09:35:00.222796284 +0000 UTC m=+0.057012921 container cleanup 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Dec 5 04:35:00 localhost podman[215974]: multipathd
Dec 5 04:35:00 localhost systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 5 04:35:00 localhost systemd[1]: Stopped multipathd container.
Dec 5 04:35:00 localhost systemd[1]: Starting multipathd container...
Dec 5 04:35:00 localhost systemd[1]: Started libcrun container.
Dec 5 04:35:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063aa84c90e19261efcd79a83ca39435b018733231b35f5b2ac8d7439cb621c3/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 5 04:35:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/063aa84c90e19261efcd79a83ca39435b018733231b35f5b2ac8d7439cb621c3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 5 04:35:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:35:00 localhost podman[215986]: 2025-12-05 09:35:00.383331583 +0000 UTC m=+0.129800230 container init 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 04:35:00 localhost multipathd[216000]: + sudo -E kolla_set_configs
Dec 5 04:35:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:35:00 localhost podman[215986]: 2025-12-05 09:35:00.420973656 +0000 UTC m=+0.167442303 container start 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 5 04:35:00 localhost podman[215986]: multipathd
Dec 5 04:35:00 localhost systemd[1]: Started multipathd container.
Dec 5 04:35:00 localhost multipathd[216000]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 5 04:35:00 localhost multipathd[216000]: INFO:__main__:Validating config file
Dec 5 04:35:00 localhost multipathd[216000]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 5 04:35:00 localhost multipathd[216000]: INFO:__main__:Writing out command to execute
Dec 5 04:35:00 localhost multipathd[216000]: ++ cat /run_command
Dec 5 04:35:00 localhost multipathd[216000]: + CMD='/usr/sbin/multipathd -d'
Dec 5 04:35:00 localhost multipathd[216000]: + ARGS=
Dec 5 04:35:00 localhost multipathd[216000]: + sudo kolla_copy_cacerts
Dec 5 04:35:00 localhost multipathd[216000]: Running command: '/usr/sbin/multipathd -d'
Dec 5 04:35:00 localhost multipathd[216000]: + [[ ! -n '' ]]
Dec 5 04:35:00 localhost multipathd[216000]: + . kolla_extend_start
Dec 5 04:35:00 localhost multipathd[216000]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 5 04:35:00 localhost multipathd[216000]: + umask 0022
Dec 5 04:35:00 localhost multipathd[216000]: + exec /usr/sbin/multipathd -d
Dec 5 04:35:00 localhost multipathd[216000]: 10415.679214 | --------start up--------
Dec 5 04:35:00 localhost multipathd[216000]: 10415.679224 | read /etc/multipath.conf
Dec 5 04:35:00 localhost multipathd[216000]: 10415.703209 | path checkers start up
Dec 5 04:35:00 localhost podman[216008]: 2025-12-05 09:35:00.537772913 +0000 UTC m=+0.109011387 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 5 04:35:00 localhost podman[216008]: 2025-12-05 09:35:00.543736798 +0000 UTC m=+0.114975312 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 5 04:35:00 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 04:35:01 localhost python3.9[216147]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:35:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 04:35:03 localhost python3.9[216257]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 5 04:35:03 localhost podman[216258]: 2025-12-05 09:35:03.195403733 +0000 UTC m=+0.084326376 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 04:35:03 localhost podman[216258]: 2025-12-05 09:35:03.258824102 +0000 UTC m=+0.147746755 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 5 04:35:03 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 04:35:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:35:03.879 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:35:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:35:03.880 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:35:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:35:03.881 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:35:03 localhost python3.9[216392]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 5 04:35:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36718 DF PROTO=TCP SPT=34922 DPT=9105 SEQ=1888687664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E56050000000001030307)
Dec 5 04:35:04 localhost python3.9[216510]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:35:05 localhost python3.9[216598]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927304.2264528-1829-46850685057389/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:35:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51085 DF PROTO=TCP SPT=40194 DPT=9882 SEQ=2298738106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E5A450000000001030307)
Dec 5 04:35:06 localhost python3.9[216708]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:35:06 localhost python3.9[216818]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 5 04:35:06 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 5 04:35:06 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 5 04:35:06 localhost systemd[1]: Stopping Load Kernel Modules...
Dec 5 04:35:06 localhost systemd[1]: Starting Load Kernel Modules...
Dec 5 04:35:06 localhost systemd-modules-load[216822]: Module 'msr' is built in
Dec 5 04:35:06 localhost systemd[1]: Finished Load Kernel Modules.
Dec 5 04:35:07 localhost python3.9[216932]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 5 04:35:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:35:11 localhost systemd[1]: tmp-crun.wO2iTB.mount: Deactivated successfully.
Dec 5 04:35:11 localhost podman[216937]: 2025-12-05 09:35:11.218096629 +0000 UTC m=+0.107435670 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 5 04:35:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32250 DF PROTO=TCP SPT=38914 DPT=9102 SEQ=3062971500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E71060000000001030307)
Dec 5 04:35:11 localhost podman[216937]: 2025-12-05 09:35:11.249734105 +0000 UTC m=+0.139073146 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 5 04:35:11 localhost systemd[1]: Reloading.
Dec 5 04:35:11 localhost systemd-rc-local-generator[216982]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:35:11 localhost systemd-sysv-generator[216986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 04:35:11 localhost systemd[1]: Reloading.
Dec 5 04:35:11 localhost systemd-sysv-generator[217024]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:35:11 localhost systemd-rc-local-generator[217019]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:12 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 5 04:35:12 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 5 04:35:12 localhost lvm[217071]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 5 04:35:12 localhost lvm[217071]: VG ceph_vg0 finished
Dec 5 04:35:12 localhost lvm[217070]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 5 04:35:12 localhost lvm[217070]: VG ceph_vg1 finished
Dec 5 04:35:12 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 5 04:35:12 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 5 04:35:12 localhost systemd[1]: Reloading.
Dec 5 04:35:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32251 DF PROTO=TCP SPT=38914 DPT=9102 SEQ=3062971500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E75050000000001030307)
Dec 5 04:35:12 localhost systemd-sysv-generator[217125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:35:12 localhost systemd-rc-local-generator[217122]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:35:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:35:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:12 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 5 04:35:13 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 5 04:35:13 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 5 04:35:13 localhost systemd[1]: man-db-cache-update.service: Consumed 1.332s CPU time.
Dec 5 04:35:13 localhost systemd[1]: run-r49e2edc0a6b74d49b99e68adcc6f834d.service: Deactivated successfully.
Dec 5 04:35:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65444 DF PROTO=TCP SPT=39306 DPT=9100 SEQ=809065128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E7CE70000000001030307)
Dec 5 04:35:14 localhost python3.9[218367]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 04:35:16 localhost python3.9[218481]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:35:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65446 DF PROTO=TCP SPT=39306 DPT=9100 SEQ=809065128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E89050000000001030307)
Dec 5 04:35:17 localhost python3.9[218591]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 5 04:35:17 localhost systemd[1]: Reloading.
Dec 5 04:35:17 localhost systemd-rc-local-generator[218614]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:35:17 localhost systemd-sysv-generator[218620]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:35:17 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:17 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:17 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:17 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:35:17 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:17 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:17 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:17 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:35:18 localhost python3.9[218735]: ansible-ansible.builtin.service_facts Invoked
Dec 5 04:35:18 localhost network[218752]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 5 04:35:18 localhost network[218753]: 'network-scripts' will be removed from distribution in near future.
Dec 5 04:35:18 localhost network[218754]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 5 04:35:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:35:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36251 DF PROTO=TCP SPT=58556 DPT=9101 SEQ=1389279657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4E94C60000000001030307)
Dec 5 04:35:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9326 DF PROTO=TCP SPT=49978 DPT=9101 SEQ=1531763053 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4EA0450000000001030307)
Dec 5 04:35:23 localhost python3.9[218989]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:35:24 localhost python3.9[219100]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:35:25 localhost python3.9[219211]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:35:26 localhost python3.9[219322]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:35:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32254 DF PROTO=TCP SPT=38914 DPT=9102 SEQ=3062971500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4EAC460000000001030307)
Dec 5 04:35:28 localhost python3.9[219433]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:35:29 localhost python3.9[219544]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:35:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65448 DF PROTO=TCP SPT=39306 DPT=9100 SEQ=809065128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4EB8450000000001030307)
Dec 5 04:35:30 localhost python3.9[219655]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:35:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:35:30 localhost systemd[1]: tmp-crun.TNj5go.mount: Deactivated successfully.
Dec 5 04:35:30 localhost podman[219657]: 2025-12-05 09:35:30.954527399 +0000 UTC m=+0.077475624 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:35:30 localhost podman[219657]: 2025-12-05 09:35:30.965101016 +0000 UTC m=+0.088049231 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 04:35:30 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:35:31 localhost python3.9[219785]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:35:33 localhost python3.9[219896]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:35:34 localhost podman[219968]: 2025-12-05 09:35:34.208272562 +0000 UTC m=+0.086327267 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller) Dec 5 04:35:34 localhost podman[219968]: 2025-12-05 09:35:34.275950003 +0000 UTC m=+0.154004688 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible) Dec 5 04:35:34 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:35:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21545 DF PROTO=TCP SPT=43492 DPT=9105 SEQ=2858305829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4ECB410000000001030307) Dec 5 04:35:34 localhost python3.9[220031]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:35 localhost python3.9[220141]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:35 localhost python3.9[220251]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:36 localhost python3.9[220361]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:37 localhost python3.9[220471]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:37 localhost python3.9[220581]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:38 localhost python3.9[220691]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:39 localhost python3.9[220801]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:39 localhost python3.9[220947]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:40 localhost python3.9[221114]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7704 DF PROTO=TCP SPT=54050 DPT=9102 SEQ=349467521 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4EE6370000000001030307) Dec 5 04:35:41 localhost python3.9[221273]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:35:41 localhost systemd[1]: tmp-crun.ZuIwga.mount: Deactivated successfully. 
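The run of ansible.builtin.file tasks above and just below this point is the adoption cleanup deleting the legacy tripleo_nova_* unit files with state=absent, one task per service, first under /usr/lib/systemd/system and then under /etc/systemd/system (the remainder of the /etc pass follows below). A minimal Python sketch of the same pass, using the service names and directories from the log (an illustration, not the actual edpm_ansible role):

    #!/usr/bin/env python3
    # Sketch of the unit-file cleanup performed task-by-task in the log above.
    import subprocess
    from pathlib import Path

    SERVICES = [
        "tripleo_nova_compute", "tripleo_nova_migration_target",
        "tripleo_nova_api_cron", "tripleo_nova_api",
        "tripleo_nova_conductor", "tripleo_nova_metadata",
        "tripleo_nova_scheduler", "tripleo_nova_vnc_proxy",
    ]
    UNIT_DIRS = [Path("/usr/lib/systemd/system"), Path("/etc/systemd/system")]

    for unit_dir in UNIT_DIRS:
        for svc in SERVICES:
            # state=absent semantics: deleting a missing file is not an error.
            (unit_dir / f"{svc}.service").unlink(missing_ok=True)

    # Mirror the daemon_reload=True task that follows the removals.
    subprocess.run(["systemctl", "daemon-reload"], check=True)

Once the files are gone, the play reloads systemd and runs systemctl reset-failed per unit (both visible further down) so no stale failed state survives the removal.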
Dec 5 04:35:41 localhost podman[221384]: 2025-12-05 09:35:41.900448668 +0000 UTC m=+0.095065247 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Dec 5 04:35:41 localhost podman[221384]: 2025-12-05 09:35:41.910733536 +0000 UTC m=+0.105350125 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:35:41 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:35:42 localhost python3.9[221383]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7705 DF PROTO=TCP SPT=54050 DPT=9102 SEQ=349467521 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4EEA460000000001030307) Dec 5 04:35:42 localhost python3.9[221512]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21546 DF PROTO=TCP SPT=43492 DPT=9105 SEQ=2858305829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4EEC460000000001030307) Dec 5 04:35:43 localhost python3.9[221622]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:43 localhost python3.9[221732]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:35:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38492 DF PROTO=TCP SPT=53198 DPT=9100 SEQ=69991581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4EF2170000000001030307) Dec 5 04:35:44 localhost python3.9[221842]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask 
certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:35:45 localhost python3.9[221952]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 5 04:35:46 localhost python3.9[222062]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:35:46 localhost systemd[1]: Reloading. Dec 5 04:35:46 localhost systemd-sysv-generator[222090]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:35:46 localhost systemd-rc-local-generator[222085]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:35:46 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:35:46 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:35:46 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:35:46 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:35:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
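The certmonger task above logs its _raw_params on one line because the syslog pipeline escapes control characters as #NNN octal sequences; #012 is a newline. Decoding those escapes recovers the original shell script, e.g. with a small helper like this (a sketch, assuming only octal #NNN escapes occur in the line):

    import re

    RAW = ("if systemctl is-active certmonger.service; then#012"
           " systemctl disable --now certmonger.service#012"
           " test -f /etc/systemd/system/certmonger.service"
           " || systemctl mask certmonger.service#012"
           "fi#012")

    def unescape(text: str) -> str:
        # "#NNN" is an octal control-character escape; #012 == "\n".
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), text)

    print(unescape(RAW))

Printed out, the task is a guarded disable: only if certmonger.service is active does it get disabled and stopped, and it is masked only when no local unit file overrides it.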
Dec 5 04:35:46 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:35:46 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:35:46 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:35:46 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:35:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38494 DF PROTO=TCP SPT=53198 DPT=9100 SEQ=69991581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4EFE050000000001030307) Dec 5 04:35:47 localhost python3.9[222207]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:35:47 localhost python3.9[222318]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:35:48 localhost python3.9[222429]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:35:49 localhost python3.9[222540]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:35:49 localhost python3.9[222651]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:35:50 localhost python3.9[222762]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:35:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32874 DF PROTO=TCP SPT=34826 DPT=9101 SEQ=4117106164 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F0A050000000001030307) Dec 5 04:35:50 localhost python3.9[222873]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None 
removes=None stdin=None Dec 5 04:35:51 localhost python3.9[222984]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:35:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50787 DF PROTO=TCP SPT=47934 DPT=9101 SEQ=3284089656 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F16450000000001030307) Dec 5 04:35:55 localhost python3.9[223095]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:35:55 localhost python3.9[223205]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:35:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32876 DF PROTO=TCP SPT=34826 DPT=9101 SEQ=4117106164 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F21C50000000001030307) Dec 5 04:35:56 localhost python3.9[223315]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:35:57 localhost python3.9[223425]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:35:58 localhost python3.9[223535]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:35:59 localhost python3.9[223645]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:35:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38496 DF PROTO=TCP SPT=53198 DPT=9100 SEQ=69991581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F2E460000000001030307) Dec 5 04:35:59 localhost python3.9[223755]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:36:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:36:01 localhost podman[223827]: 2025-12-05 09:36:01.208651801 +0000 UTC m=+0.089007560 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 04:36:01 localhost podman[223827]: 2025-12-05 09:36:01.221840299 +0000 UTC m=+0.102196038 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 04:36:01 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:36:01 localhost python3.9[223882]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 5 04:36:02 localhost python3.9[223992]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 5 04:36:02 localhost python3.9[224102]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 5 04:36:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:36:03.881 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:36:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:36:03.881 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:36:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:36:03.883 158820 DEBUG 
oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:36:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49596 DF PROTO=TCP SPT=51566 DPT=9105 SEQ=1043765986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F40460000000001030307) Dec 5 04:36:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:36:05 localhost systemd[1]: tmp-crun.dRKRqE.mount: Deactivated successfully. Dec 5 04:36:05 localhost podman[224120]: 2025-12-05 09:36:05.207504706 +0000 UTC m=+0.087030388 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible) Dec 5 04:36:05 localhost podman[224120]: 2025-12-05 09:36:05.249654572 +0000 UTC m=+0.129180204 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true) Dec 5 04:36:05 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:36:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15172 DF PROTO=TCP SPT=47706 DPT=9882 SEQ=347620908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F44470000000001030307) Dec 5 04:36:08 localhost python3.9[224238]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Dec 5 04:36:09 localhost python3.9[224349]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Dec 5 04:36:11 localhost python3.9[224465]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546419.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Dec 5 04:36:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54123 DF PROTO=TCP SPT=33740 DPT=9102 SEQ=821836382 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F5B660000000001030307) Dec 5 04:36:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. 
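The recurring kernel DROPPING: entries throughout this section come from a packet-logging firewall rule on br-ex; each is a retried TCP SYN from 192.168.122.10 to a port in the 9100-9105/9882 range on this host, so those connection attempts are being dropped and retransmitted the whole time. The key=value payload is easy to mine; a sketch that counts drops per source and destination port, with the field layout taken from the entries above:

    import re
    from collections import Counter

    # One entry as logged above, trailing TCP options omitted.
    SAMPLE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b"
              " MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10"
              " DST=192.168.122.106 LEN=60 TTL=62 ID=21545 DF PROTO=TCP"
              " SPT=43492 DPT=9105 WINDOW=32640 SYN URGP=0")

    FIELD = re.compile(r"(\w+)=(\S+)")

    def parse(entry: str) -> dict:
        # Bare flags (DF, SYN) and empty fields (OUT=) are skipped by the regex.
        return dict(FIELD.findall(entry))

    drops = Counter()
    for line in [SAMPLE]:          # in practice: iterate the journal
        f = parse(line)
        drops[(f["SRC"], f["DPT"])] += 1

    print(drops)                   # Counter({('192.168.122.10', '9105'): 1})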
Dec 5 04:36:12 localhost podman[224491]: 2025-12-05 09:36:12.197272232 +0000 UTC m=+0.082105886 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 5 04:36:12 localhost podman[224491]: 2025-12-05 09:36:12.226726389 +0000 UTC m=+0.111560013 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent) Dec 5 04:36:12 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:36:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54124 DF PROTO=TCP SPT=33740 DPT=9102 SEQ=821836382 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F5F860000000001030307) Dec 5 04:36:12 localhost sshd[224509]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:36:12 localhost systemd-logind[760]: New session 54 of user zuul. Dec 5 04:36:12 localhost systemd[1]: Started Session 54 of User zuul. Dec 5 04:36:12 localhost systemd[1]: session-54.scope: Deactivated successfully. Dec 5 04:36:12 localhost systemd-logind[760]: Session 54 logged out. Waiting for processes to exit. Dec 5 04:36:12 localhost systemd-logind[760]: Removed session 54. Dec 5 04:36:13 localhost python3.9[224620]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:36:13 localhost python3.9[224706]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927372.949212-3388-66275355218953/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:36:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45508 DF PROTO=TCP SPT=53650 DPT=9100 SEQ=677905117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F67470000000001030307) Dec 5 04:36:14 localhost python3.9[224814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:36:14 localhost python3.9[224869]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:36:15 localhost python3.9[224977]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:36:15 localhost python3.9[225063]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927374.9052107-3388-36560821861230/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:36:16 localhost python3.9[225171]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:36:16 localhost python3.9[225257]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927375.916174-3388-169122359291829/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=d097af55d0e0f04c3b1a46e6ef4206c1c28f58b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:36:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45510 DF PROTO=TCP SPT=53650 DPT=9100 SEQ=677905117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F73460000000001030307) Dec 5 04:36:17 localhost python3.9[225365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:36:17 localhost python3.9[225451]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927376.9467442-3388-158671594084151/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:36:18 localhost python3.9[225559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:36:18 localhost python3.9[225645]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927377.9821422-3388-90519033614101/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:36:19 localhost python3.9[225755]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh 
state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:36:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53770 DF PROTO=TCP SPT=55438 DPT=9101 SEQ=3208807916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F7F450000000001030307) Dec 5 04:36:20 localhost python3.9[225865]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:36:22 localhost python3.9[225975]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:36:22 localhost python3.9[226087]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:36:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47102 DF PROTO=TCP SPT=34456 DPT=9882 SEQ=1993149639 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F8C460000000001030307) Dec 5 04:36:24 localhost python3.9[226195]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:36:24 localhost python3.9[226305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:36:25 localhost python3.9[226391]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927384.4092894-3763-19918915051711/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:36:26 localhost python3.9[226499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:36:26 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53772 DF PROTO=TCP SPT=55438 DPT=9101 SEQ=3208807916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4F97050000000001030307) Dec 5 04:36:26 localhost python3.9[226585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927385.7353678-3808-7297217029924/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:36:27 localhost python3.9[226695]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Dec 5 04:36:28 localhost python3.9[226805]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 5 04:36:29 localhost python3[226915]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Dec 5 04:36:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45512 DF PROTO=TCP SPT=53650 DPT=9100 SEQ=677905117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4FA4450000000001030307) Dec 5 04:36:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:36:32 localhost systemd[1]: tmp-crun.dDuZGK.mount: Deactivated successfully. 
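The stat/copy pairs above follow Ansible's idempotent deploy pattern: ansible.legacy.stat reads the destination's sha1 (get_checksum=True, checksum_algorithm=sha1), and ansible.legacy.copy rewrites the file only when the logged source checksum differs. The same check, sketched in Python (illustrative, not the modules' actual code):

    import hashlib
    import shutil
    from pathlib import Path
    from typing import Optional

    def sha1_of(path: Path) -> Optional[str]:
        # Equivalent of stat ... get_checksum=True checksum_algorithm=sha1.
        if not path.is_file():
            return None
        return hashlib.sha1(path.read_bytes()).hexdigest()

    def deploy(src: Path, dest: Path, mode: int = 0o644) -> bool:
        # Copy only when contents differ, as copy does with force=True.
        if sha1_of(src) == sha1_of(dest):
            return False          # unchanged -> task reports "ok"
        shutil.copy2(src, dest)
        dest.chmod(mode)          # mode=0644 as in the log
        return True

This is why each deployed file (config.json, ssh-config, 02-nova-host-specific.conf, nova_statedir_ownership.py, run-on-host) appears as a stat followed by a copy carrying a checksum= value.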
Dec 5 04:36:32 localhost podman[226941]: 2025-12-05 09:36:32.183818475 +0000 UTC m=+0.077098113 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 5 04:36:32 localhost podman[226941]: 2025-12-05 09:36:32.224471255 +0000 UTC m=+0.117750873 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Dec 5 04:36:32 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:36:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44955 DF PROTO=TCP SPT=45590 DPT=9105 SEQ=4165837205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4FB5850000000001030307) Dec 5 04:36:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61347 DF PROTO=TCP SPT=48822 DPT=9882 SEQ=2285468733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4FBA450000000001030307) Dec 5 04:36:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:36:39 localhost podman[226984]: 2025-12-05 09:36:39.251911911 +0000 UTC m=+3.715617538 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:36:39 localhost podman[226984]: 2025-12-05 09:36:39.321611275 +0000 UTC m=+3.785316902 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:36:39 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:36:39 localhost podman[226928]: 2025-12-05 09:36:29.422030476 +0000 UTC m=+0.048079390 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Dec 5 04:36:39 localhost podman[227030]: Dec 5 04:36:39 localhost podman[227030]: 2025-12-05 09:36:39.576564117 +0000 UTC m=+0.077590828 container create cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}) Dec 5 04:36:39 localhost podman[227030]: 2025-12-05 09:36:39.531929144 +0000 UTC m=+0.032955905 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Dec 5 04:36:39 localhost python3[226915]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume 
/var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init Dec 5 04:36:41 localhost python3.9[227177]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:36:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49067 DF PROTO=TCP SPT=54522 DPT=9102 SEQ=2372512713 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4FD0960000000001030307) Dec 5 04:36:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49068 DF PROTO=TCP SPT=54522 DPT=9102 SEQ=2372512713 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4FD4850000000001030307) Dec 5 04:36:42 localhost python3.9[227357]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Dec 5 04:36:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:36:42 localhost systemd[1]: tmp-crun.2TrM8r.mount: Deactivated successfully. Dec 5 04:36:42 localhost podman[227393]: 2025-12-05 09:36:42.617210303 +0000 UTC m=+0.087567934 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, container_name=ovn_metadata_agent) Dec 5 04:36:42 localhost podman[227393]: 2025-12-05 09:36:42.628567072 +0000 UTC m=+0.098924713 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:36:42 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
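The three entries above capture one complete podman healthcheck pass for ovn_metadata_agent: systemd starts a transient unit named after the container ID, podman emits a health_status event (health_status=healthy) and then exec_died when the probe process exits, and the transient unit deactivates. A minimal sketch for tallying these events from a saved copy of this log; the field layout is inferred from the entries above, the naive ", "-split does not cope with the nested config_data label, and the "messages" path is a hypothetical example:

    import re
    from collections import Counter

    # Matches journald lines like the two podman events above:
    #   ... container health_status <64-hex id> (image=..., name=..., health_status=healthy, ...)
    EVENT_RE = re.compile(
        r"container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) "
        r"\((?P<labels>[^)]*)"
    )

    def health_events(lines):
        """Yield (container_name, event, health_status) from syslog lines."""
        for line in lines:
            m = EVENT_RE.search(line)
            if not m:
                continue
            # Good enough to pick out early labels such as name= and
            # health_status=; it mangles the nested config_data dict.
            labels = dict(
                kv.split("=", 1)
                for kv in m.group("labels").split(", ")
                if "=" in kv
            )
            yield (labels.get("name", m.group("cid")[:12]),
                   m.group("event"),
                   labels.get("health_status"))

    if __name__ == "__main__":
        with open("messages") as f:  # hypothetical path to a saved copy of this log
            counts = Counter(
                (name, status)
                for name, event, status in health_events(f)
                if event == "health_status"
            )
        for (name, status), n in sorted(counts.items()):
            print(f"{name}: {status} x{n}")
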
Dec 5 04:36:43 localhost python3.9[227504]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 5 04:36:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46255 DF PROTO=TCP SPT=40266 DPT=9100 SEQ=789373680 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4FDC770000000001030307) Dec 5 04:36:44 localhost python3[227614]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Dec 5 04:36:44 localhost python3[227614]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",#012 "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:31:10.62653219Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211779450,#012 "VirtualSize": 1211779450,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",#012 "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": 
"1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Dec 5 04:36:44 localhost podman[227667]: 2025-12-05 09:36:44.717555707 +0000 UTC m=+0.089870905 container remove 34a5cf22cb02e77e8dca42175a54c3cefd9a6adab4066bac1d7ddbafa3ee3a6c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'c48ee961a201e2ecc5561337e7450232-f3fe7c52055154c7f97b988e301af0d7'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Dec 5 04:36:44 localhost python3[227614]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Dec 5 04:36:44 localhost podman[227681]: Dec 5 04:36:44 localhost podman[227681]: 2025-12-05 09:36:44.824036163 +0000 UTC m=+0.086853693 container create ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=edpm) Dec 5 04:36:44 localhost podman[227681]: 2025-12-05 09:36:44.783750224 +0000 UTC m=+0.046567804 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Dec 5 04:36:44 localhost python3[227614]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro 
quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Dec 5 04:36:45 localhost python3.9[227826]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:36:46 localhost python3.9[227938]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:36:47 localhost python3.9[228047]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927406.6089134-4084-49738493619718/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:36:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46257 DF PROTO=TCP SPT=40266 DPT=9100 SEQ=789373680 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4FE8850000000001030307) Dec 5 04:36:48 localhost python3.9[228102]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:36:48 localhost systemd[1]: Reloading. Dec 5 04:36:48 localhost systemd-sysv-generator[228130]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:36:48 localhost systemd-rc-local-generator[228124]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:36:48 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:48 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:48 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:48 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
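The PODMAN-CONTAINER-DEBUG entries above record how ansible-edpm_container_manage expands a container's config_data dict into the podman create command line it then runs. A rough sketch of that expansion, under stated assumptions: it covers only the keys visible in the nova_compute entry, omits the --label flags, and the option mapping is inferred from the logged command rather than taken from the module source:

    def podman_create_argv(name, cfg):
        """Build a podman create argv from an edpm-style config_data dict.

        Handles only the keys seen in the nova_compute log entry above;
        --label handling is omitted for brevity.
        """
        argv = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, val in sorted(cfg.get("environment", {}).items()):
            argv += ["--env", f"{key}={val}"]
        argv += ["--log-driver", "journald", "--log-level", "info"]
        if "net" in cfg:
            argv += ["--network", cfg["net"]]
        if "pid" in cfg:
            argv += ["--pid", cfg["pid"]]
        if "privileged" in cfg:
            argv.append(f"--privileged={cfg['privileged']}")
        if "user" in cfg:
            argv += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(cfg["image"])
        if "command" in cfg:
            # Naive whitespace split; fine for 'kolla_start', not for the
            # quoted 'bash -c ...' form used by nova_compute_init above.
            argv += cfg["command"].split()
        return argv

    # Trimmed version of the nova_compute config_data shown above:
    cfg = {
        "image": "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "privileged": True,
        "user": "nova",
        "command": "kolla_start",
        "net": "host",
        "pid": "host",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": ["/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro"],
    }
    print(" ".join(podman_create_argv("nova_compute", cfg)))
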
Dec 5 04:36:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:48 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:48 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:49 localhost python3.9[228192]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:36:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60668 DF PROTO=TCP SPT=50190 DPT=9101 SEQ=2778537623 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC4FF4850000000001030307) Dec 5 04:36:51 localhost systemd[1]: Reloading. Dec 5 04:36:51 localhost systemd-rc-local-generator[228218]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:36:51 localhost systemd-sysv-generator[228223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:36:51 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:51 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:51 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:51 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:36:51 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:51 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:51 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:51 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:36:51 localhost systemd[1]: Starting nova_compute container... Dec 5 04:36:51 localhost systemd[1]: Started libcrun container. 
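The recurring kernel DROPPING entries (most recently at 04:36:47 and 04:36:50 above) are netfilter LOG output on br-ex: repeated TCP SYNs from 192.168.122.10 to ports 9100-9105 and 9882 on 192.168.122.106, consistent with a metrics scraper whose probes are being dropped. A small sketch that summarizes which destination ports are affected; the field names come straight from the entries, and "messages" is again a hypothetical saved-log path:

    import re
    from collections import Counter

    # The leading space before SRC=/PROTO= avoids matching MACSRC=/MACPROTO=.
    DROP_RE = re.compile(
        r"DROPPING: .* SRC=(?P<src>\S+) DST=(?P<dst>\S+) .* PROTO=(?P<proto>\S+) "
        r"SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)"
    )

    def dropped_flows(lines):
        """Count (src, dst, proto, dport) tuples seen in DROPPING lines."""
        counts = Counter()
        for line in lines:
            m = DROP_RE.search(line)
            if m:
                counts[(m["src"], m["dst"], m["proto"], m["dpt"])] += 1
        return counts

    if __name__ == "__main__":
        with open("messages") as f:  # hypothetical path to a saved copy of this log
            for (src, dst, proto, dpt), n in sorted(dropped_flows(f).items()):
                print(f"{src} -> {dst} {proto}/{dpt}: {n} drops")
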
Dec 5 04:36:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 5 04:36:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Dec 5 04:36:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 04:36:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 5 04:36:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 04:36:51 localhost podman[228233]: 2025-12-05 09:36:51.646307207 +0000 UTC m=+0.122771147 container init ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.schema-version=1.0) Dec 5 04:36:51 localhost podman[228233]: 2025-12-05 09:36:51.657481351 +0000 UTC m=+0.133945311 container start ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, managed_by=edpm_ansible, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:36:51 localhost podman[228233]: nova_compute Dec 5 04:36:51 localhost nova_compute[228248]: + sudo -E kolla_set_configs Dec 5 04:36:51 localhost systemd[1]: Started nova_compute container. Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Validating config file Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Copying service configuration files Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Deleting /etc/nova/nova.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /etc/nova/nova.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Deleting /etc/ceph Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Creating directory /etc/ceph Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /etc/ceph Dec 5 04:36:51 
localhost nova_compute[228248]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Deleting /usr/sbin/iscsiadm Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Writing out command to execute Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:36:51 localhost nova_compute[228248]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 5 04:36:51 localhost nova_compute[228248]: ++ cat /run_command Dec 5 04:36:51 localhost nova_compute[228248]: + CMD=nova-compute Dec 5 04:36:51 localhost nova_compute[228248]: + ARGS= Dec 5 04:36:51 localhost nova_compute[228248]: + sudo kolla_copy_cacerts Dec 5 04:36:51 localhost nova_compute[228248]: + [[ ! -n '' ]] Dec 5 04:36:51 localhost nova_compute[228248]: + . 
kolla_extend_start Dec 5 04:36:51 localhost nova_compute[228248]: Running command: 'nova-compute' Dec 5 04:36:51 localhost nova_compute[228248]: + echo 'Running command: '\''nova-compute'\''' Dec 5 04:36:51 localhost nova_compute[228248]: + umask 0022 Dec 5 04:36:51 localhost nova_compute[228248]: + exec nova-compute Dec 5 04:36:53 localhost nova_compute[228248]: 2025-12-05 09:36:53.370 228252 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 5 04:36:53 localhost nova_compute[228248]: 2025-12-05 09:36:53.370 228252 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 5 04:36:53 localhost nova_compute[228248]: 2025-12-05 09:36:53.370 228252 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 5 04:36:53 localhost nova_compute[228248]: 2025-12-05 09:36:53.371 228252 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Dec 5 04:36:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32879 DF PROTO=TCP SPT=34826 DPT=9101 SEQ=4117106164 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5000450000000001030307) Dec 5 04:36:53 localhost nova_compute[228248]: 2025-12-05 09:36:53.479 228252 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:36:53 localhost nova_compute[228248]: 2025-12-05 09:36:53.500 228252 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:36:53 localhost nova_compute[228248]: 2025-12-05 09:36:53.500 228252 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Dec 5 04:36:53 localhost nova_compute[228248]: 2025-12-05 09:36:53.890 228252 INFO nova.virt.driver [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.003 228252 INFO nova.compute.provider_config [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.010 228252 WARNING nova.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.010 228252 DEBUG oslo_concurrency.lockutils [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.011 228252 DEBUG oslo_concurrency.lockutils [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.011 228252 DEBUG oslo_concurrency.lockutils [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.011 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.011 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.011 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.012 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.012 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.012 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.012 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost 
nova_compute[228248]: 2025-12-05 09:36:54.012 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.012 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.012 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.013 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.013 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.013 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.013 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.013 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.013 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.013 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.013 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.014 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.014 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] console_host = np0005546419.localdomain log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.014 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.014 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.014 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.014 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.014 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.015 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.015 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.015 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.015 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.015 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.015 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] enable_new_services = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.015 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.016 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.016 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.016 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.016 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.016 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.016 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.016 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] host = np0005546419.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.017 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.017 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.017 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.017 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.017 228252 DEBUG oslo_service.service [None 
req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.017 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.017 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.018 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.018 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.018 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.018 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.018 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.018 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.018 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.019 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.019 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.019 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:36:54 localhost 
nova_compute[228248]: 2025-12-05 09:36:54.019 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 (grouped options at cfg.py:2609):

log_dir = None
log_file = None
log_options = True
log_rotate_interval = 1
log_rotate_interval_type = days
log_rotation_type = size
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s
long_rpc_timeout = 1800
max_concurrent_builds = 10
max_concurrent_live_migrations = 1
max_concurrent_snapshots = 5
max_local_block_devices = 3
max_logfile_count = 1
max_logfile_size_mb = 20
maximum_instance_delete_attempts = 5
metadata_listen = 0.0.0.0
metadata_listen_port = 8775
metadata_workers = None
migrate_max_retries = -1
mkisofs_cmd = /usr/bin/mkisofs
my_block_storage_ip = 192.168.122.106
my_ip = 192.168.122.106
network_allocate_retries = 0
non_inheritable_image_properties = ['cache_in_nova', 'bittorrent']
osapi_compute_listen = 0.0.0.0
osapi_compute_listen_port = 8774
osapi_compute_unique_server_name_scope =
osapi_compute_workers = None
password_length = 12
periodic_enable = True
periodic_fuzzy_delay = 60
pointer_model = usbtablet
preallocate_images = none
publish_errors = False
pybasedir = /usr/lib/python3.9/site-packages
ram_allocation_ratio = None
rate_limit_burst = 0
rate_limit_except_level = CRITICAL
rate_limit_interval = 0
reboot_timeout = 0
reclaim_instance_interval = 0
record = None
reimage_timeout_per_gb = 20
report_interval = 10
rescue_timeout = 0
reserved_host_cpus = 0
reserved_host_disk_mb = 0
reserved_host_memory_mb = 512
reserved_huge_pages = None
resize_confirm_window = 0
resize_fs_using_block_device = False
resume_guests_state_on_host_boot = False
rootwrap_config = /etc/nova/rootwrap.conf
rpc_response_timeout = 60
run_external_periodic_tasks = True
running_deleted_instance_action = reap
running_deleted_instance_poll_interval = 1800
running_deleted_instance_timeout = 0
scheduler_instance_sync_interval = 120
service_down_time = 60
servicegroup_driver = db
shelved_offload_time = 0
shelved_poll_interval = 3600
shutdown_timeout = 60
source_is_ipv6 = False
ssl_only = False
state_path = /var/lib/nova
sync_power_state_interval = 600
sync_power_state_pool_size = 1000
syslog_log_facility = LOG_USER
tempdir = None
timeout_nbd = 10
transport_url = ****
update_resources_interval = 0
use_cow_images = True
use_eventlog = False
use_journal = False
use_json = False
use_rootwrap_daemon = False
use_stderr = False
use_syslog = False
vcpu_pin_set = None
vif_plugging_is_fatal = True
vif_plugging_timeout = 300
virt_mkfs = []
volume_usage_poll_interval = 0
watch_log_file = False
web = /usr/share/spice-html5
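Every option = value record in this dump is produced by oslo.config: with debug logging enabled, the service calls ConfigOpts.log_opt_values() at startup, which walks the registered options and emits one line per option (cfg.py:2602 for top-level options, cfg.py:2609 for grouped ones). Options registered with secret=True, such as transport_url above, are printed masked as ****. A minimal sketch of that mechanism, assuming only oslo.config's public API; the option set here is illustrative, not Nova's full registry:

import logging
from oslo_config import cfg

LOG = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG)

conf = cfg.ConfigOpts()
conf.register_opts([
    cfg.StrOpt('log_dir'),                     # unset -> logged as "log_dir = None"
    cfg.IntOpt('long_rpc_timeout', default=1800),
    cfg.StrOpt('transport_url', secret=True),  # secret=True -> logged as ****
])
conf([])  # parse an (empty) command line so values become readable

# Emits one DEBUG record per registered option, as in the journal above.
conf.log_opt_values(LOG, logging.DEBUG)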
oslo_concurrency.disable_process_locking = False
oslo_concurrency.lock_path = /var/lib/nova/tmp

oslo_messaging_metrics.metrics_buffer_size = 1000
oslo_messaging_metrics.metrics_enabled = False
oslo_messaging_metrics.metrics_process_name =
oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock
oslo_messaging_metrics.metrics_thread_stop_timeout = 10

api.auth_strategy = keystone
api.compute_link_prefix = None
api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01
api.dhcp_domain =
api.enable_instance_password = True
api.glance_link_prefix = None
api.instance_list_cells_batch_fixed_size = 100
api.instance_list_cells_batch_strategy = distributed
api.instance_list_per_project_cells = False
api.list_records_by_skipping_down_cells = True
api.local_metadata_per_cell = False
api.max_limit = 1000
api.metadata_cache_expiration = 15
api.neutron_default_tenant_id = default
api.use_forwarded_for = False
api.use_neutron_default_nets = False
api.vendordata_dynamic_connect_timeout = 5
api.vendordata_dynamic_failure_fatal = False
api.vendordata_dynamic_read_timeout = 5
api.vendordata_dynamic_ssl_certfile =
api.vendordata_dynamic_targets = []
api.vendordata_jsonfile_path = None
api.vendordata_providers = ['StaticJSON']

cache.backend = oslo_cache.dict
cache.backend_argument = ****
cache.config_prefix = cache.oslo
cache.dead_timeout = 60.0
cache.debug_cache_backend = False
cache.enable_retry_client = False
cache.enable_socket_keepalive = False
cache.enabled = True
cache.expiration_time = 600
cache.hashclient_retry_attempts = 2
cache.hashclient_retry_delay = 1.0
cache.memcache_dead_retry = 300
cache.memcache_password =
cache.memcache_pool_connection_get_timeout = 10
cache.memcache_pool_flush_on_reconnect = False
cache.memcache_pool_maxsize = 10
cache.memcache_pool_unused_timeout = 60
cache.memcache_sasl_enabled = False
cache.memcache_servers = ['localhost:11211']
cache.memcache_socket_timeout = 1.0
cache.memcache_username =
cache.proxies = []
cache.retry_attempts = 2
cache.retry_delay = 0.0
cache.socket_keepalive_count = 1
cache.socket_keepalive_idle = 1
cache.socket_keepalive_interval = 1
cache.tls_allowed_ciphers = None
cache.tls_cafile = None
cache.tls_certfile = None
cache.tls_enabled = False
cache.tls_keyfile = None
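The dotted prefixes (oslo_concurrency., api., cache., and so on) are oslo.config option groups, i.e. [section] headers in nova.conf; an option such as cache.enabled is registered under an OptGroup and read back as conf.cache.enabled. A short sketch under the same assumption that only oslo.config's public API is used, mirroring a few of the cache.* defaults above:

from oslo_config import cfg

conf = cfg.ConfigOpts()
cache_group = cfg.OptGroup('cache')
conf.register_group(cache_group)
conf.register_opts([
    cfg.BoolOpt('enabled', default=True),
    cfg.StrOpt('backend', default='oslo_cache.dict'),
    cfg.ListOpt('memcache_servers', default=['localhost:11211']),
], group=cache_group)
conf([])

# Grouped options are addressed as <group>.<option>, matching the
# "cache.enabled = True" style of the dump above.
assert conf.cache.enabled is True
print(conf.cache.backend, conf.cache.memcache_servers)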
cinder.auth_section = None
cinder.auth_type = password
cinder.cafile = None
cinder.catalog_info = volumev3:cinderv3:internalURL
cinder.certfile = None
cinder.collect_timing = False
cinder.cross_az_attach = True
cinder.debug = False
cinder.endpoint_template = None
cinder.http_retries = 3
cinder.insecure = False
cinder.keyfile = None
cinder.os_region_name = None
cinder.split_loggers = False
cinder.timeout = None

compute.consecutive_build_service_disable_threshold = 10
compute.cpu_dedicated_set = None
compute.cpu_shared_set = None
compute.image_type_exclude_list = []
compute.live_migration_wait_for_vif_plug = True
compute.max_concurrent_disk_ops = 0
compute.max_disk_devices_to_attach = -1
compute.packing_host_numa_cells_allocation_strategy = False
compute.provider_config_location = /etc/nova/provider_config/
compute.resource_provider_association_refresh = 300
compute.shutdown_retry_interval = 10
compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse']

conductor.workers = None

console.allowed_origins = []
console.ssl_ciphers = None
console.ssl_minimum_version = default

consoleauth.token_ttl = 600

cyborg.cafile = None
cyborg.certfile = None
cyborg.collect_timing = False
cyborg.connect_retries = None
cyborg.connect_retry_delay = None
cyborg.endpoint_override = None
cyborg.insecure = False
cyborg.keyfile = None
cyborg.max_version = None
cyborg.min_version = None
cyborg.region_name = None
cyborg.service_name = None
cyborg.service_type = accelerator
cyborg.split_loggers = False
cyborg.status_code_retries = None
cyborg.status_code_retry_delay = None
cyborg.timeout = None
cyborg.valid_interfaces = ['internal', 'public']
cyborg.version = None
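The same effective values can be inspected outside the running service by pointing a ConfigOpts instance at the service's configuration file. A sketch, assuming the conventional path /etc/nova/nova.conf (this excerpt does not itself name the file); connection is registered secret=True, so it would again be logged masked as **** below:

from oslo_config import cfg

conf = cfg.ConfigOpts()
conf.register_opts([
    cfg.StrOpt('connection', secret=True),  # masked as **** when logged
    cfg.IntOpt('max_pool_size', default=5),
], group='database')

# Assumed path: the standard Nova configuration file.
conf(['--config-file', '/etc/nova/nova.conf'], project='nova')
print(conf.database.max_pool_size)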
2025-12-05 09:36:54.050 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.050 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.050 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.050 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.050 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.050 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.051 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.051 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.051 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.051 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.051 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.051 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.052 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.connection_parameters = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.052 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.052 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.052 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.052 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.052 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.052 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.052 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.053 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.053 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.053 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.053 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.053 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.053 228252 DEBUG 
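The records above come from oslo.config's log_opt_values(), which walks every registered option group at service start and emits one DEBUG record per option from oslo_config/cfg.py; options registered with secret=True (the database and api_database connection strings here) are masked as ****. A minimal, runnable sketch of that mechanism, using illustrative option definitions rather than Nova's real ones:

    # Sketch only: illustrative options, not Nova's actual [database] set.
    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    CONF.register_opts(
        [
            cfg.StrOpt('connection', secret=True),        # dumped as ****
            cfg.IntOpt('connection_recycle_time', default=3600),
            cfg.IntOpt('max_pool_size', default=5),
            cfg.IntOpt('max_overflow', default=50),
        ],
        group='database')

    logging.basicConfig(level=logging.DEBUG)
    CONF([], project='demo')                  # parse an (empty) command line
    CONF.log_opt_values(LOG, logging.DEBUG)   # one DEBUG record per option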
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.054 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 (continued):
  devices.enabled_mdev_types = []

  ephemeral_storage_encryption.cipher = aes-xts-plain64
  ephemeral_storage_encryption.enabled = False
  ephemeral_storage_encryption.key_size = 512

  glance.api_servers = None
  glance.cafile = None
  glance.certfile = None
  glance.collect_timing = False
  glance.connect_retries = None
  glance.connect_retry_delay = None
  glance.debug = False
  glance.default_trusted_certificate_ids = []
  glance.enable_certificate_validation = False
  glance.enable_rbd_download = False
  glance.endpoint_override = None
  glance.insecure = False
  glance.keyfile = None
  glance.max_version = None
  glance.min_version = None
  glance.num_retries = 3
  glance.rbd_ceph_conf =
  glance.rbd_connect_timeout = 5
  glance.rbd_pool =
  glance.rbd_user =
  glance.region_name = regionOne
  glance.service_name = None
  glance.service_type = image
  glance.split_loggers = False
  glance.status_code_retries = None
  glance.status_code_retry_delay = None
  glance.timeout = None
  glance.valid_interfaces = ['internal']
  glance.verify_glance_signatures = False
  glance.version = None

  guestfs.debug = False
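The glance, ironic, and keystone sections repeat the same knobs (cafile, certfile, timeout, service_type, valid_interfaces, region_name, endpoint_override, connect_retries, ...) because these are keystoneauth1's standard session and adapter option sets, registered once per client group. A sketch under that assumption; the group name mirrors the dump, but the wiring is illustrative, not Nova's exact code:

    # Sketch: how a service-client group like [glance] acquires the standard
    # keystoneauth1 knobs seen in the dump (assumed wiring, for illustration).
    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF

    # TLS/session options: cafile, certfile, keyfile, insecure, timeout, ...
    ks_loading.register_session_conf_options(CONF, 'glance')
    # Endpoint-selection options: service_type, service_name, region_name,
    # valid_interfaces, endpoint_override, min_version, max_version, ...
    ks_loading.register_adapter_conf_options(CONF, 'glance')

    CONF([], project='demo')
    CONF.set_override('service_type', 'image', group='glance')
    CONF.set_override('valid_interfaces', ['internal'], group='glance')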
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.060 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 (continued):
  hyperv.config_drive_cdrom = False
  hyperv.config_drive_inject_password = False
  hyperv.dynamic_memory_ratio = 1.0
  hyperv.enable_instance_metrics_collection = False
  hyperv.enable_remotefx = False
  hyperv.instances_path_share =
  hyperv.iscsi_initiator_list = []
  hyperv.limit_cpu_features = False
  hyperv.mounted_disk_query_retry_count = 10
  hyperv.mounted_disk_query_retry_interval = 5
  hyperv.power_state_check_timeframe = 60
  hyperv.power_state_event_polling_interval = 2
  hyperv.qemu_img_cmd = qemu-img.exe
  hyperv.use_multipath_io = False
  hyperv.volume_attach_retry_count = 10
  hyperv.volume_attach_retry_interval = 5
  hyperv.vswitch_name = None
  hyperv.wait_soft_reboot_seconds = 60

  mks.enabled = False
  mks.mksproxy_base_url = http://127.0.0.1:6090/

  image_cache.manager_interval = 2400
  image_cache.precache_concurrency = 1
  image_cache.remove_unused_base_images = True
  image_cache.remove_unused_original_minimum_age_seconds = 86400
  image_cache.remove_unused_resized_minimum_age_seconds = 3600
  image_cache.subdirectory_name = _base

  ironic.api_max_retries = 60
  ironic.api_retry_interval = 2
  ironic.auth_section = None
  ironic.auth_type = None
  ironic.cafile = None
  ironic.certfile = None
  ironic.collect_timing = False
  ironic.connect_retries = None
  ironic.connect_retry_delay = None
  ironic.endpoint_override = None
  ironic.insecure = False
  ironic.keyfile = None
  ironic.max_version = None
  ironic.min_version = None
  ironic.partition_key = None
  ironic.peer_list = []
  ironic.region_name = None
  ironic.serial_console_state_timeout = 10
  ironic.service_name = None
  ironic.service_type = baremetal
  ironic.split_loggers = False
  ironic.status_code_retries = None
  ironic.status_code_retry_delay = None
  ironic.timeout = None
  ironic.valid_interfaces = ['internal', 'public']
  ironic.version = None
  key_manager.backend = barbican
  key_manager.fixed_key = ****

  barbican.auth_endpoint = http://localhost/identity/v3
  barbican.barbican_api_version = None
  barbican.barbican_endpoint = None
  barbican.barbican_endpoint_type = internal
  barbican.barbican_region_name = None
  barbican.cafile = None
  barbican.certfile = None
  barbican.collect_timing = False
  barbican.insecure = False
  barbican.keyfile = None
  barbican.number_of_retries = 60
  barbican.retry_delay = 1
  barbican.send_service_user_token = False
  barbican.split_loggers = False
  barbican.timeout = None
  barbican.verify_ssl = True
  barbican.verify_ssl_path = None

  barbican_service_user.auth_section = None
  barbican_service_user.auth_type = None
  barbican_service_user.cafile = None
  barbican_service_user.certfile = None
  barbican_service_user.collect_timing = False
  barbican_service_user.insecure = False
  barbican_service_user.keyfile = None
  barbican_service_user.split_loggers = False
  barbican_service_user.timeout = None

  vault.approle_role_id = None
  vault.approle_secret_id = None
  vault.cafile = None
  vault.certfile = None
  vault.collect_timing = False
  vault.insecure = False
  vault.keyfile = None
  vault.kv_mountpoint = secret
  vault.kv_version = 2
  vault.namespace = None
  vault.root_token_id = None
  vault.split_loggers = False
  vault.ssl_ca_crt_file = None
  vault.timeout = None
  vault.use_ssl = False
  vault.vault_url = http://127.0.0.1:8200

  keystone.cafile = None
  keystone.certfile = None
  keystone.collect_timing = False
  keystone.connect_retries = None
  keystone.connect_retry_delay = None
  keystone.endpoint_override = None
  keystone.insecure = False
  keystone.keyfile = None
  keystone.max_version = None
  keystone.min_version = None
  keystone.region_name = None
  keystone.service_name = None
  keystone.service_type = identity
  keystone.split_loggers = False
  keystone.status_code_retries = None
  keystone.status_code_retry_delay = None
  keystone.timeout = None
  keystone.valid_interfaces = ['internal', 'public']
  keystone.version = None
  libvirt.connection_uri =
  libvirt.cpu_mode = host-model
  libvirt.cpu_model_extra_flags = []
  libvirt.cpu_models = []
  libvirt.cpu_power_governor_high = performance
  libvirt.cpu_power_governor_low = powersave
  libvirt.cpu_power_management = False
  libvirt.cpu_power_management_strategy = cpu_state
  libvirt.device_detach_attempts = 8
  libvirt.device_detach_timeout = 20
  libvirt.disk_cachemodes = []
  libvirt.disk_prefix = None
  libvirt.enabled_perf_events = []
  libvirt.file_backed_memory = 0
  libvirt.gid_maps = []
  libvirt.hw_disk_discard = None
  libvirt.hw_machine_type = ['x86_64=q35']
  libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf
  libvirt.images_rbd_glance_copy_poll_interval = 15
  libvirt.images_rbd_glance_copy_timeout = 600
  libvirt.images_rbd_glance_store_name = default_backend
  libvirt.images_rbd_pool = vms
  libvirt.images_type = rbd
  libvirt.images_volume_group = None
  libvirt.inject_key = False
  libvirt.inject_partition = -2
  libvirt.inject_password = False
  libvirt.iscsi_iface = None
  libvirt.iser_use_multipath = False
  libvirt.live_migration_bandwidth = 0
  libvirt.live_migration_completion_timeout = 800
  libvirt.live_migration_downtime = 500
  libvirt.live_migration_downtime_delay = 75
  libvirt.live_migration_downtime_steps = 10
  libvirt.live_migration_inbound_addr = None
  libvirt.live_migration_permit_auto_converge = True
  libvirt.live_migration_permit_post_copy = True
  libvirt.live_migration_scheme = None
  libvirt.live_migration_timeout_action = force_complete
  libvirt.live_migration_tunnelled = False

Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.082 228252 WARNING oslo_config.cfg [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 5 04:36:54 localhost nova_compute[228248]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 5 04:36:54 localhost nova_compute[228248]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 5 04:36:54 localhost nova_compute[228248]: and ``live_migration_inbound_addr`` respectively.
Dec 5 04:36:54 localhost nova_compute[228248]: ). Its value may be silently ignored in the future.

  libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey
  libvirt.live_migration_with_native_tls = False
  libvirt.max_queues = None
  libvirt.mem_stats_period_seconds = 10
  libvirt.nfs_mount_options = None
  libvirt.nfs_mount_point_base = /var/lib/nova/mnt
  libvirt.num_aoe_discover_tries = 3
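The WARNING interleaved above is oslo.config's standard deprecation path: an option declared with deprecated_for_removal=True plus a deprecated_reason yields exactly this "Deprecated: Option ... is deprecated for removal (...). Its value may be silently ignored in the future." record. A sketch follows; the reason text is quoted from the log, while the exact point at which the warning fires (on set versus first read) is an assumption:

    # Sketch: declaring a deprecated-for-removal option the way that produces
    # the WARNING above. The reason string is taken verbatim from the log.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [
            cfg.StrOpt(
                'live_migration_uri',
                deprecated_for_removal=True,
                deprecated_reason=(
                    'live_migration_uri is deprecated for removal in favor '
                    'of two other options that allow to change live '
                    'migration scheme and target URI: '
                    '``live_migration_scheme`` and '
                    '``live_migration_inbound_addr`` respectively.'),
            ),
        ],
        group='libvirt')

    CONF([], project='demo')
    CONF.set_override(
        'live_migration_uri',
        'qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey',
        group='libvirt')
    _ = CONF.libvirt.live_migration_uri   # assumed trigger for the warning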
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.083 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.083 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.083 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.084 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.084 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.084 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.084 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.084 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.084 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.084 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.085 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.085 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.rbd_secret_uuid = 79feddb1-4bfc-557f-83b9-0d57c9f66c1b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.085 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.085 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.085 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.085 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.085 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.086 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.086 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.086 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.086 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.086 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.086 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.086 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.087 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.087 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.087 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.087 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.087 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.087 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.087 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.088 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.088 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.088 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.088 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.088 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.088 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.088 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
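
The libvirt rbd_* and volume_* records above correspond to a nova.conf section along these lines, reconstructed from the dumped values rather than taken from the actual file:

    [libvirt]
    virt_type = kvm
    rbd_user = openstack
    rbd_secret_uuid = 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
    rbd_connect_timeout = 5
    volume_use_multipath = True
    volume_clear = zero
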
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.088 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.089 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.089 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.089 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.089 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.089 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.089 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.090 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.090 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.090 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.090 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.090 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.090 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.090 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.091 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.091 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.091 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.091 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.091 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.091 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.092 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.092 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.092 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.092 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.092 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.092 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.092 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.093 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.093 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.093 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.093 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.093 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.093 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.094 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.094 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.094 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.094 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.094 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.094 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.095 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.095 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.095 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.095 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.095 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.095 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.096 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.096 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.096 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.096 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.096 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.096 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.096 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.097 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.097 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.097 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.097 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.097 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.097 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.097 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.098 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.098 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.098 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.098 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.098 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.098 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.099 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.099 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.099 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.099 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.099 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.099 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.099 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.100 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.100 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.100 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.100 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.100 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.100 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.100 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.101 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.101 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.101 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.101 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.101 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.101 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.102 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.102 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.102 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.102 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
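
The placement.* records above describe how this nova-compute authenticates to the Placement API via Keystone. Reconstructed as a nova.conf section from the dumped values (the password is masked in the dump, so it is masked here as well):

    [placement]
    auth_type = password
    auth_url = http://keystone-internal.openstack.svc:5000
    username = nova
    password = ****
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    region_name = regionOne
    valid_interfaces = internal
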
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.102 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.102 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.102 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.103 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.103 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.103 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.103 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.103 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.103 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.104 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.104 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.104 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.104 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.104 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.104 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.104 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.105 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.105 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.105 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.105 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.105 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.105 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.105 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.106 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.106 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.106 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.106 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.106 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.106 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.106 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
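
The filter_scheduler list values above are printed in Python-repr form; in nova.conf a list option is written comma-separated. The dumped configuration corresponds roughly to:

    [filter_scheduler]
    available_filters = nova.scheduler.filters.all_filters
    enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
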
oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.107 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.107 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.107 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.107 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.107 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.107 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.108 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.108 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.108 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.108 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.108 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.108 228252 DEBUG 
oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.108 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.109 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.109 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.109 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.109 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.109 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.109 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.110 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.110 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.110 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.110 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.111 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] service_user.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.111 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.111 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.111 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.111 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.111 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.112 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.112 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.112 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.112 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.112 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.112 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.113 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.113 228252 DEBUG oslo_service.service [None 
req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.113 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.113 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.113 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.113 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.113 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.114 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.114 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.114 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.114 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.114 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.114 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.114 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:36:54 localhost 
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.115 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.115 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.115 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.115 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.115 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.115 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.115 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.116 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.116 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.116 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.116 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.116 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.116 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.116 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.117 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.117 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.117 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.117 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.117 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.117 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.117 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.117 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.118 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.118 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.118 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.118 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.118 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.118 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.118 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.119 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.119 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.119 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.119 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.119 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.119 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.120 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.120 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.120 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.120 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.120 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.120 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.121 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.121 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.121 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.121 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.121 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.121 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.121 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.121 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.122 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.122 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.122 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.122 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.122 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.122 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.122 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.123 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.123 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.123 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.123 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.123 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.123 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.124 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.124 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.124 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.124 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.124 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.124 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.124 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.125 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.125 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.125 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.125 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.125 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.125 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
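[Editorial note: since each record carries the fully qualified option name, the node's effective configuration can be reconstructed from the journal alone. A small parsing sketch; the regex is tailored to the record shape above, and the sample line is shortened:

    import re
    from collections import defaultdict

    # "<group>.<option> = <value> log_opt_values" inside a journal record
    OPT_RE = re.compile(
        r'\] (?P<group>\w+)\.(?P<opt>\w+) = (?P<value>.*?) log_opt_values ')

    def effective_config(journal_lines):
        """Rebuild {group: {option: value}} from an option dump."""
        conf = defaultdict(dict)
        for line in journal_lines:
            m = OPT_RE.search(line)
            if m:
                conf[m.group('group')][m.group('opt')] = m.group('value')
        return dict(conf)

    sample = ('Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 '
              '09:36:54.124 228252 DEBUG oslo_service.service [None req-... -] '
              'wsgi.keep_alive = True log_opt_values '
              '/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609')
    print(effective_config([sample]))  # {'wsgi': {'keep_alive': 'True'}}
]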
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.125 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.126 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.126 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.126 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.126 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.126 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.126 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.126 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.127 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.127 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.127 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.127 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.127 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.127 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.127 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.128 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.128 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.128 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.128 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.128 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.128 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.128 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.129 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.129 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.129 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.129 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.129 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.129 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.129 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.129 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.130 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.130 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.130 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.130 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.130 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.130 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.130 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.131 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.131 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.131 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.131 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.131 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.131 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.131 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.132 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.132 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.132 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.132 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.132 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
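[Editorial note: the parsed mapping renders naturally back into nova.conf form, which is convenient when diffing a node's effective settings (for instance the quorum-queue and TLS options just logged) against a deployed template. A sketch reusing effective_config() from the note above:

    def to_ini(conf):
        """Render {group: {option: value}} as oslo.config-style INI text."""
        lines = []
        for group in sorted(conf):
            lines.append(f'[{group}]')
            lines.extend(f'{opt} = {val}'
                         for opt, val in sorted(conf[group].items()))
            lines.append('')
        return '\n'.join(lines)

    print(to_ini({'oslo_messaging_rabbit': {
        'rabbit_quorum_queue': 'True',
        'amqp_durable_queues': 'True',
        'ssl': 'False',
    }}))
    # [oslo_messaging_rabbit]
    # amqp_durable_queues = True
    # rabbit_quorum_queue = True
    # ssl = False
]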
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.132 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.132 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.133 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.133 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.133 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.133 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.133 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.133 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.133 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.134 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.134 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.134 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.134 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.134 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.134 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.134 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.134 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.135 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.135 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.135 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.135 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.135 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.135 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.135 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.136 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.136 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.136 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.136 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.136 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.136 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.136 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.136 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.137 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.137 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.137 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.137 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.137 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.137 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.137 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.138 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.138 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.138 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.138 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.138 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.138 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.138 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.138 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.139 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.139 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.139 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.139 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.139 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.139 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.139 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.140 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.140 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.140 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.140 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.140 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.140 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.140 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.141 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.141 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.141 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.141 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.141 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.141 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.141 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.142 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.142 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.142 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.142 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.142 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.142 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.142 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.143 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.143 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.143 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.143 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.143 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.143 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.143 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.143 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.144 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.144 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.144 228252 DEBUG oslo_service.service [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.145 228252 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
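[Editorial note: the privsep contexts above log their Linux capability sets numerically (vif_plug_ovs_privileged.capabilities = [12, 1], nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21]). Decoded against the standard numbering from linux/capability.h:

    # Subset of linux/capability.h covering the values logged above.
    CAP_NAMES = {
        0: 'CAP_CHOWN',
        1: 'CAP_DAC_OVERRIDE',
        2: 'CAP_DAC_READ_SEARCH',
        3: 'CAP_FOWNER',
        12: 'CAP_NET_ADMIN',
        21: 'CAP_SYS_ADMIN',
    }

    def decode_caps(nums):
        return [CAP_NAMES.get(n, f'CAP_{n}') for n in nums]

    print(decode_caps([0, 1, 2, 3, 12, 21]))
    # ['CAP_CHOWN', 'CAP_DAC_OVERRIDE', 'CAP_DAC_READ_SEARCH',
    #  'CAP_FOWNER', 'CAP_NET_ADMIN', 'CAP_SYS_ADMIN']

So the VIF-plugging helpers run with network administration rights only, while nova_sys_admin additionally holds file-ownership and CAP_SYS_ADMIN privileges.]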
-] Determined node identity 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from /var/lib/nova/compute_id#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.224 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.225 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.225 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.225 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.250 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.253 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.254 228252 INFO nova.virt.libvirt.driver [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Connection event '1' reason 'None'#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.268 228252 DEBUG nova.virt.libvirt.volume.mount [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.278 228252 INFO nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Libvirt host capabilities
[libvirt <capabilities> XML elided: the element tags were stripped during extraction, leaving only bare text nodes on otherwise empty log continuation lines. Recoverable content: host UUID 2b745fc2-5ba6-425d-9fc8-d9117ea29cbc; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; live-migration transports tcp and rdma; memory figures 16116612 / 4029153 / 0 / 0; secmodels selinux (baselabels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107); hvm guest support at wordsizes 32 and 64 via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.8.0 (canonical q35).]#033[00m
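[editor's note] The entries above show nova-compute opening qemu:///system and logging the host <capabilities> document summarized in the placeholder. As an illustration only (not part of the captured log), a minimal libvirt-python sketch that fetches the same document and pulls out the fields listed above; it assumes the libvirt Python bindings are installed and the caller can read the libvirt socket:

    import libvirt
    import xml.etree.ElementTree as ET

    # Open the same URI nova logs above ("Connecting to libvirt: qemu:///system").
    conn = libvirt.open("qemu:///system")
    caps = ET.fromstring(conn.getCapabilities())  # the XML nova dumps verbatim
    print(caps.findtext("host/uuid"))        # e.g. 2b745fc2-5ba6-425d-9fc8-d9117ea29cbc
    print(caps.findtext("host/cpu/arch"))    # e.g. x86_64
    print(caps.findtext("host/cpu/model"))   # e.g. EPYC-Rome-v4
    # Machine types advertised per guest arch (the pc-i440fx-*/pc-q35-* lists above):
    for guest in caps.findall("guest"):
        arch = guest.find("arch")
        machines = [m.text for m in arch.findall("machine")]
        print(arch.get("name"), machines[:3], "...")
    conn.close()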
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.290 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.310 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
[libvirt <domainCapabilities> XML elided: element tags stripped during extraction, as above. Recoverable content: path /usr/libexec/qemu-kvm, domain kvm, machine pc-q35-rhel9.8.0, arch i686; os loader /usr/share/OVMF/OVMF_CODE.secboot.fd with loader types rom and pflash, readonly yes/no, secure no; host-model CPU EPYC-Rome, vendor AMD; custom-mode CPU model list with per-model usability covering 486 (v1), Broadwell (-IBRS, -noTSX, -noTSX-IBRS, v1-v4), Cascadelake-Server (-noTSX, v1-v5), Conroe (v1), Cooperlake (v1-v2), Denverton (v1-v3), Dhyana (v1-v2), EPYC (v1-v4), EPYC-Genoa (v1), EPYC-IBPB, EPYC-Milan (v1-v2), EPYC-Rome (v1-v4), GraniteRapids (v1-v2), Haswell (-IBRS, -noTSX, -noTSX-IBRS, v1-v4), Icelake-Server (-noTSX, v1-v7), IvyBridge (-IBRS, v1-v2), KnightsMill (v1), Nehalem (-IBRS, v1-v2), Opteron_G1 through Opteron_G5 (v1 each), Penryn (v1), SandyBridge (-IBRS, v1-v2), SapphireRapids (v1-v3), SierraForest (v1); the dump continues beyond the end of this capture.]
localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-noTSX-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-noTSX-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v4 Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v5 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 
localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Westmere Dec 5 04:36:54 localhost nova_compute[228248]: Westmere-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Westmere-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Westmere-v2 Dec 5 04:36:54 localhost nova_compute[228248]: athlon Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: athlon-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: core2duo Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: core2duo-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: coreduo Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: coreduo-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: kvm32 Dec 5 04:36:54 localhost nova_compute[228248]: kvm32-v1 Dec 5 04:36:54 localhost nova_compute[228248]: kvm64 Dec 5 04:36:54 localhost nova_compute[228248]: kvm64-v1 Dec 5 04:36:54 localhost nova_compute[228248]: n270 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: n270-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: pentium Dec 5 04:36:54 localhost nova_compute[228248]: pentium-v1 Dec 5 04:36:54 localhost nova_compute[228248]: pentium2 Dec 5 04:36:54 localhost nova_compute[228248]: pentium2-v1 Dec 5 04:36:54 localhost nova_compute[228248]: pentium3 Dec 5 04:36:54 localhost nova_compute[228248]: pentium3-v1 Dec 5 04:36:54 localhost nova_compute[228248]: phenom Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: phenom-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: qemu32 Dec 5 04:36:54 localhost nova_compute[228248]: qemu32-v1 Dec 5 04:36:54 localhost nova_compute[228248]: qemu64 Dec 5 04:36:54 localhost nova_compute[228248]: qemu64-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: file Dec 5 04:36:54 localhost nova_compute[228248]: anonymous Dec 5 
Dec 5 04:36:54 localhost nova_compute[228248]: memoryBacking sourceType: file anonymous memfd
Dec 5 04:36:54 localhost nova_compute[228248]: disk diskDevice: disk cdrom floppy lun; bus: fdc scsi virtio usb sata; model: virtio virtio-transitional virtio-non-transitional
Dec 5 04:36:54 localhost nova_compute[228248]: graphics type: vnc egl-headless dbus
Dec 5 04:36:54 localhost nova_compute[228248]: hostdev mode: subsystem; startupPolicy: default mandatory requisite optional; subsysType: usb pci scsi
Dec 5 04:36:54 localhost nova_compute[228248]: rng model: virtio virtio-transitional virtio-non-transitional; backendModel: random egd builtin
Dec 5 04:36:54 localhost nova_compute[228248]: filesystem driverType: path handle virtiofs
Dec 5 04:36:54 localhost nova_compute[228248]: tpm model: tpm-tis tpm-crb; backendModel: emulator external; backendVersion: 2.0
Dec 5 04:36:54 localhost nova_compute[228248]: redirdev bus: usb; channel type: pty unix; crypto: qemu builtin; interface backendType: default passt; panic model: isa hyperv
Dec 5 04:36:54 localhost nova_compute[228248]: character device types: null vc pty dev file pipe stdio udp tcp unix qemu-vdagent dbus
Dec 5 04:36:54 localhost nova_compute[228248]: hyperv features: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input (other recovered values: 4095 on off off Linux KVM Hv)
Dec 5 04:36:54 localhost nova_compute[228248]: launchSecurity: tdx
Dec 5 04:36:54 localhost nova_compute[228248]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
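[editor's aside] The entry above and the one below are nova.virt.libvirt.host dumping the libvirt domainCapabilities document it fetched for one (emulator, arch, machine type, virt type) tuple. As a minimal sketch of how the same document can be fetched outside nova — assuming the libvirt-python bindings and a reachable qemu:///system socket; this is not nova's own code path — the XML the next entry logs for arch=i686 and machine_type=pc could be requested like this:

```python
# Minimal sketch: fetch a libvirt domainCapabilities document for the same
# (emulator, arch, machine, virttype) tuple the DEBUG entry below reports.
# Assumes the libvirt-python bindings and a reachable qemu:///system socket.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulator binary (path seen in the log)
        "i686",                   # arch
        "pc",                     # machine type
        "kvm",                    # virt type
        0,                        # flags (unused)
    )
    print(xml)  # same document the entry below dumps
finally:
    conn.close()
```

The equivalent CLI check is `virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine pc --virttype kvm`.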
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.315 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 5 04:36:54 localhost nova_compute[228248]: [domainCapabilities XML; element markup lost in capture, recovered text values grouped below]
Dec 5 04:36:54 localhost nova_compute[228248]: path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: i686
Dec 5 04:36:54 localhost nova_compute[228248]: loader value: /usr/share/OVMF/OVMF_CODE.secboot.fd; type: rom pflash; readonly: yes no; secure: no
Dec 5 04:36:54 localhost nova_compute[228248]: cpu mode toggles: on off; on off
Dec 5 04:36:54 localhost nova_compute[228248]: cpu mode=host-model: EPYC-Rome (vendor AMD)
Dec 5 04:36:54 localhost nova_compute[228248]: cpu mode=custom models: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa EPYC-Genoa-v1 EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4 GraniteRapids GraniteRapids-v1 GraniteRapids-v2 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX Icelake-Server-v1
localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v5 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v6 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v7 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: 
Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: IvyBridge Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: IvyBridge-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: IvyBridge-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: IvyBridge-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: KnightsMill Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: KnightsMill-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Nehalem Dec 5 04:36:54 localhost nova_compute[228248]: Nehalem-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Nehalem-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Nehalem-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G1 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G1-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G2 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G2-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G3 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G3-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G4-v1 Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G5 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G5-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Penryn Dec 5 04:36:54 localhost nova_compute[228248]: Penryn-v1 Dec 5 04:36:54 localhost nova_compute[228248]: SandyBridge Dec 5 04:36:54 localhost nova_compute[228248]: SandyBridge-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: SandyBridge-v1 Dec 5 04:36:54 localhost nova_compute[228248]: SandyBridge-v2 Dec 5 04:36:54 localhost nova_compute[228248]: SapphireRapids Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: SapphireRapids-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 
localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: SapphireRapids-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: SapphireRapids-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: SierraForest Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: SierraForest-v1 Dec 5 
04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-noTSX-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-noTSX-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v3 Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v5 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 
localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Westmere Dec 5 04:36:54 localhost nova_compute[228248]: Westmere-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Westmere-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Westmere-v2 Dec 5 04:36:54 localhost nova_compute[228248]: athlon Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: athlon-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: core2duo Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: core2duo-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: coreduo Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: coreduo-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: kvm32 Dec 5 04:36:54 localhost nova_compute[228248]: kvm32-v1 Dec 5 04:36:54 localhost nova_compute[228248]: kvm64 Dec 5 04:36:54 localhost nova_compute[228248]: kvm64-v1 Dec 5 04:36:54 localhost nova_compute[228248]: n270 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: n270-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: pentium Dec 5 04:36:54 localhost nova_compute[228248]: pentium-v1 Dec 5 04:36:54 localhost nova_compute[228248]: pentium2 Dec 5 04:36:54 localhost nova_compute[228248]: pentium2-v1 Dec 5 04:36:54 localhost nova_compute[228248]: pentium3 Dec 5 04:36:54 localhost nova_compute[228248]: pentium3-v1 Dec 5 04:36:54 localhost nova_compute[228248]: phenom Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: phenom-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
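Nova's [libvirt] CPU options are normally set to names drawn from exactly this list; a minimal nova.conf sketch with illustrative values (the model names are examples picked from the list above, not values taken from this host's actual configuration):

    [libvirt]
    # Expose a named model instead of the hypervisor default.
    cpu_mode = custom
    # Ordered preference list; every entry should appear among the
    # custom CPU models the host's domain capabilities report above.
    cpu_models = Cascadelake-Server,Haswell-noTSX-IBRS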
Dec 5 04:36:54 localhost nova_compute[228248]:   memory backing source types: file anonymous memfd
Dec 5 04:36:54 localhost nova_compute[228248]:   disk device types: disk cdrom floppy lun; buses: ide fdc scsi virtio usb sata; models: virtio virtio-transitional virtio-non-transitional
Dec 5 04:36:54 localhost nova_compute[228248]:   graphics types: vnc egl-headless dbus
Dec 5 04:36:54 localhost nova_compute[228248]:   hostdev mode: subsystem; startupPolicy: default mandatory requisite optional; subsystem types: usb pci scsi; models: virtio virtio-transitional virtio-non-transitional
Dec 5 04:36:54 localhost nova_compute[228248]:   rng backend models: random egd builtin
Dec 5 04:36:54 localhost nova_compute[228248]:   filesystem driver types: path handle virtiofs
Dec 5 04:36:54 localhost nova_compute[228248]:   tpm models: tpm-tis tpm-crb; backends: emulator external; backend version: 2.0
Dec 5 04:36:54 localhost nova_compute[228248]:   redirdev bus: usb; channel types: pty unix; further values (labels lost): qemu builtin
Dec 5 04:36:54 localhost nova_compute[228248]:   interface backends: default passt; panic models: isa hyperv
Dec 5 04:36:54 localhost nova_compute[228248]:   char device types: null vc pty dev file pipe stdio udp tcp unix qemu-vdagent dbus
Dec 5 04:36:54 localhost nova_compute[228248]:   hyperv features: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input
Dec 5 04:36:54 localhost nova_compute[228248]:   further values (labels lost): 4095 on off off "Linux KVM Hv" tdx
Dec 5 04:36:54 localhost nova_compute[228248]:   _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.355 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
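The query behind these debug lines can be reproduced outside nova-compute; a minimal sketch using the libvirt-python bindings, assuming a local qemu:///system hypervisor and reusing the emulator, arch, machine type, and virt type values that appear in this log:

    import libvirt
    import xml.etree.ElementTree as ET

    # Read-only connection to the hypervisor nova-compute manages.
    conn = libvirt.openReadOnly('qemu:///system')

    # Same parameters nova logs here: emulator binary, arch, machine, virt type.
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'x86_64', 'q35', 'kvm', 0)
    conn.close()

    # The named guest CPU models sit under <cpu><mode name='custom'>.
    root = ET.fromstring(caps_xml)
    models = [m.text for m in root.findall("./cpu/mode[@name='custom']/model")]
    print(len(models), 'custom CPU models, e.g.', models[:3])

The equivalent one-off check from a shell is `virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm`, which prints the same XML document that nova dumps below.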
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.361 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 5 04:36:54 localhost nova_compute[228248]: [domain-capabilities XML for q35; element markup was lost in capture, recoverable text values condensed below]
Dec 5 04:36:54 localhost nova_compute[228248]:   path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-q35-rhel9.8.0; arch: x86_64
Dec 5 04:36:54 localhost nova_compute[228248]:   os firmware: efi; loader values: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd /usr/share/edk2/ovmf/OVMF_CODE.fd /usr/share/edk2/ovmf/OVMF.amdsev.fd /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd
Dec 5 04:36:54 localhost nova_compute[228248]:   loader types: rom pflash; readonly: yes no; secure: yes no; further values (labels lost): on off, on off
Dec 5 04:36:54 localhost nova_compute[228248]:   host CPU model: EPYC-Rome, vendor AMD
Dec 5 04:36:54 localhost nova_compute[228248]:   custom CPU models: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4
Dec 5 04:36:54 localhost nova_compute[228248]:   Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5
Dec 5 04:36:54 localhost nova_compute[228248]:   Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3
Dec 5 04:36:54 localhost nova_compute[228248]:   Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa EPYC-Genoa-v1
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-IBPB Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-Milan Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-Milan-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-Milan-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-Rome Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-Rome-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-Rome-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-Rome-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-Rome-v4 Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-v1 Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-v2 Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: GraniteRapids Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: GraniteRapids-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: GraniteRapids-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Haswell Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Haswell-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Haswell-noTSX Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Haswell-noTSX-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Haswell-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Haswell-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Haswell-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Haswell-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-noTSX Dec 5 04:36:54 
localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v5 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v6 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: 
Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server-v7 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: IvyBridge Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: IvyBridge-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: IvyBridge-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: IvyBridge-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: KnightsMill Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: KnightsMill-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Nehalem Dec 5 04:36:54 localhost nova_compute[228248]: Nehalem-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Nehalem-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Nehalem-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G1 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G1-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G2 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G2-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G3 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G3-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G4-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G5 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Opteron_G5-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Penryn Dec 5 04:36:54 localhost nova_compute[228248]: Penryn-v1 Dec 5 04:36:54 localhost nova_compute[228248]: SandyBridge Dec 5 04:36:54 localhost nova_compute[228248]: SandyBridge-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: SandyBridge-v1 Dec 5 04:36:54 localhost nova_compute[228248]: SandyBridge-v2 Dec 5 04:36:54 localhost nova_compute[228248]: SapphireRapids Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: SapphireRapids-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: SapphireRapids-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 
localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: SapphireRapids-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: SierraForest Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: SierraForest-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-noTSX-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 
04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-noTSX-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v1 Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v5 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 
localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Westmere Dec 5 04:36:54 localhost nova_compute[228248]: Westmere-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Westmere-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Westmere-v2 Dec 5 04:36:54 localhost nova_compute[228248]: athlon Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: athlon-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: core2duo Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: core2duo-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: coreduo Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: coreduo-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: kvm32 Dec 5 04:36:54 localhost nova_compute[228248]: kvm32-v1 Dec 5 04:36:54 localhost nova_compute[228248]: kvm64 Dec 5 04:36:54 localhost nova_compute[228248]: kvm64-v1 Dec 5 04:36:54 localhost nova_compute[228248]: n270 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: n270-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: pentium Dec 5 04:36:54 localhost nova_compute[228248]: pentium-v1 Dec 5 04:36:54 localhost nova_compute[228248]: pentium2 Dec 5 04:36:54 localhost nova_compute[228248]: pentium2-v1 Dec 5 04:36:54 localhost nova_compute[228248]: pentium3 Dec 5 04:36:54 localhost nova_compute[228248]: pentium3-v1 Dec 5 04:36:54 localhost nova_compute[228248]: phenom Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: phenom-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: qemu32 Dec 5 04:36:54 localhost nova_compute[228248]: qemu32-v1 Dec 5 04:36:54 localhost nova_compute[228248]: qemu64 Dec 5 04:36:54 localhost nova_compute[228248]: qemu64-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: file Dec 5 04:36:54 localhost nova_compute[228248]: anonymous Dec 5 04:36:54 localhost nova_compute[228248]: memfd Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: disk Dec 5 04:36:54 localhost nova_compute[228248]: cdrom Dec 5 04:36:54 localhost nova_compute[228248]: floppy Dec 5 04:36:54 localhost nova_compute[228248]: lun Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: fdc Dec 5 04:36:54 localhost nova_compute[228248]: scsi Dec 5 04:36:54 localhost nova_compute[228248]: virtio Dec 5 04:36:54 localhost nova_compute[228248]: usb Dec 5 04:36:54 localhost nova_compute[228248]: sata Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: virtio Dec 5 04:36:54 localhost nova_compute[228248]: virtio-transitional Dec 5 04:36:54 localhost nova_compute[228248]: virtio-non-transitional Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: vnc Dec 5 04:36:54 localhost nova_compute[228248]: egl-headless Dec 5 04:36:54 localhost nova_compute[228248]: dbus Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
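[editor's note] The model names above are libvirt's domainCapabilities report for this host, which nova_compute logs when it probes the hypervisor. A minimal sketch of retrieving the same list, assuming the libvirt-python bindings and a local qemu:///system socket; this is an illustration, not nova's own code:

    import xml.etree.ElementTree as ET
    import libvirt  # libvirt-python bindings

    # A read-only connection is enough for capability queries.
    conn = libvirt.openReadOnly('qemu:///system')

    # virConnectGetDomainCapabilities: emulator binary, arch, machine, virt type, flags.
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    conn.close()

    # Named CPU models sit under <cpu><mode name='custom'> as <model> elements,
    # each carrying a usable='yes|no|unknown' attribute.
    root = ET.fromstring(caps_xml)
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        print(model.text, model.get('usable'))

The same document can be inspected by hand with `virsh domcapabilities` on the compute node.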
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: subsystem Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: default Dec 5 04:36:54 localhost nova_compute[228248]: mandatory Dec 5 04:36:54 localhost nova_compute[228248]: requisite Dec 5 04:36:54 localhost nova_compute[228248]: optional Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: usb Dec 5 04:36:54 localhost nova_compute[228248]: pci Dec 5 04:36:54 localhost nova_compute[228248]: scsi Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: virtio Dec 5 04:36:54 localhost nova_compute[228248]: virtio-transitional Dec 5 04:36:54 localhost nova_compute[228248]: virtio-non-transitional Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: random Dec 5 04:36:54 localhost nova_compute[228248]: egd Dec 5 04:36:54 localhost nova_compute[228248]: builtin Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: path Dec 5 04:36:54 localhost nova_compute[228248]: handle Dec 5 04:36:54 localhost nova_compute[228248]: virtiofs Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: tpm-tis Dec 5 04:36:54 localhost nova_compute[228248]: tpm-crb Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: emulator Dec 5 04:36:54 localhost nova_compute[228248]: external Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: 2.0 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: usb Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: pty Dec 5 04:36:54 localhost nova_compute[228248]: unix Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: qemu Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: builtin Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: default Dec 5 04:36:54 localhost nova_compute[228248]: passt Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: isa Dec 5 04:36:54 localhost nova_compute[228248]: hyperv Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: null Dec 5 04:36:54 localhost nova_compute[228248]: vc Dec 5 04:36:54 localhost nova_compute[228248]: pty Dec 5 04:36:54 localhost nova_compute[228248]: dev Dec 5 04:36:54 localhost nova_compute[228248]: file Dec 5 04:36:54 localhost nova_compute[228248]: pipe Dec 5 04:36:54 localhost nova_compute[228248]: stdio Dec 5 04:36:54 localhost nova_compute[228248]: udp Dec 5 04:36:54 localhost nova_compute[228248]: tcp Dec 5 04:36:54 localhost nova_compute[228248]: unix Dec 5 04:36:54 localhost nova_compute[228248]: qemu-vdagent Dec 5 04:36:54 localhost nova_compute[228248]: dbus Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: relaxed Dec 5 04:36:54 localhost nova_compute[228248]: vapic Dec 5 04:36:54 localhost nova_compute[228248]: spinlocks Dec 5 04:36:54 localhost nova_compute[228248]: vpindex Dec 5 04:36:54 localhost nova_compute[228248]: runtime Dec 5 04:36:54 localhost nova_compute[228248]: synic Dec 5 04:36:54 localhost nova_compute[228248]: stimer Dec 5 04:36:54 localhost nova_compute[228248]: reset Dec 5 04:36:54 localhost nova_compute[228248]: vendor_id Dec 5 04:36:54 localhost nova_compute[228248]: frequencies Dec 5 04:36:54 localhost nova_compute[228248]: reenlightenment Dec 5 04:36:54 localhost nova_compute[228248]: tlbflush Dec 5 04:36:54 localhost nova_compute[228248]: ipi Dec 5 04:36:54 localhost nova_compute[228248]: avic Dec 5 04:36:54 localhost nova_compute[228248]: emsr_bitmap Dec 5 04:36:54 localhost nova_compute[228248]: xmm_input Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: 4095 Dec 5 04:36:54 localhost nova_compute[228248]: on Dec 5 04:36:54 localhost nova_compute[228248]: off Dec 5 04:36:54 localhost nova_compute[228248]: off Dec 5 04:36:54 localhost nova_compute[228248]: Linux KVM Hv Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: tdx Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.413 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: /usr/libexec/qemu-kvm Dec 5 04:36:54 localhost nova_compute[228248]: kvm Dec 5 04:36:54 localhost nova_compute[228248]: pc-i440fx-rhel7.6.0 Dec 5 04:36:54 localhost nova_compute[228248]: x86_64 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: /usr/share/OVMF/OVMF_CODE.secboot.fd Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: rom Dec 5 04:36:54 localhost nova_compute[228248]: pflash Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: yes Dec 5 04:36:54 localhost nova_compute[228248]: no Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: no Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: on Dec 5 04:36:54 localhost nova_compute[228248]: off Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: on Dec 5 04:36:54 localhost nova_compute[228248]: off Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: EPYC-Rome Dec 5 04:36:54 localhost nova_compute[228248]: AMD Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: 
Dec 5 04:36:54 localhost nova_compute[228248]: custom CPU models (usability attributes lost): 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4
Dec 5 04:36:54 localhost nova_compute[228248]: Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2
Dec 5 04:36:54 localhost nova_compute[228248]: Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2
Dec 5 04:36:54 localhost nova_compute[228248]: EPYC EPYC-Genoa EPYC-Genoa-v1 EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4
Dec 5 04:36:54 localhost nova_compute[228248]: GraniteRapids GraniteRapids-v1 GraniteRapids-v2
Dec 5 04:36:54 localhost nova_compute[228248]: Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4
Dec 5 04:36:54 localhost nova_compute[228248]: Icelake-Server Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7
Dec 5 04:36:54 localhost nova_compute[228248]: IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1
Dec 5 04:36:54 localhost nova_compute[228248]: Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1
Dec 5 04:36:54 localhost nova_compute[228248]: Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2
Dec 5 04:36:54 localhost nova_compute[228248]: SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1
Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4
Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS [capture truncated]
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Skylake-Server-v5 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 
04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v2 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v3 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Snowridge-v4 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Westmere Dec 5 04:36:54 localhost nova_compute[228248]: Westmere-IBRS Dec 5 04:36:54 localhost nova_compute[228248]: Westmere-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Westmere-v2 Dec 5 04:36:54 localhost nova_compute[228248]: athlon Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: athlon-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: core2duo Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: core2duo-v1 Dec 5 04:36:54 localhost 
nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: coreduo Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: coreduo-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: kvm32 Dec 5 04:36:54 localhost nova_compute[228248]: kvm32-v1 Dec 5 04:36:54 localhost nova_compute[228248]: kvm64 Dec 5 04:36:54 localhost nova_compute[228248]: kvm64-v1 Dec 5 04:36:54 localhost nova_compute[228248]: n270 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: n270-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: pentium Dec 5 04:36:54 localhost nova_compute[228248]: pentium-v1 Dec 5 04:36:54 localhost nova_compute[228248]: pentium2 Dec 5 04:36:54 localhost nova_compute[228248]: pentium2-v1 Dec 5 04:36:54 localhost nova_compute[228248]: pentium3 Dec 5 04:36:54 localhost nova_compute[228248]: pentium3-v1 Dec 5 04:36:54 localhost nova_compute[228248]: phenom Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: phenom-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: qemu32 Dec 5 04:36:54 localhost nova_compute[228248]: qemu32-v1 Dec 5 04:36:54 localhost nova_compute[228248]: qemu64 Dec 5 04:36:54 localhost nova_compute[228248]: qemu64-v1 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: file Dec 5 04:36:54 localhost nova_compute[228248]: anonymous Dec 5 04:36:54 localhost nova_compute[228248]: memfd Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: disk Dec 5 04:36:54 localhost nova_compute[228248]: cdrom Dec 5 04:36:54 localhost nova_compute[228248]: floppy Dec 5 04:36:54 localhost nova_compute[228248]: lun Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: ide Dec 5 04:36:54 localhost nova_compute[228248]: fdc Dec 5 04:36:54 localhost nova_compute[228248]: scsi Dec 5 04:36:54 localhost nova_compute[228248]: virtio Dec 5 04:36:54 localhost nova_compute[228248]: usb Dec 5 04:36:54 localhost nova_compute[228248]: sata Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: virtio Dec 5 04:36:54 localhost 
nova_compute[228248]: virtio-transitional Dec 5 04:36:54 localhost nova_compute[228248]: virtio-non-transitional Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: vnc Dec 5 04:36:54 localhost nova_compute[228248]: egl-headless Dec 5 04:36:54 localhost nova_compute[228248]: dbus Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: subsystem Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: default Dec 5 04:36:54 localhost nova_compute[228248]: mandatory Dec 5 04:36:54 localhost nova_compute[228248]: requisite Dec 5 04:36:54 localhost nova_compute[228248]: optional Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: usb Dec 5 04:36:54 localhost nova_compute[228248]: pci Dec 5 04:36:54 localhost nova_compute[228248]: scsi Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: virtio Dec 5 04:36:54 localhost nova_compute[228248]: virtio-transitional Dec 5 04:36:54 localhost nova_compute[228248]: virtio-non-transitional Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: random Dec 5 04:36:54 localhost nova_compute[228248]: egd Dec 5 04:36:54 localhost nova_compute[228248]: builtin Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: path Dec 5 04:36:54 localhost nova_compute[228248]: handle Dec 5 04:36:54 localhost nova_compute[228248]: virtiofs Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: tpm-tis Dec 5 04:36:54 localhost nova_compute[228248]: tpm-crb Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: emulator Dec 5 04:36:54 localhost nova_compute[228248]: external Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: 2.0 Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: usb Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: pty Dec 5 04:36:54 localhost 
nova_compute[228248]: unix Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: qemu Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: builtin Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: default Dec 5 04:36:54 localhost nova_compute[228248]: passt Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: isa Dec 5 04:36:54 localhost nova_compute[228248]: hyperv Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: null Dec 5 04:36:54 localhost nova_compute[228248]: vc Dec 5 04:36:54 localhost nova_compute[228248]: pty Dec 5 04:36:54 localhost nova_compute[228248]: dev Dec 5 04:36:54 localhost nova_compute[228248]: file Dec 5 04:36:54 localhost nova_compute[228248]: pipe Dec 5 04:36:54 localhost nova_compute[228248]: stdio Dec 5 04:36:54 localhost nova_compute[228248]: udp Dec 5 04:36:54 localhost nova_compute[228248]: tcp Dec 5 04:36:54 localhost nova_compute[228248]: unix Dec 5 04:36:54 localhost nova_compute[228248]: qemu-vdagent Dec 5 04:36:54 localhost nova_compute[228248]: dbus Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: relaxed Dec 5 04:36:54 localhost nova_compute[228248]: vapic Dec 5 04:36:54 localhost nova_compute[228248]: spinlocks Dec 5 04:36:54 localhost nova_compute[228248]: vpindex Dec 5 04:36:54 localhost nova_compute[228248]: runtime Dec 5 04:36:54 localhost nova_compute[228248]: synic Dec 5 04:36:54 localhost nova_compute[228248]: stimer Dec 5 04:36:54 localhost nova_compute[228248]: reset Dec 5 04:36:54 localhost nova_compute[228248]: vendor_id Dec 5 04:36:54 localhost nova_compute[228248]: frequencies Dec 5 04:36:54 localhost nova_compute[228248]: reenlightenment Dec 5 04:36:54 localhost nova_compute[228248]: tlbflush Dec 5 04:36:54 localhost nova_compute[228248]: ipi Dec 5 04:36:54 localhost nova_compute[228248]: avic Dec 5 04:36:54 localhost nova_compute[228248]: emsr_bitmap Dec 5 04:36:54 localhost nova_compute[228248]: xmm_input Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: 
Dec 5 04:36:54 localhost nova_compute[228248]: 4095 Dec 5 04:36:54 localhost nova_compute[228248]: on Dec 5 04:36:54 localhost nova_compute[228248]: off Dec 5 04:36:54 localhost nova_compute[228248]: off Dec 5 04:36:54 localhost nova_compute[228248]: Linux KVM Hv Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: tdx Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: Dec 5 04:36:54 localhost nova_compute[228248]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.467 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.468 228252 INFO nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Secure Boot support detected#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.470 228252 INFO nova.virt.libvirt.driver [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.470 228252 INFO nova.virt.libvirt.driver [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.482 228252 DEBUG nova.virt.libvirt.driver [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.511 228252 INFO nova.virt.node [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Determined node identity 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from /var/lib/nova/compute_id#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.526 228252 DEBUG nova.compute.manager [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Verified node 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 matches my host np0005546419.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.561 228252 DEBUG nova.compute.manager [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.564 228252 DEBUG nova.virt.libvirt.vif [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] vif_type=ovs 
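The elided record above is the libvirt domain-capabilities document that nova fetches once at startup (the _get_domain_capabilities call named at the end of the record). As a minimal stand-alone sketch, assuming libvirt-python is installed and a local qemu:///system socket is reachable, the same document can be retrieved and the CPU model list pulled out of it:

import xml.etree.ElementTree as ET

import libvirt  # libvirt-python binding

conn = libvirt.open('qemu:///system')
# Same underlying libvirt call nova wraps: (emulator, arch, machine, virttype, flags)
caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
conn.close()

root = ET.fromstring(caps_xml)
# Named models live under <cpu><mode name='custom'><model>...</model></mode>
models = [m.text for m in root.findall("./cpu/mode[@name='custom']/model")]
print(len(models), 'named CPU models, e.g.', models[:3])

The SapphireRapids/Skylake/Snowridge names preserved in the elided block are exactly these model text nodes.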
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.564 228252 DEBUG nova.virt.libvirt.vif [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T08:35:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=,hidden=False,host='np0005546419.localdomain',hostname='test',id=2,image_ref='e7469c27-9043-4bd0-b0a4-5b489dcf3ae2',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-05T08:35:39Z,launched_on='np0005546419.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=,node='np0005546419.localdomain',numa_topology=None,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='e6ca8a92050741d3a93772e6c1b0d704',ramdisk_id='',reservation_id='r-99d0dddi',resources=,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,trusted_certs=,updated_at=2025-12-05T08:35:39Z,user_data=None,user_id='52d0a54dc45b4c4caaba721ba3202150',uuid=96a47a1c-57c7-4bb1-aecc-33db976db8c7,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
"c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.565 228252 DEBUG nova.network.os_vif_util [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.566 228252 DEBUG os_vif [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.635 228252 DEBUG ovsdbapp.backend.ovs_idl [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.635 228252 DEBUG ovsdbapp.backend.ovs_idl [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.635 228252 DEBUG ovsdbapp.backend.ovs_idl [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.636 228252 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.636 228252 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.636 228252 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.637 228252 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.638 228252 DEBUG ovsdbapp.backend.ovs_idl.vlog [None 
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.638 228252 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.641 228252 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.651 228252 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.651 228252 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.652 228252 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 5 04:36:54 localhost nova_compute[228248]: 2025-12-05 09:36:54.652 228252 INFO oslo.privsep.daemon [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp837og3wg/privsep.sock']#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.264 228252 INFO oslo.privsep.daemon [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.159 228657 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.164 228657 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.167 228657 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.167 228657 INFO oslo.privsep.daemon [-] privsep daemon running as pid 228657#033[00m
Dec 5 04:36:55 localhost python3.9[228384]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.550 228252 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.551 228252 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2f95d81-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.551 228252 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2f95d81-23, col_values=(('external_ids', {'iface-id': 'c2f95d81-2317-46b9-8146-596eac8f9acb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:e6:3a', 'vm-uuid': '96a47a1c-57c7-4bb1-aecc-33db976db8c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.553 228252 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.553 228252 INFO os_vif [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23')#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.554 228252 DEBUG nova.compute.manager [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.558 228252 DEBUG nova.compute.manager [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.559 228252 INFO nova.compute.manager [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.616 228252 INFO nova.service [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Updating service version for nova-compute on np0005546419.localdomain from 57 to 66#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.617 228252 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.650 228252 DEBUG oslo_concurrency.lockutils [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.651 228252 DEBUG oslo_concurrency.lockutils [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
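The AddPortCommand/DbSetCommand pair above is the idempotent ovsdbapp transaction that wires the tap device into br-int; because the port and its external_ids already matched, both commits report "Transaction caused no change". A stand-alone sketch of the same transaction against the ovsdb endpoint from the log (tcp:127.0.0.1:6640), assuming the ovsdbapp package is available:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

external_ids = {'iface-id': 'c2f95d81-2317-46b9-8146-596eac8f9acb',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:04:e6:3a',
                'vm-uuid': '96a47a1c-57c7-4bb1-aecc-33db976db8c7'}

with api.transaction(check_error=True) as txn:
    # may_exist=True turns the add into a no-op when the port is already
    # wired, which is the "caused no change" outcome seen in the log
    txn.add(api.add_port('br-int', 'tapc2f95d81-23', may_exist=True))
    txn.add(api.db_set('Interface', 'tapc2f95d81-23',
                       ('external_ids', external_ids)))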
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.651 228252 DEBUG oslo_concurrency.lockutils [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.651 228252 DEBUG nova.compute.resource_tracker [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 5 04:36:55 localhost nova_compute[228248]: 2025-12-05 09:36:55.652 228252 DEBUG oslo_concurrency.processutils [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.084 228252 DEBUG oslo_concurrency.processutils [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.172 228252 DEBUG nova.virt.libvirt.driver [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.173 228252 DEBUG nova.virt.libvirt.driver [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 04:36:56 localhost systemd[1]: Started libvirt nodedev daemon.
Dec 5 04:36:56 localhost python3.9[228790]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 5 04:36:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49071 DF PROTO=TCP SPT=54522 DPT=9102 SEQ=2372512713 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC500C450000000001030307)
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.536 228252 WARNING nova.virt.libvirt.driver [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.538 228252 DEBUG nova.compute.resource_tracker [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12929MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.539 228252 DEBUG oslo_concurrency.lockutils [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.539 228252 DEBUG oslo_concurrency.lockutils [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
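The resource-tracker audit running here feeds the Placement inventory reported a few records below (15738MB RAM with 512MB reserved, 8 vCPUs at a 16.0 allocation ratio, 41GB disk with 1GB reserved). A worked check of the capacity Placement will actually schedule against, using its standard capacity formula:

# capacity = (total - reserved) * allocation_ratio, per resource class
inventory = {
    'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
    'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 41,    'reserved': 1,   'allocation_ratio': 1.0},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: schedulable capacity = {capacity:g}")
# -> VCPU: 128, MEMORY_MB: 15226, DISK_GB: 40

This is why a host with 8 physical vCPUs can carry far more than 8 guest vCPUs: the 16.0 overcommit ratio yields 128 schedulable VCPU units.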
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.651 228252 DEBUG nova.compute.resource_tracker [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.651 228252 DEBUG nova.compute.resource_tracker [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.652 228252 DEBUG nova.compute.resource_tracker [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.699 228252 DEBUG nova.scheduler.client.report [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Refreshing inventories for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.753 228252 DEBUG nova.scheduler.client.report [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Updating ProviderTree inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.753 228252 DEBUG nova.compute.provider_tree [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.776 228252 DEBUG nova.scheduler.client.report [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Refreshing aggregate associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.797 228252 DEBUG nova.scheduler.client.report [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Refreshing trait associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, traits: HW_CPU_X86_SSE41,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,HW_CPU_X86_SSSE3,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 5 04:36:56 localhost nova_compute[228248]: 2025-12-05 09:36:56.832 228252 DEBUG oslo_concurrency.processutils [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.263 228252 DEBUG oslo_concurrency.processutils [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.270 228252 DEBUG nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 5 04:36:57 localhost nova_compute[228248]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.270 228252 INFO nova.virt.libvirt.host [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.272 228252 DEBUG nova.compute.provider_tree [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
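The SEV probe above is a plain sysfs read: nova's _kernel_supports_amd_sev() inspects /sys/module/kvm_amd/parameters/sev, which on this host contains "N" (journald splits the record on the embedded newline, hence the stray "]"), giving "kernel doesn't support AMD SEV". A minimal stand-alone equivalent; the accepted truthy spellings are an assumption based on the usual kernel bool formats:

from pathlib import Path

def kernel_supports_amd_sev(param=Path('/sys/module/kvm_amd/parameters/sev')):
    # Missing file => kvm_amd not loaded or too old; 'Y'/'1' => SEV enabled
    if not param.exists():
        return False
    return param.read_text().strip() in ('Y', '1')

print(kernel_supports_amd_sev())  # False on this host ([N] in the log)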
Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.273 228252 DEBUG nova.virt.libvirt.driver [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.338 228252 DEBUG nova.scheduler.client.report [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Updated inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.339 228252 DEBUG nova.compute.provider_tree [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Updating resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.339 228252 DEBUG nova.compute.provider_tree [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.419 228252 DEBUG nova.compute.provider_tree [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Updating resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 generation from 4 to 5 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.447 228252 DEBUG nova.compute.resource_tracker [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.448 228252 DEBUG oslo_concurrency.lockutils [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.448 228252 DEBUG nova.service [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Dec 5 04:36:57 localhost nova_compute[228248]: 2025-12-05 09:36:57.491 228252 DEBUG nova.servicegroup.drivers.db [None req-c31961f1-3d9b-46f6-afc8-72491dbbafd1 - - - - - -] DB_Driver: join new ServiceGroup member np0005546419.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Dec 5 04:36:58 localhost python3.9[228943]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:36:59 localhost python3.9[229053]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 
5 04:36:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46259 DF PROTO=TCP SPT=40266 DPT=9100 SEQ=789373680 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5018450000000001030307) Dec 5 04:36:59 localhost systemd-journald[47252]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 121.3 (404 of 333 items), suggesting rotation. Dec 5 04:36:59 localhost systemd-journald[47252]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 5 04:36:59 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 04:36:59 localhost nova_compute[228248]: 2025-12-05 09:36:59.640 228252 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:00 localhost python3.9[229186]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:37:00 localhost systemd[1]: Stopping nova_compute container... Dec 5 04:37:00 localhost nova_compute[228248]: 2025-12-05 09:37:00.568 228252 DEBUG oslo_privsep.comm [-] EOF on privsep read channel _reader_main /usr/lib/python3.9/site-packages/oslo_privsep/comm.py:170#033[00m Dec 5 04:37:00 localhost nova_compute[228248]: 2025-12-05 09:37:00.620 228252 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:01 localhost nova_compute[228248]: 2025-12-05 09:37:01.493 228252 DEBUG oslo_service.periodic_task [None req-8a42172f-e6bf-4663-98c3-4f6e406b9b6d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:37:01 localhost nova_compute[228248]: 2025-12-05 09:37:01.514 228252 DEBUG nova.compute.manager [None req-8a42172f-e6bf-4663-98c3-4f6e406b9b6d - - - - - -] Triggering sync for uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 5 04:37:01 localhost nova_compute[228248]: 2025-12-05 09:37:01.515 228252 DEBUG oslo_concurrency.lockutils [None req-8a42172f-e6bf-4663-98c3-4f6e406b9b6d - - - - - -] Acquiring lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:37:01 localhost nova_compute[228248]: 2025-12-05 09:37:01.515 228252 DEBUG oslo_concurrency.lockutils [None req-8a42172f-e6bf-4663-98c3-4f6e406b9b6d - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:37:01 localhost nova_compute[228248]: 2025-12-05 09:37:01.516 228252 DEBUG oslo_service.periodic_task [None 
req-8a42172f-e6bf-4663-98c3-4f6e406b9b6d - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:37:01 localhost nova_compute[228248]: 2025-12-05 09:37:01.574 228252 DEBUG oslo_concurrency.lockutils [None req-8a42172f-e6bf-4663-98c3-4f6e406b9b6d - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:37:01 localhost nova_compute[228248]: 2025-12-05 09:37:01.608 228252 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m Dec 5 04:37:01 localhost nova_compute[228248]: 2025-12-05 09:37:01.611 228252 DEBUG oslo_concurrency.lockutils [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:37:01 localhost nova_compute[228248]: 2025-12-05 09:37:01.612 228252 DEBUG oslo_concurrency.lockutils [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:37:01 localhost nova_compute[228248]: 2025-12-05 09:37:01.612 228252 DEBUG oslo_concurrency.lockutils [None req-fbd947e5-2b68-44d9-bebe-639935a70282 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:37:02 localhost systemd[1]: libpod-ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06.scope: Deactivated successfully. Dec 5 04:37:02 localhost journal[202456]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, ) Dec 5 04:37:02 localhost journal[202456]: hostname: np0005546419.localdomain Dec 5 04:37:02 localhost journal[202456]: End of file while reading data: Input/output error Dec 5 04:37:02 localhost systemd[1]: libpod-ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06.scope: Consumed 4.759s CPU time. 
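
[Annotation] The AMD SEV probe logged at 09:36:57 above works by reading a single sysfs file: nova's libvirt driver inspects /sys/module/kvm_amd/parameters/sev, and because this host's file contains "N", it logs "kernel doesn't support AMD SEV". The file's trailing newline is also why the record prints across two lines ("contains [N" followed by "]"). A minimal sketch of that check, assuming only the sysfs path shown in the log; this is an approximation, not nova's actual _kernel_supports_amd_sev code:

    from pathlib import Path

    def kernel_supports_amd_sev(param="/sys/module/kvm_amd/parameters/sev"):
        # The log above shows this file containing "N" on this host, so the
        # probe reports no SEV support; "1" or "Y" would indicate support.
        try:
            return Path(param).read_text().strip().lower() in ("1", "y")
        except OSError:
            return False  # file absent: kvm_amd not loaded / not an AMD host
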
Dec 5 04:37:02 localhost podman[229190]: 2025-12-05 09:37:02.011704864 +0000 UTC m=+1.514998400 container died ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3) Dec 5 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06-userdata-shm.mount: Deactivated successfully. Dec 5 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af-merged.mount: Deactivated successfully. 
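
[Annotation] The provider generation stepping 3 -> 4 -> 5 in the report-client entries above is placement's optimistic-concurrency scheme: every inventory or trait update must echo the generation it was computed against, and placement bumps it on success while rejecting stale writes. A sketch of the underlying REST call; the endpoint, token handling, and function name here are assumptions for illustration (nova goes through its scheduler report client rather than raw HTTP):

    import requests  # illustrative direct REST call, not nova's client

    PLACEMENT = "http://placement.example:8778"        # assumed endpoint
    PROVIDER = "6764eb33-a0ac-428c-a232-bb5bf7a96ee3"  # uuid from the log

    def put_inventory(token, generation, inventories):
        # Placement accepts the PUT only if resource_provider_generation
        # matches its current value; on success it returns the bumped one.
        resp = requests.put(
            f"{PLACEMENT}/resource_providers/{PROVIDER}/inventories",
            headers={"X-Auth-Token": token},
            json={"resource_provider_generation": generation,
                  "inventories": inventories},
        )
        resp.raise_for_status()  # a stale generation surfaces as HTTP 409
        return resp.json()["resource_provider_generation"]

    # e.g., with the inventory dict from the log:
    # put_inventory(tok, 3, {"MEMORY_MB": {"total": 15738, "reserved": 512,
    #     "min_unit": 1, "max_unit": 15738, "step_size": 1,
    #     "allocation_ratio": 1.0}, ...})

A retry on 409 simply refetches the provider, recomputes, and resubmits with the new generation, which is why the resource tracker holds the "compute_resources" lock around the whole update pass.
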
Dec 5 04:37:02 localhost podman[229190]: 2025-12-05 09:37:02.068477161 +0000 UTC m=+1.571770667 container cleanup ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=nova_compute) Dec 5 04:37:02 localhost podman[229190]: nova_compute Dec 5 04:37:02 localhost podman[229233]: error opening file `/run/crun/ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06/status`: No such file or directory Dec 5 04:37:02 localhost podman[229220]: 2025-12-05 09:37:02.174727759 +0000 UTC m=+0.074299497 container cleanup ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Dec 5 04:37:02 localhost podman[229220]: nova_compute Dec 5 04:37:02 localhost 
systemd[1]: edpm_nova_compute.service: Deactivated successfully. Dec 5 04:37:02 localhost systemd[1]: Stopped nova_compute container. Dec 5 04:37:02 localhost systemd[1]: Starting nova_compute container... Dec 5 04:37:02 localhost systemd[1]: Started libcrun container. Dec 5 04:37:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 5 04:37:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Dec 5 04:37:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 04:37:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 5 04:37:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 04:37:02 localhost podman[229235]: 2025-12-05 09:37:02.329800749 +0000 UTC m=+0.117375012 container init ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 5 04:37:02 localhost podman[229235]: 2025-12-05 09:37:02.340315902 +0000 UTC m=+0.127890165 container start ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 
'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Dec 5 04:37:02 localhost podman[229235]: nova_compute Dec 5 04:37:02 localhost nova_compute[229251]: + sudo -E kolla_set_configs Dec 5 04:37:02 localhost systemd[1]: Started nova_compute container. Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Validating config file Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Copying service configuration files Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Deleting /etc/nova/nova.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /etc/nova/nova.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf 
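
[Annotation] The INFO:__main__ lines above and below come from kolla_set_configs: it loads /var/lib/kolla/config_files/config.json and, under the COPY_ALWAYS strategy named at the top of the run, deletes each destination, re-copies the source, and fixes permissions, producing the Deleting/Copying/Setting-permission triplets per file. A rough sketch of that loop, assuming kolla's documented config.json layout; the real tool also handles globs, directory merges, and ownership changes:

    import json
    import os
    import shutil

    def copy_always(cfg="/var/lib/kolla/config_files/config.json"):
        with open(cfg) as f:
            spec = json.load(f)
        for item in spec.get("config_files", []):
            src, dest = item["source"], item["dest"]
            # Mirrors the three log lines emitted per file above:
            # "Deleting ...", "Copying ...", "Setting permission ...".
            if os.path.isdir(dest):
                shutil.rmtree(dest)
            elif os.path.lexists(dest):
                os.remove(dest)
            if os.path.isdir(src):
                shutil.copytree(src, dest)
            else:
                shutil.copy2(src, dest)
            if "perm" in item:
                os.chmod(dest, int(item["perm"], 8))  # e.g. "0600" -> 0o600
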
Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Deleting /etc/ceph Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Creating directory /etc/ceph Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /etc/ceph Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Deleting /usr/sbin/iscsiadm Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Writing out command to execute Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:37:02 localhost nova_compute[229251]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 5 04:37:02 localhost nova_compute[229251]: ++ cat /run_command Dec 5 04:37:02 localhost nova_compute[229251]: + CMD=nova-compute Dec 5 04:37:02 localhost nova_compute[229251]: + ARGS= Dec 5 04:37:02 localhost nova_compute[229251]: + sudo kolla_copy_cacerts Dec 5 04:37:02 localhost nova_compute[229251]: + [[ ! -n '' ]] Dec 5 04:37:02 localhost nova_compute[229251]: + . 
kolla_extend_start Dec 5 04:37:02 localhost nova_compute[229251]: Running command: 'nova-compute' Dec 5 04:37:02 localhost nova_compute[229251]: + echo 'Running command: '\''nova-compute'\''' Dec 5 04:37:02 localhost nova_compute[229251]: + umask 0022 Dec 5 04:37:02 localhost nova_compute[229251]: + exec nova-compute Dec 5 04:37:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:37:02 localhost podman[229280]: 2025-12-05 09:37:02.947500768 +0000 UTC m=+0.082631792 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 5 04:37:02 localhost podman[229280]: 2025-12-05 09:37:02.9586347 +0000 UTC m=+0.093765734 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true) Dec 5 04:37:02 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:37:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:37:03.882 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:37:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:37:03.882 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:37:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:37:03.883 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.125 229255 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.125 229255 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.126 229255 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.126 229255 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.239 229255 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.261 229255 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.261 229255 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Dec 5 04:37:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43900 DF PROTO=TCP SPT=58662 DPT=9105 SEQ=2822481205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC502AC50000000001030307) Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.656 229255 INFO nova.virt.driver [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.765 229255 INFO nova.compute.provider_config [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.773 229255 WARNING nova.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.774 229255 DEBUG oslo_concurrency.lockutils [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.774 229255 DEBUG oslo_concurrency.lockutils [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.774 229255 DEBUG oslo_concurrency.lockutils [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.774 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.774 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.775 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.775 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.775 229255 DEBUG 
oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.775 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.775 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.775 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.775 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.775 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.776 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.776 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.776 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.776 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.776 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.776 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.776 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] config_drive_format = iso9660 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.777 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.777 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.777 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] console_host = np0005546419.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.777 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.777 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.777 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.777 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.777 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.778 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.778 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.778 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 
'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.778 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.778 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.778 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.779 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.779 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.779 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.779 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.779 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.779 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.779 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.779 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] host = np0005546419.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.780 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.780 229255 DEBUG oslo_service.service [None 
req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.780 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.780 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.780 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.780 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.780 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.781 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.781 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.781 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.781 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.781 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.781 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.782 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] key = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.782 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.782 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.782 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.782 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.782 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.782 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.783 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.783 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.783 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.783 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.783 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.783 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602

At Dec 5 04:37:04 (application timestamps from 2025-12-05 09:37:04.783 onward), nova_compute[229251] (worker process 229255, request req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb) dumped its effective configuration at DEBUG level through oslo_config's log_opt_values, one record per option (cfg.py:2602 for [DEFAULT] options, cfg.py:2609 for grouped options). Values of options marked secret are masked as ****. The [DEFAULT] values:

logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s
long_rpc_timeout = 1800
max_concurrent_builds = 10
max_concurrent_live_migrations = 1
max_concurrent_snapshots = 5
max_local_block_devices = 3
max_logfile_count = 1
max_logfile_size_mb = 20
maximum_instance_delete_attempts = 5
metadata_listen = 0.0.0.0
metadata_listen_port = 8775
metadata_workers = None
migrate_max_retries = -1
mkisofs_cmd = /usr/bin/mkisofs
my_block_storage_ip = 192.168.122.106
my_ip = 192.168.122.106
network_allocate_retries = 0
non_inheritable_image_properties = ['cache_in_nova', 'bittorrent']
osapi_compute_listen = 0.0.0.0
osapi_compute_listen_port = 8774
osapi_compute_unique_server_name_scope =
osapi_compute_workers = None
password_length = 12
periodic_enable = True
periodic_fuzzy_delay = 60
pointer_model = usbtablet
preallocate_images = none
publish_errors = False
pybasedir = /usr/lib/python3.9/site-packages
ram_allocation_ratio = None
rate_limit_burst = 0
rate_limit_except_level = CRITICAL
rate_limit_interval = 0
reboot_timeout = 0
reclaim_instance_interval = 0
record = None
reimage_timeout_per_gb = 20
report_interval = 10
rescue_timeout = 0
reserved_host_cpus = 0
reserved_host_disk_mb = 0
reserved_host_memory_mb = 512
reserved_huge_pages = None
resize_confirm_window = 0
resize_fs_using_block_device = False
resume_guests_state_on_host_boot = False
rootwrap_config = /etc/nova/rootwrap.conf
rpc_response_timeout = 60
run_external_periodic_tasks = True
running_deleted_instance_action = reap
running_deleted_instance_poll_interval = 1800
running_deleted_instance_timeout = 0
scheduler_instance_sync_interval = 120
service_down_time = 60
servicegroup_driver = db
shelved_offload_time = 0
shelved_poll_interval = 3600
shutdown_timeout = 60
source_is_ipv6 = False
ssl_only = False
state_path = /var/lib/nova
sync_power_state_interval = 600
sync_power_state_pool_size = 1000
syslog_log_facility = LOG_USER
tempdir = None
timeout_nbd = 10
transport_url = ****
update_resources_interval = 0
use_cow_images = True
use_eventlog = False
use_journal = False
use_json = False
use_rootwrap_daemon = False
use_stderr = False
use_syslog = False
vcpu_pin_set = None
vif_plugging_is_fatal = True
vif_plugging_timeout = 300
virt_mkfs = []
volume_usage_poll_interval = 0
watch_log_file = False
web = /usr/share/spice-html5

Grouped options (cfg.py:2609), logged with their group prefix:

oslo_concurrency.disable_process_locking = False
oslo_concurrency.lock_path = /var/lib/nova/tmp
oslo_messaging_metrics.metrics_buffer_size = 1000
oslo_messaging_metrics.metrics_enabled = False
oslo_messaging_metrics.metrics_process_name =
oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock
oslo_messaging_metrics.metrics_thread_stop_timeout = 10
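This dump comes from a single mechanism: when an oslo.service-based process starts, it calls oslo.config's ConfigOpts.log_opt_values(), which walks every registered option and emits one DEBUG record per value, masking options declared secret. A minimal standalone sketch of that mechanism (values borrowed from the dump above for illustration; this is not Nova's actual startup code):

```python
import logging

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.IntOpt('long_rpc_timeout', default=1800),
    cfg.IntOpt('max_concurrent_builds', default=10),
    # secret=True is why the dump shows "transport_url = ****"
    cfg.StrOpt('transport_url', secret=True, default='rabbit://guest:guest@localhost/'),
])
CONF.register_opts(
    [cfg.StrOpt('lock_path', default='/var/lib/nova/tmp')],
    group='oslo_concurrency',
)

logging.basicConfig(level=logging.DEBUG)
CONF([], project='demo')  # parse an empty command line
# Emits one "name = value" DEBUG record per option, like the dump above;
# grouped options are printed with their dotted group prefix.
CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
```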
api.auth_strategy = keystone
api.compute_link_prefix = None
api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01
api.dhcp_domain =
api.enable_instance_password = True
api.glance_link_prefix = None
api.instance_list_cells_batch_fixed_size = 100
api.instance_list_cells_batch_strategy = distributed
api.instance_list_per_project_cells = False
api.list_records_by_skipping_down_cells = True
api.local_metadata_per_cell = False
api.max_limit = 1000
api.metadata_cache_expiration = 15
api.neutron_default_tenant_id = default
api.use_forwarded_for = False
api.use_neutron_default_nets = False
api.vendordata_dynamic_connect_timeout = 5
api.vendordata_dynamic_failure_fatal = False
api.vendordata_dynamic_read_timeout = 5
api.vendordata_dynamic_ssl_certfile =
api.vendordata_dynamic_targets = []
api.vendordata_jsonfile_path = None
api.vendordata_providers = ['StaticJSON']
cache.backend = oslo_cache.dict
cache.backend_argument = ****
cache.config_prefix = cache.oslo
cache.dead_timeout = 60.0
cache.debug_cache_backend = False
cache.enable_retry_client = False
cache.enable_socket_keepalive = False
cache.enabled = True
cache.expiration_time = 600
cache.hashclient_retry_attempts = 2
cache.hashclient_retry_delay = 1.0
cache.memcache_dead_retry = 300
cache.memcache_password =
cache.memcache_pool_connection_get_timeout = 10
cache.memcache_pool_flush_on_reconnect = False
cache.memcache_pool_maxsize = 10
cache.memcache_pool_unused_timeout = 60
cache.memcache_sasl_enabled = False
cache.memcache_servers = ['localhost:11211']
cache.memcache_socket_timeout = 1.0
cache.memcache_username =
cache.proxies = []
cache.retry_attempts = 2
cache.retry_delay = 0.0
cache.socket_keepalive_count = 1
cache.socket_keepalive_idle = 1
cache.socket_keepalive_interval = 1
cache.tls_allowed_ciphers = None
cache.tls_cafile = None
cache.tls_certfile = None
cache.tls_enabled = False
cache.tls_keyfile = None
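The cache.* block above is consumed through oslo.cache, which wraps dogpile.cache: backend and expiration_time decide where cached items live and for how long, and the memcache_* knobs only matter for memcached backends. A rough sketch of that wiring, assuming the in-memory oslo_cache.dict backend shown above (an illustration, not Nova's code; Nova does this internally):

```python
from oslo_cache import core as cache
from oslo_config import cfg

CONF = cfg.CONF
cache.configure(CONF)  # registers the [cache] option group
CONF([], project='demo')

# Mirror the values from the dump; in Nova they come from nova.conf.
CONF.set_override('backend', 'oslo_cache.dict', group='cache')
CONF.set_override('enabled', True, group='cache')
CONF.set_override('expiration_time', 600, group='cache')

region = cache.create_region()
cache.configure_cache_region(CONF, region)

region.set('some-key', 'some-value')  # kept for up to expiration_time seconds
assert region.get('some-key') == 'some-value'
```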
cinder.auth_section = None
cinder.auth_type = password
cinder.cafile = None
cinder.catalog_info = volumev3:cinderv3:internalURL
cinder.certfile = None
cinder.collect_timing = False
cinder.cross_az_attach = True
cinder.debug = False
cinder.endpoint_template = None
cinder.http_retries = 3
cinder.insecure = False
cinder.keyfile = None
cinder.os_region_name = None
cinder.split_loggers = False
cinder.timeout = None

compute.consecutive_build_service_disable_threshold = 10
compute.cpu_dedicated_set = None
compute.cpu_shared_set = None
compute.image_type_exclude_list = []
compute.live_migration_wait_for_vif_plug = True
compute.max_concurrent_disk_ops = 0
compute.max_disk_devices_to_attach = -1
compute.packing_host_numa_cells_allocation_strategy = False
compute.provider_config_location = /etc/nova/provider_config/
compute.resource_provider_association_refresh = 300
compute.shutdown_retry_interval = 10
compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse']

conductor.workers = None

console.allowed_origins = []
console.ssl_ciphers = None
console.ssl_minimum_version = default

consoleauth.token_ttl = 600

cyborg.cafile = None
cyborg.certfile = None
cyborg.collect_timing = False
cyborg.connect_retries = None
cyborg.connect_retry_delay = None
cyborg.endpoint_override = None
cyborg.insecure = False
cyborg.keyfile = None
cyborg.max_version = None
cyborg.min_version = None
cyborg.region_name = None
cyborg.service_name = None
cyborg.service_type = accelerator
cyborg.split_loggers = False
cyborg.status_code_retries = None
cyborg.status_code_retry_delay = None
cyborg.timeout = None
cyborg.valid_interfaces = ['internal', 'public']
cyborg.version = None
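The cinder.* and cyborg.* blocks above (and the glance.* block further down) are standard keystoneauth1 session/adapter options: auth_type selects an auth plugin, while service_type, valid_interfaces and region_name drive endpoint discovery in the service catalog. Roughly, a client built from the cyborg values would look like the sketch below; the auth URL and credentials are placeholders, since they are masked or absent in this log:

```python
from keystoneauth1 import adapter, loading, session

# cinder.auth_type = password selects the 'password' auth plugin
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='https://keystone.example:5000/v3',  # placeholder, not in this log
    username='nova',
    password='PLACEHOLDER',                       # real credentials are never logged
    project_name='service',
    user_domain_name='Default',
    project_domain_name='Default',
)
sess = session.Session(auth=auth)

cyborg = adapter.Adapter(
    session=sess,
    service_type='accelerator',        # cyborg.service_type
    interface=['internal', 'public'],  # cyborg.valid_interfaces
    region_name=None,                  # cyborg.region_name
)
```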
database.backend = sqlalchemy
database.connection = ****
database.connection_debug = 0
database.connection_parameters =
database.connection_recycle_time = 3600
database.connection_trace = False
database.db_inc_retry_interval = True
database.db_max_retries = 20
database.db_max_retry_interval = 10
database.db_retry_interval = 1
database.max_overflow = 50
database.max_pool_size = 5
database.max_retries = 10
database.mysql_enable_ndb = False
database.mysql_sql_mode = TRADITIONAL
database.mysql_wsrep_sync_wait = None
database.pool_timeout = None
database.retry_interval = 10
database.slave_connection = ****
database.sqlite_synchronous = True
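The database.* values map onto oslo.db's SQLAlchemy engine settings: connection is the (masked) database URL, max_pool_size/max_overflow size the connection pool, and connection_recycle_time forces stale connections to be reopened. A hedged sketch of how such defaults are applied with oslo.db (the URL is a placeholder, since the log masks it as ****):

```python
from oslo_config import cfg
from oslo_db import options as db_options

CONF = cfg.CONF
db_options.set_defaults(
    CONF,
    connection='mysql+pymysql://nova:PLACEHOLDER@controller/nova',  # masked in the log
    max_pool_size=5,   # database.max_pool_size
    max_overflow=50,   # database.max_overflow
)
# oslo.db's enginefacade then builds the SQLAlchemy engine from these
# [database] options the first time the service touches the database.
```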
api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.813 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.813 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.813 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.813 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.814 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.814 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.814 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.814 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.814 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.814 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.814 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.814 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.815 
229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.815 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.815 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.815 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.815 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.815 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.815 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.816 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.816 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.816 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.816 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.816 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.816 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] glance.cafile = None log_opt_values 
glance.api_servers = None
glance.cafile = None
glance.certfile = None
glance.collect_timing = False
glance.connect_retries = None
glance.connect_retry_delay = None
glance.debug = False
glance.default_trusted_certificate_ids = []
glance.enable_certificate_validation = False
glance.enable_rbd_download = False
glance.endpoint_override = None
glance.insecure = False
glance.keyfile = None
glance.max_version = None
glance.min_version = None
glance.num_retries = 3
glance.rbd_ceph_conf =
glance.rbd_connect_timeout = 5
glance.rbd_pool =
glance.rbd_user =
glance.region_name = regionOne
glance.service_name = None
glance.service_type = image
glance.split_loggers = False
glance.status_code_retries = None
glance.status_code_retry_delay = None
glance.timeout = None
glance.valid_interfaces = ['internal']
glance.verify_glance_signatures = False
glance.version = None
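The [glance] group is the standard keystoneauth1 option bundle: cafile/certfile/keyfile/insecure/timeout configure the TLS session, while service_type, valid_interfaces, region_name and endpoint_override drive endpoint lookup in the Keystone catalog. A sketch of how such a group is typically wired up with keystoneauth1's public loading helpers (whether Nova uses exactly these calls is not visible in the log):

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    # Registers the same option names seen in the dump (cafile, insecure,
    # timeout, service_type, valid_interfaces, region_name, ...) under [glance].
    ks_loading.register_session_conf_options(CONF, 'glance')
    ks_loading.register_adapter_conf_options(CONF, 'glance')
    CONF([])

    session = ks_loading.load_session_from_conf_options(CONF, 'glance')
    adapter = ks_loading.load_adapter_from_conf_options(CONF, 'glance', session=session)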
guestfs.debug = False

hyperv.config_drive_cdrom = False
hyperv.config_drive_inject_password = False
hyperv.dynamic_memory_ratio = 1.0
hyperv.enable_instance_metrics_collection = False
hyperv.enable_remotefx = False
hyperv.instances_path_share =
hyperv.iscsi_initiator_list = []
hyperv.limit_cpu_features = False
hyperv.mounted_disk_query_retry_count = 10
hyperv.mounted_disk_query_retry_interval = 5
hyperv.power_state_check_timeframe = 60
hyperv.power_state_event_polling_interval = 2
hyperv.qemu_img_cmd = qemu-img.exe
hyperv.use_multipath_io = False
hyperv.volume_attach_retry_count = 10
hyperv.volume_attach_retry_interval = 5
hyperv.vswitch_name = None
hyperv.wait_soft_reboot_seconds = 60

mks.enabled = False
mks.mksproxy_base_url = http://127.0.0.1:6090/

image_cache.manager_interval = 2400
image_cache.precache_concurrency = 1
image_cache.remove_unused_base_images = True
image_cache.remove_unused_original_minimum_age_seconds = 86400
image_cache.remove_unused_resized_minimum_age_seconds = 3600
image_cache.subdirectory_name = _base
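Taken together, the [image_cache] values mean: a periodic task runs every 2400 s and deletes a cached base image from the _base subdirectory once it has gone unused for 86400 s (3600 s for resized variants). A toy restatement of that rule; the function and variable names are mine, not Nova's:

    import time

    MANAGER_INTERVAL = 2400          # image_cache.manager_interval
    UNUSED_ORIGINAL_MIN_AGE = 86400  # image_cache.remove_unused_original_minimum_age_seconds
    UNUSED_RESIZED_MIN_AGE = 3600    # image_cache.remove_unused_resized_minimum_age_seconds

    def should_remove(last_used_at, resized, now=None):
        """Hypothetical restatement of the pruning rule implied by the options."""
        now = time.time() if now is None else now
        age = now - last_used_at
        return age >= (UNUSED_RESIZED_MIN_AGE if resized else UNUSED_ORIGINAL_MIN_AGE)

    # A base image untouched for two days is eligible on the next periodic
    # run (the task fires every MANAGER_INTERVAL seconds):
    assert should_remove(last_used_at=time.time() - 2 * 86400, resized=False)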
ironic.api_max_retries = 60
ironic.api_retry_interval = 2
ironic.auth_section = None
ironic.auth_type = None
ironic.cafile = None
ironic.certfile = None
ironic.collect_timing = False
ironic.connect_retries = None
ironic.connect_retry_delay = None
ironic.endpoint_override = None
ironic.insecure = False
ironic.keyfile = None
ironic.max_version = None
ironic.min_version = None
ironic.partition_key = None
ironic.peer_list = []
ironic.region_name = None
ironic.serial_console_state_timeout = 10
ironic.service_name = None
ironic.service_type = baremetal
ironic.split_loggers = False
ironic.status_code_retries = None
ironic.status_code_retry_delay = None
ironic.timeout = None
ironic.valid_interfaces = ['internal', 'public']
ironic.version = None

key_manager.backend = barbican
key_manager.fixed_key = ****
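key_manager.backend = barbican selects the key-manager driver that the castellan library instantiates; the [barbican] group that follows configures it, and the [vault] group further below configures the alternative backend. A minimal sketch with castellan's public entry point (illustrative; assumes castellan is installed and CONF is the fully parsed service configuration):

    from castellan import key_manager
    from oslo_config import cfg

    CONF = cfg.CONF  # in a real service this is already parsed
    # With key_manager.backend = barbican this returns the Barbican-backed
    # implementation; 'vault' would select the Vault backend instead.
    manager = key_manager.API(CONF)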
barbican.auth_endpoint = http://localhost/identity/v3
barbican.barbican_api_version = None
barbican.barbican_endpoint = None
barbican.barbican_endpoint_type = internal
barbican.barbican_region_name = None
barbican.cafile = None
barbican.certfile = None
barbican.collect_timing = False
barbican.insecure = False
barbican.keyfile = None
barbican.number_of_retries = 60
barbican.retry_delay = 1
barbican.send_service_user_token = False
barbican.split_loggers = False
barbican.timeout = None
barbican.verify_ssl = True
barbican.verify_ssl_path = None

barbican_service_user.auth_section = None
barbican_service_user.auth_type = None
barbican_service_user.cafile = None
barbican_service_user.certfile = None
barbican_service_user.collect_timing = False
barbican_service_user.insecure = False
barbican_service_user.keyfile = None
barbican_service_user.split_loggers = False
barbican_service_user.timeout = None

vault.approle_role_id = None
vault.approle_secret_id = None
vault.cafile = None
vault.certfile = None
vault.collect_timing = False
vault.insecure = False
vault.keyfile = None
vault.kv_mountpoint = secret
vault.kv_version = 2
vault.namespace = None
vault.root_token_id = None
vault.split_loggers = False
vault.ssl_ca_crt_file = None
vault.timeout = None
vault.use_ssl = False
vault.vault_url = http://127.0.0.1:8200
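The [vault] values describe that alternative backend: KV secrets engine version 2 (vault.kv_version) mounted at secret (vault.kv_mountpoint) on http://127.0.0.1:8200 without TLS (vault.use_ssl = False). Purely as an illustration of what those options mean, the same parameters exercised directly with the hvac client (hvac does not appear in this log, and the secret path is hypothetical):

    import hvac

    # vault.vault_url, with vault.use_ssl = False (plain HTTP)
    client = hvac.Client(url='http://127.0.0.1:8200')
    client.token = 's.xxxxxxxx'  # a real deployment authenticates, e.g. via AppRole

    # KV v2 (vault.kv_version = 2) mounted at 'secret' (vault.kv_mountpoint)
    client.secrets.kv.v2.read_secret_version(
        path='some/key',       # hypothetical secret path
        mount_point='secret',
    )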
keystone.cafile = None
keystone.certfile = None
keystone.collect_timing = False
keystone.connect_retries = None
keystone.connect_retry_delay = None
keystone.endpoint_override = None
keystone.insecure = False
keystone.keyfile = None
keystone.max_version = None
keystone.min_version = None
keystone.region_name = None
keystone.service_name = None
keystone.service_type = identity
keystone.split_loggers = False
keystone.status_code_retries = None
keystone.status_code_retry_delay = None
keystone.timeout = None
keystone.valid_interfaces = ['internal', 'public']
keystone.version = None
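The [keystone] group repeats the session/adapter shape already seen for [glance] and [ironic]. The interesting semantics sit in valid_interfaces = ['internal', 'public']: resolve the identity endpoint from the service catalog preferring the internal interface and falling back to public. Expressed directly against keystoneauth1's Adapter (a sketch; a real service attaches an auth plugin to the session):

    from keystoneauth1 import adapter, session

    sess = session.Session()  # anonymous here; production code passes auth=...
    identity = adapter.Adapter(
        session=sess,
        service_type='identity',           # keystone.service_type
        interface=['internal', 'public'],  # keystone.valid_interfaces, in preference order
        region_name=None,                  # keystone.region_name (unset: any region)
    )
    # identity.get_endpoint() would resolve the URL from the catalog,
    # trying 'internal' first and falling back to 'public'.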
libvirt.connection_uri =
libvirt.cpu_mode = host-model
libvirt.cpu_model_extra_flags = []
libvirt.cpu_models = []
libvirt.cpu_power_governor_high = performance
libvirt.cpu_power_governor_low = powersave
libvirt.cpu_power_management = False
libvirt.cpu_power_management_strategy = cpu_state
libvirt.device_detach_attempts = 8
libvirt.device_detach_timeout = 20
libvirt.disk_cachemodes = []
libvirt.disk_prefix = None
libvirt.enabled_perf_events = []
libvirt.file_backed_memory = 0
libvirt.gid_maps = []
libvirt.hw_disk_discard = None
libvirt.hw_machine_type = ['x86_64=q35']
libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf
libvirt.images_rbd_glance_copy_poll_interval = 15
libvirt.images_rbd_glance_copy_timeout = 600
libvirt.images_rbd_glance_store_name = default_backend
libvirt.images_rbd_pool = vms
libvirt.images_type = rbd
libvirt.images_volume_group = None
libvirt.inject_key = False
libvirt.inject_partition = -2
libvirt.inject_password = False
libvirt.iscsi_iface = None
libvirt.iser_use_multipath = False
libvirt.live_migration_bandwidth = 0
libvirt.live_migration_completion_timeout = 800
libvirt.live_migration_downtime = 500
libvirt.live_migration_downtime_delay = 75
libvirt.live_migration_downtime_steps = 10
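libvirt.images_type = rbd with images_rbd_pool = vms and images_rbd_ceph_conf = /etc/ceph/ceph.conf means instance disks are RBD volumes in the Ceph pool vms rather than local files (the rbd_user and rbd_secret_uuid entries further below carry the credentials). The equivalent connection written out with the python-rados/python-rbd bindings, purely for illustration:

    import rados
    import rbd

    # libvirt.images_rbd_ceph_conf and libvirt.rbd_user (listed further below)
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')   # libvirt.images_rbd_pool
        try:
            print(rbd.RBD().list(ioctx))    # instance disks living in the pool
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()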
libvirt.live_migration_inbound_addr = None
libvirt.live_migration_permit_auto_converge = True
libvirt.live_migration_permit_post_copy = True
libvirt.live_migration_scheme = None
libvirt.live_migration_timeout_action = force_complete
libvirt.live_migration_tunnelled = False

Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.842 229255 WARNING oslo_config.cfg [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (live_migration_uri is deprecated for removal in favor of two other options that allow to change live migration scheme and target URI: ``live_migration_scheme`` and ``live_migration_inbound_addr`` respectively.). Its value may be silently ignored in the future.

libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey
libvirt.live_migration_with_native_tls = False
libvirt.max_queues = None
libvirt.mem_stats_period_seconds = 10
libvirt.nfs_mount_options = None
libvirt.nfs_mount_point_base = /var/lib/nova/mnt
libvirt.num_aoe_discover_tries = 3
libvirt.num_iser_scan_tries = 5
libvirt.num_memory_encrypted_guests = None
libvirt.num_nvme_discover_tries = 5
libvirt.num_pcie_ports = 24
libvirt.num_volume_scan_tries = 5
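The WARNING above is oslo.config's standard deprecation machinery: an option declared with deprecated_for_removal=True produces exactly this kind of message when a value is set for it. Note also the %s in the live_migration_uri value, which is substituted with the destination host at migration time. A sketch (the reason text is abbreviated, not Nova's verbatim declaration):

    from oslo_config import cfg

    opt = cfg.StrOpt(
        'live_migration_uri',
        deprecated_for_removal=True,
        deprecated_reason='Use ``live_migration_scheme`` and '
                          '``live_migration_inbound_addr`` instead.',
    )

    # At migration time the %s placeholder is filled with the target host:
    uri_template = 'qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey'
    print(uri_template % 'dest-compute-0')  # hypothetical destination hostname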
libvirt.pmem_namespaces = []
libvirt.quobyte_client_cfg = None
libvirt.quobyte_mount_point_base = /var/lib/nova/mnt
libvirt.rbd_connect_timeout = 5
libvirt.rbd_destroy_volume_retries = 12
libvirt.rbd_destroy_volume_retry_interval = 5
libvirt.rbd_secret_uuid = 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
libvirt.rbd_user = openstack
libvirt.realtime_scheduler_priority = 1
libvirt.remote_filesystem_transport = ssh
libvirt.rescue_image_id = None
libvirt.rescue_kernel_id = None
libvirt.rescue_ramdisk_id = None
libvirt.rng_dev_path = /dev/urandom
libvirt.rx_queue_size = 512
libvirt.smbfs_mount_options =
libvirt.smbfs_mount_point_base = /var/lib/nova/mnt
libvirt.snapshot_compression = False
libvirt.snapshot_image_format = None
libvirt.snapshots_directory = /var/lib/nova/instances/snapshots
libvirt.sparse_logical_volumes = False
libvirt.swtpm_enabled = True
libvirt.swtpm_group = tss
libvirt.swtpm_user = tss
libvirt.sysinfo_serial = unique
libvirt.tx_queue_size = 512
libvirt.uid_maps = []
libvirt.use_virtio_for_bridges = True
libvirt.virt_type = kvm
libvirt.volume_clear = zero
libvirt.volume_clear_size = 0
libvirt.volume_use_multipath = True
libvirt.vzstorage_cache_path = None
libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz
libvirt.vzstorage_mount_group = qemu
libvirt.vzstorage_mount_opts = []
libvirt.vzstorage_mount_perms = 0770
libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.849 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.849 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.849 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.849 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.849 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.849 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.850 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.850 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.850 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.850 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.850 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.850 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.850 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] 
neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.851 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.851 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.851 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.851 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.851 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.851 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.851 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.851 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.852 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.852 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.852 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.852 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.852 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - 
- - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.852 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.852 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.852 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.853 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.853 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.853 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.853 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.853 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.853 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.853 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.854 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.854 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 
2025-12-05 09:37:04.854 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.854 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.854 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.854 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.854 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.855 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.855 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.855 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.855 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.855 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.855 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.855 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.855 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.endpoint_override = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.856 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.856 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.856 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.856 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.856 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.856 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.856 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.856 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.857 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.857 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.857 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.857 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.857 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] 
placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.857 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.857 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.857 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.858 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.858 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.858 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.858 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.858 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.858 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.858 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.859 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.859 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.859 229255 DEBUG oslo_service.service [None 
req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.859 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.859 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.859 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.859 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.859 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.860 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.860 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.860 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.860 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.860 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.860 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.860 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.861 229255 DEBUG 
oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.861 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.861 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.861 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.861 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.861 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.861 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.862 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.862 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.862 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.862 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.862 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.862 229255 DEBUG oslo_service.service [None 
req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.862 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.862 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.863 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.863 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.863 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.863 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.863 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.863 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.863 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.864 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.864 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.864 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.864 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.864 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.864 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.864 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.864 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.865 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.865 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.865 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.865 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.865 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.865 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.865 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.866 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.866 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.866 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.866 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.866 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.866 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.866 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.867 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.867 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.867 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.867 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.867 229255 DEBUG 
oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.867 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.867 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.867 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.868 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.868 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.868 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.868 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.868 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.868 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.868 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.869 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.869 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 
localhost nova_compute[229251]: 2025-12-05 09:37:04.869 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.869 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.869 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.869 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.869 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.870 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.870 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.870 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.870 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.870 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.870 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.870 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.870 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
[option values logged by log_opt_values at /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609, 2025-12-05 09:37:04.871-09:37:04.899, PID 229255, DEBUG oslo_service.service, req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb:]
vendordata_dynamic_auth.auth_type = None
vendordata_dynamic_auth.cafile = None
vendordata_dynamic_auth.certfile = None
vendordata_dynamic_auth.collect_timing = False
vendordata_dynamic_auth.insecure = False
vendordata_dynamic_auth.keyfile = None
vendordata_dynamic_auth.split_loggers = False
vendordata_dynamic_auth.timeout = None
vmware.api_retry_count = 10
vmware.ca_file = None
vmware.cache_prefix = None
vmware.cluster_name = None
vmware.connection_pool_size = 10
vmware.console_delay_seconds = None
vmware.datastore_regex = None
vmware.host_ip = None
vmware.host_password = ****
vmware.host_port = 443
vmware.host_username = None
vmware.insecure = False
vmware.integration_bridge = None
vmware.maximum_objects = 100
vmware.pbm_default_policy = None
vmware.pbm_enabled = False
vmware.pbm_wsdl_location = None
vmware.serial_log_dir = /opt/vmware/vspc
vmware.serial_port_proxy_uri = None
vmware.serial_port_service_uri = None
vmware.task_poll_interval = 0.5
vmware.use_linked_clone = True
vmware.vnc_keymap = en-us
vmware.vnc_port = 5900
vmware.vnc_port_total = 10000
vnc.auth_schemes = ['none']
vnc.enabled = True
vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
vnc.novncproxy_host = 0.0.0.0
vnc.novncproxy_port = 6080
vnc.server_listen = ::0
vnc.server_proxyclient_address = 192.168.122.106
vnc.vencrypt_ca_certs = None
vnc.vencrypt_client_cert = None
vnc.vencrypt_client_key = None
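Every entry in this dump is an ordinary oslo.config option. As a minimal sketch (option names taken from the [vnc] values above; the defaults and the nova.conf path are hypothetical), a service registers and reads them like this:

```python
# Minimal sketch, assuming only the option names visible in the dump above;
# defaults here are placeholders, not nova's actual defaults.
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts(
    [
        cfg.BoolOpt('enabled', default=True),
        cfg.StrOpt('novncproxy_base_url'),
        cfg.StrOpt('server_listen', default='127.0.0.1'),
        cfg.StrOpt('server_proxyclient_address'),
    ],
    group='vnc',
)
CONF(['--config-file', 'nova.conf'])  # hypothetical config file path
print(CONF.vnc.novncproxy_base_url)   # -> value shown in the dump
```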
workarounds.disable_compute_service_check_for_ffu = True
workarounds.disable_deep_image_inspection = False
workarounds.disable_fallback_pcpu_query = False
workarounds.disable_group_policy_check_upcall = False
workarounds.disable_libvirt_livesnapshot = False
workarounds.disable_rootwrap = False
workarounds.enable_numa_live_migration = False
workarounds.enable_qemu_monitor_announce_self = True
workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False
workarounds.handle_virt_lifecycle_events = True
workarounds.libvirt_disable_apic = False
workarounds.never_download_image_if_on_rbd = False
workarounds.qemu_monitor_announce_self_count = 3
workarounds.qemu_monitor_announce_self_interval = 1
workarounds.reserve_disk_resource_for_image_cache = True
workarounds.skip_cpu_compare_at_startup = False
workarounds.skip_cpu_compare_on_dest = True
workarounds.skip_hypervisor_version_check_on_lm = False
workarounds.skip_reserve_in_use_ironic_nodes = False
workarounds.unified_limits_count_pcpu_as_vcpu = False
workarounds.wait_for_vif_plugged_event_during_hard_reboot = []
wsgi.api_paste_config = api-paste.ini
wsgi.client_socket_timeout = 900
wsgi.default_pool_size = 1000
wsgi.keep_alive = True
wsgi.max_header_line = 16384
wsgi.secure_proxy_ssl_header = None
wsgi.ssl_ca_file = None
wsgi.ssl_cert_file = None
wsgi.ssl_key_file = None
wsgi.tcp_keepidle = 600
wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f
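wsgi.wsgi_log_format is a plain %-style format string; applied to invented sample values it renders one access-log line:

```python
# The wsgi.wsgi_log_format string above, filled with invented sample values.
fmt = ('%(client_ip)s "%(request_line)s" status: %(status_code)s '
       'len: %(body_length)s time: %(wall_seconds).7f')
print(fmt % {
    'client_ip': '192.168.122.1',                    # hypothetical client
    'request_line': 'GET /v2.1/servers HTTP/1.1',    # hypothetical request
    'status_code': 200,
    'body_length': 1234,
    'wall_seconds': 0.0421337,
})
# -> 192.168.122.1 "GET /v2.1/servers HTTP/1.1" status: 200 len: 1234 time: 0.0421337
```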
zvm.ca_file = None
zvm.cloud_connector_url = None
zvm.image_tmp_path = /var/lib/nova/images
zvm.reachable_timeout = 300
oslo_policy.enforce_new_defaults = True
oslo_policy.enforce_scope = True
oslo_policy.policy_default_rule = default
oslo_policy.policy_dirs = ['policy.d']
oslo_policy.policy_file = policy.yaml
oslo_policy.remote_content_type = application/x-www-form-urlencoded
oslo_policy.remote_ssl_ca_crt_file = None
oslo_policy.remote_ssl_client_crt_file = None
oslo_policy.remote_ssl_client_key_file = None
oslo_policy.remote_ssl_verify_server_crt = False
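With oslo_policy.enforce_new_defaults and enforce_scope both True, policy checks run against the new secure-RBAC defaults and validate token scope (system vs. project). A hedged sketch of the consuming side; only the policy_file/policy_dirs behavior is taken from the values above:

```python
# Hedged sketch of how a service typically wires the [oslo_policy] values
# above; policy.yaml and policy.d/* come from policy_file / policy_dirs.
from oslo_config import cfg
from oslo_policy import policy

CONF = cfg.CONF
enforcer = policy.Enforcer(CONF)  # picks up the [oslo_policy] group from CONF
enforcer.load_rules()             # reads policy.yaml plus policy.d/* overrides
```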
oslo_versionedobjects.fatal_exception_format_errors = False
oslo_middleware.http_basic_auth_user_file = /etc/htpasswd
remote_debug.host = None
remote_debug.port = None
oslo_messaging_rabbit.amqp_auto_delete = False
oslo_messaging_rabbit.amqp_durable_queues = True
oslo_messaging_rabbit.conn_pool_min_size = 2
oslo_messaging_rabbit.conn_pool_ttl = 1200
oslo_messaging_rabbit.direct_mandatory_flag = True
oslo_messaging_rabbit.enable_cancel_on_failover = False
oslo_messaging_rabbit.heartbeat_in_pthread = False
oslo_messaging_rabbit.heartbeat_rate = 2
oslo_messaging_rabbit.heartbeat_timeout_threshold = 60
oslo_messaging_rabbit.kombu_compression = None
oslo_messaging_rabbit.kombu_failover_strategy = round-robin
oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60
oslo_messaging_rabbit.kombu_reconnect_delay = 1.0
oslo_messaging_rabbit.rabbit_ha_queues = False
oslo_messaging_rabbit.rabbit_interval_max = 30
oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN
oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0
oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0
oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0
oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0
oslo_messaging_rabbit.rabbit_quorum_queue = True
oslo_messaging_rabbit.rabbit_retry_backoff = 2
oslo_messaging_rabbit.rabbit_retry_interval = 1
oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800
oslo_messaging_rabbit.rpc_conn_pool_size = 30
oslo_messaging_rabbit.ssl = False
oslo_messaging_rabbit.ssl_ca_file =
oslo_messaging_rabbit.ssl_cert_file =
oslo_messaging_rabbit.ssl_enforce_fips_mode = False
oslo_messaging_rabbit.ssl_key_file =
oslo_messaging_rabbit.ssl_version =
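The [oslo_messaging_rabbit] values above (durable quorum queues, a 60 s heartbeat threshold probed at rate 2, i.e. roughly every 30 s, SSL off) are honored when the RPC transport is built. A minimal, hedged sketch; the nova.conf path is hypothetical:

```python
# Hedged sketch: building the RPC transport that consumes the
# [oslo_messaging_rabbit] options above (quorum/durable queues, heartbeats).
import oslo_messaging as messaging
from oslo_config import cfg

CONF = cfg.CONF
CONF(['--config-file', 'nova.conf'])           # hypothetical path
transport = messaging.get_rpc_transport(CONF)  # rabbit:// driver per transport_url
```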
oslo_messaging_notifications.driver = ['noop']
oslo_messaging_notifications.retry = -1
oslo_messaging_notifications.topics = ['notifications']
oslo_messaging_notifications.transport_url = ****
oslo_limit.auth_section = None
oslo_limit.auth_type = password
oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000
oslo_limit.cafile = None
oslo_limit.certfile = None
oslo_limit.collect_timing = False
oslo_limit.connect_retries = None
oslo_limit.connect_retry_delay = None
oslo_limit.default_domain_id = None
oslo_limit.default_domain_name = None
oslo_limit.domain_id = None
oslo_limit.domain_name = None
oslo_limit.endpoint_id = None
oslo_limit.endpoint_override = None
oslo_limit.insecure = False
oslo_limit.keyfile = None
oslo_limit.max_version = None
oslo_limit.min_version = None
oslo_limit.password = ****
oslo_limit.project_domain_id = None
oslo_limit.project_domain_name = None
oslo_limit.project_id = None
oslo_limit.project_name = None
oslo_limit.region_name = None
oslo_limit.service_name = None
oslo_limit.service_type = None
oslo_limit.split_loggers = False
oslo_limit.status_code_retries = None
oslo_limit.status_code_retry_delay = None
oslo_limit.system_scope = all
oslo_limit.timeout = None
oslo_limit.trust_id = None
oslo_limit.user_domain_id = None
oslo_limit.user_domain_name = Default
oslo_limit.user_id = None
oslo_limit.username = nova
oslo_limit.valid_interfaces = None
oslo_limit.version = None
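The [oslo_limit] section carries keystone credentials (the auth_url above, username nova, system scope all) so the service can read unified limits. A rough, hedged sketch of the oslo.limit API; the usage callback and the resource name are hypothetical:

```python
# Rough sketch of oslo.limit usage; only the keystone credentials come from
# the dump above, the callback and resource name are invented for illustration.
from oslo_limit import limit

def count_usage(project_id, resource_names):
    # hypothetical counter: report current consumption per resource name
    return {name: 0 for name in resource_names}

enforcer = limit.Enforcer(count_usage)
enforcer.enforce('a-project-id', {'servers': 1})  # raises if the delta busts the limit
```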
oslo_reports.file_event_handler = /var/lib/nova
oslo_reports.file_event_handler_interval = 1
oslo_reports.log_dir = None
vif_plug_linux_bridge_privileged.capabilities = [12]
vif_plug_linux_bridge_privileged.group = None
vif_plug_linux_bridge_privileged.helper_command = None
vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon
vif_plug_linux_bridge_privileged.thread_pool_size = 8
vif_plug_linux_bridge_privileged.user = None
vif_plug_ovs_privileged.capabilities = [12, 1]
vif_plug_ovs_privileged.group = None
vif_plug_ovs_privileged.helper_command = None
vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon
vif_plug_ovs_privileged.thread_pool_size = 8
vif_plug_ovs_privileged.user = None
os_vif_linux_bridge.flat_interface = None
os_vif_linux_bridge.forward_bridge_interface = ['all']
os_vif_linux_bridge.iptables_bottom_regex =
os_vif_linux_bridge.iptables_drop_action = DROP
os_vif_linux_bridge.iptables_top_regex =
os_vif_linux_bridge.network_device_mtu = 1500
os_vif_linux_bridge.use_ipv6 = False
os_vif_linux_bridge.vlan_interface = None
os_vif_ovs.isolate_vif = False
os_vif_ovs.network_device_mtu = 1500
os_vif_ovs.ovs_vsctl_timeout = 120
os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640
os_vif_ovs.ovsdb_interface = native
os_vif_ovs.per_port_bridge = False
os_brick.lock_path = None
os_brick.wait_mpath_device_attempts = 4
os_brick.wait_mpath_device_interval = 1
privsep_osbrick.capabilities = [21]
privsep_osbrick.group = None
privsep_osbrick.helper_command = None
privsep_osbrick.logger_name = os_brick.privileged
privsep_osbrick.thread_pool_size = 8
privsep_osbrick.user = None
nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21]
nova_sys_admin.group = None
nova_sys_admin.helper_command = None
nova_sys_admin.logger_name = oslo_privsep.daemon
nova_sys_admin.thread_pool_size = 8
nova_sys_admin.user = None
Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.899 229255 DEBUG oslo_service.service [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
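The privsep capabilities lists above are numeric Linux capability constants. A small decoder for exactly the values seen in this dump; the numeric-to-name mapping is standard Linux, not taken from the log:

```python
# Decode the privsep context capability lists logged above into the
# corresponding Linux capability names (standard linux/capability.h numbers).
CAPS = {0: 'CAP_CHOWN', 1: 'CAP_DAC_OVERRIDE', 2: 'CAP_DAC_READ_SEARCH',
        3: 'CAP_FOWNER', 12: 'CAP_NET_ADMIN', 21: 'CAP_SYS_ADMIN'}

contexts = {
    'vif_plug_linux_bridge_privileged': [12],
    'vif_plug_ovs_privileged': [12, 1],
    'privsep_osbrick': [21],
    'nova_sys_admin': [0, 1, 2, 3, 12, 21],
}
for ctx, caps in contexts.items():
    print(ctx, '->', [CAPS[c] for c in caps])
```

So the VIF-plugging helpers run with network administration rights only, os-brick's helper with CAP_SYS_ADMIN, and the nova_sys_admin context with the broadest set.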
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.901 229255 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.913 229255 INFO nova.virt.node [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Determined node identity 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from /var/lib/nova/compute_id#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.913 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.914 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.914 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.914 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.923 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.925 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.926 229255 INFO nova.virt.libvirt.driver [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Connection event '1' reason 'None'#033[00m Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.935 229255 INFO nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Libvirt host capabilities Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: 2b745fc2-5ba6-425d-9fc8-d9117ea29cbc Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: x86_64 Dec 5 04:37:04 localhost nova_compute[229251]: EPYC-Rome-v4 Dec 5 04:37:04 localhost nova_compute[229251]: AMD Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost 
Dec 5 04:37:04 localhost nova_compute[229251]: [libvirt host capabilities XML, continued; element markup was stripped in the log capture, leaving only text values. Recoverable host data: migration URI transports tcp and rdma; NUMA cell memory 16116612 KiB, with 4029153 pages at the base page size and 0 pages for each of the two huge-page sizes.]
Dec 5 04:37:04 localhost nova_compute[229251]: [Recoverable secmodel data: model selinux, doi 0, baselabels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0; model dac, doi 0, baselabels +107:+107 and +107:+107.]
Dec 5 04:37:04 localhost nova_compute[229251]: [Recoverable guest data: os_type hvm for arch i686 (wordsize 32) and x86_64 (wordsize 64), emulator /usr/libexec/qemu-kvm; machine types, in log order: pc-i440fx-rhel7.6.0 (followed by its short name pc), pc-q35-rhel9.8.0 (followed by its short name q35), pc-q35-rhel9.6.0, pc-q35-rhel8.6.0, pc-q35-rhel9.4.0, pc-q35-rhel8.5.0, pc-q35-rhel8.3.0, pc-q35-rhel7.6.0, pc-q35-rhel8.4.0, pc-q35-rhel9.2.0, pc-q35-rhel8.2.0, pc-q35-rhel9.0.0, pc-q35-rhel8.0.0, pc-q35-rhel8.1.0.]
Dec 5 04:37:04 localhost nova_compute[229251]: [Machine-type list for the x86_64 guest continues: pc-q35-rhel8.4.0, pc-q35-rhel9.2.0, pc-q35-rhel8.2.0, pc-q35-rhel9.0.0, pc-q35-rhel8.0.0, pc-q35-rhel8.1.0; the same set as the i686 guest above.]
Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.938 229255 DEBUG nova.virt.libvirt.volume.mount [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.943 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 5 04:37:04 localhost nova_compute[229251]: 2025-12-05 09:37:04.946 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 5 04:37:04 localhost nova_compute[229251]: [libvirt domain capabilities XML; markup stripped as above. Recoverable values: emulator path /usr/libexec/qemu-kvm, domain type kvm, machine pc-i440fx-rhel7.6.0, arch i686; loader value /usr/share/OVMF/OVMF_CODE.secboot.fd with type values rom and pflash, readonly values yes and no, secure value no; a CPU-mode enum with values on and off.]
Dec 5 04:37:04 localhost nova_compute[229251]: [Domain capabilities XML, continued: a second enum with values on and off; host CPU model EPYC-Rome, vendor AMD.]
Dec 5 04:37:04 localhost nova_compute[229251]: [Custom CPU models reported in the dump, in log order: 486, 486-v1; Broadwell with -IBRS, -noTSX, -noTSX-IBRS and -v1 through -v4; Cascadelake-Server with -noTSX and -v1 through -v5; Conroe, Conroe-v1; Cooperlake with -v1 and -v2; Denverton with -v1 through -v3; Dhyana with -v1 and -v2; EPYC; EPYC-Genoa with -v1; EPYC-IBPB; EPYC-Milan with -v1 and -v2; EPYC-Rome with -v1 through -v4; EPYC-v1 through EPYC-v4; GraniteRapids with -v1 and -v2; Haswell with -IBRS, -noTSX, -noTSX-IBRS and -v1 through -v4; Icelake-Server with -noTSX and -v1 through -v7; IvyBridge with -IBRS, -v1 and -v2; KnightsMill with -v1; Nehalem with -IBRS, -v1 and -v2; Opteron_G1 through Opteron_G5, each with -v1; Penryn, Penryn-v1; SandyBridge with -IBRS, -v1 and -v2; SapphireRapids with -v1 through -v3; SierraForest with -v1; Skylake-Client with -IBRS, -noTSX-IBRS and -v1 through -v4; Skylake-Server with -IBRS, -noTSX-IBRS, -v1, -v2, -v3, ...]
nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Skylake-Server-v4 Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Skylake-Server-v5 Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Snowridge Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Snowridge-v1 Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Snowridge-v2 Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Snowridge-v3 Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 
localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Snowridge-v4 Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Westmere Dec 5 04:37:04 localhost nova_compute[229251]: Westmere-IBRS Dec 5 04:37:04 localhost nova_compute[229251]: Westmere-v1 Dec 5 04:37:04 localhost nova_compute[229251]: Westmere-v2 Dec 5 04:37:04 localhost nova_compute[229251]: athlon Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: athlon-v1 Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: core2duo Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: core2duo-v1 Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: coreduo Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: coreduo-v1 Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: kvm32 Dec 5 04:37:04 localhost nova_compute[229251]: kvm32-v1 Dec 5 04:37:04 localhost nova_compute[229251]: kvm64 Dec 5 04:37:04 localhost nova_compute[229251]: kvm64-v1 Dec 5 04:37:04 localhost nova_compute[229251]: n270 Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: n270-v1 Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: Dec 5 04:37:04 localhost nova_compute[229251]: pentium Dec 5 04:37:04 localhost nova_compute[229251]: pentium-v1 Dec 5 04:37:04 localhost nova_compute[229251]: pentium2 Dec 5 04:37:04 localhost nova_compute[229251]: pentium2-v1 Dec 5 04:37:04 localhost nova_compute[229251]: pentium3 Dec 5 04:37:04 localhost nova_compute[229251]: pentium3-v1 Dec 5 04:37:04 localhost nova_compute[229251]: phenom Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: phenom-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: qemu32 Dec 5 04:37:05 localhost 
nova_compute[229251]: qemu32-v1 Dec 5 04:37:05 localhost nova_compute[229251]: qemu64 Dec 5 04:37:05 localhost nova_compute[229251]: qemu64-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: file Dec 5 04:37:05 localhost nova_compute[229251]: anonymous Dec 5 04:37:05 localhost nova_compute[229251]: memfd Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: disk Dec 5 04:37:05 localhost nova_compute[229251]: cdrom Dec 5 04:37:05 localhost nova_compute[229251]: floppy Dec 5 04:37:05 localhost nova_compute[229251]: lun Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: ide Dec 5 04:37:05 localhost nova_compute[229251]: fdc Dec 5 04:37:05 localhost nova_compute[229251]: scsi Dec 5 04:37:05 localhost nova_compute[229251]: virtio Dec 5 04:37:05 localhost nova_compute[229251]: usb Dec 5 04:37:05 localhost nova_compute[229251]: sata Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: virtio Dec 5 04:37:05 localhost nova_compute[229251]: virtio-transitional Dec 5 04:37:05 localhost nova_compute[229251]: virtio-non-transitional Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: vnc Dec 5 04:37:05 localhost nova_compute[229251]: egl-headless Dec 5 04:37:05 localhost nova_compute[229251]: dbus Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: subsystem Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: default Dec 5 04:37:05 localhost nova_compute[229251]: mandatory Dec 5 04:37:05 localhost nova_compute[229251]: requisite Dec 5 04:37:05 localhost nova_compute[229251]: optional Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: usb Dec 5 04:37:05 localhost nova_compute[229251]: pci Dec 5 04:37:05 localhost nova_compute[229251]: scsi Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: virtio Dec 5 04:37:05 localhost nova_compute[229251]: virtio-transitional Dec 5 04:37:05 localhost nova_compute[229251]: virtio-non-transitional Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: random Dec 5 04:37:05 localhost nova_compute[229251]: egd Dec 5 04:37:05 localhost 
nova_compute[229251]: builtin Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: path Dec 5 04:37:05 localhost nova_compute[229251]: handle Dec 5 04:37:05 localhost nova_compute[229251]: virtiofs Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: tpm-tis Dec 5 04:37:05 localhost nova_compute[229251]: tpm-crb Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: emulator Dec 5 04:37:05 localhost nova_compute[229251]: external Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: 2.0 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: usb Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: pty Dec 5 04:37:05 localhost nova_compute[229251]: unix Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: qemu Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: builtin Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: default Dec 5 04:37:05 localhost nova_compute[229251]: passt Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: isa Dec 5 04:37:05 localhost nova_compute[229251]: hyperv Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: null Dec 5 04:37:05 localhost nova_compute[229251]: vc Dec 5 04:37:05 localhost nova_compute[229251]: pty Dec 5 04:37:05 localhost nova_compute[229251]: dev Dec 5 04:37:05 localhost nova_compute[229251]: file Dec 5 04:37:05 localhost nova_compute[229251]: pipe Dec 5 04:37:05 localhost nova_compute[229251]: stdio Dec 5 04:37:05 localhost nova_compute[229251]: udp Dec 5 04:37:05 localhost nova_compute[229251]: tcp Dec 5 04:37:05 localhost nova_compute[229251]: unix Dec 5 04:37:05 localhost nova_compute[229251]: qemu-vdagent Dec 5 04:37:05 localhost nova_compute[229251]: dbus Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 
  hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input
  further values (element names unrecoverable): 4095, on, off, off, 'Linux KVM Hv', tdx
Dec 5 04:37:05 localhost nova_compute[229251]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:04.952 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
[libvirt domainCapabilities XML for arch=i686, machine_type=q35; element markup lost in the log capture. Summary of recoverable values:]
  emulator: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-q35-rhel9.8.0; arch: i686
  loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: no
  CPU mode migratable options: on, off (listed for two modes); host-model CPU: EPYC-Rome, vendor AMD
  supported CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2
nova_compute[229251]: Opteron_G2-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G3 Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G3-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G4 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G4-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G5 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G5-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Penryn Dec 5 04:37:05 localhost nova_compute[229251]: Penryn-v1 Dec 5 04:37:05 localhost nova_compute[229251]: SandyBridge Dec 5 04:37:05 localhost nova_compute[229251]: SandyBridge-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: SandyBridge-v1 Dec 5 04:37:05 localhost nova_compute[229251]: SandyBridge-v2 Dec 5 04:37:05 localhost nova_compute[229251]: SapphireRapids Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 
localhost nova_compute[229251]: SapphireRapids-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: SapphireRapids-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 
04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: SapphireRapids-v3 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: SierraForest Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 
5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: SierraForest-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client-noTSX-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 
localhost nova_compute[229251]: Skylake-Client-v3 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client-v4 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-noTSX-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 
04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-v3 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-v4 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-v5 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Snowridge Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Snowridge-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Snowridge-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost 
nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Snowridge-v3 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Snowridge-v4 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Westmere Dec 5 04:37:05 localhost nova_compute[229251]: Westmere-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Westmere-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Westmere-v2 Dec 5 04:37:05 localhost nova_compute[229251]: athlon Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: athlon-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: core2duo Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: core2duo-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: coreduo Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: coreduo-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: kvm32 Dec 5 04:37:05 localhost nova_compute[229251]: kvm32-v1 Dec 5 04:37:05 localhost nova_compute[229251]: kvm64 Dec 5 04:37:05 localhost nova_compute[229251]: kvm64-v1 Dec 5 04:37:05 localhost nova_compute[229251]: n270 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: n270-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: pentium Dec 5 04:37:05 localhost nova_compute[229251]: pentium-v1 Dec 5 04:37:05 localhost nova_compute[229251]: pentium2 Dec 5 04:37:05 localhost nova_compute[229251]: pentium2-v1 Dec 5 04:37:05 localhost nova_compute[229251]: pentium3 Dec 5 04:37:05 
localhost nova_compute[229251]: pentium3-v1 Dec 5 04:37:05 localhost nova_compute[229251]: phenom Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: phenom-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: qemu32 Dec 5 04:37:05 localhost nova_compute[229251]: qemu32-v1 Dec 5 04:37:05 localhost nova_compute[229251]: qemu64 Dec 5 04:37:05 localhost nova_compute[229251]: qemu64-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: file Dec 5 04:37:05 localhost nova_compute[229251]: anonymous Dec 5 04:37:05 localhost nova_compute[229251]: memfd Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: disk Dec 5 04:37:05 localhost nova_compute[229251]: cdrom Dec 5 04:37:05 localhost nova_compute[229251]: floppy Dec 5 04:37:05 localhost nova_compute[229251]: lun Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: fdc Dec 5 04:37:05 localhost nova_compute[229251]: scsi Dec 5 04:37:05 localhost nova_compute[229251]: virtio Dec 5 04:37:05 localhost nova_compute[229251]: usb Dec 5 04:37:05 localhost nova_compute[229251]: sata Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: virtio Dec 5 04:37:05 localhost nova_compute[229251]: virtio-transitional Dec 5 04:37:05 localhost nova_compute[229251]: virtio-non-transitional Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: vnc Dec 5 04:37:05 localhost nova_compute[229251]: egl-headless Dec 5 04:37:05 localhost nova_compute[229251]: dbus Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: subsystem Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: default Dec 5 04:37:05 localhost nova_compute[229251]: mandatory Dec 5 04:37:05 localhost nova_compute[229251]: requisite Dec 5 04:37:05 localhost nova_compute[229251]: optional Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: usb Dec 5 04:37:05 localhost nova_compute[229251]: pci Dec 5 04:37:05 localhost nova_compute[229251]: scsi Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 
localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: virtio Dec 5 04:37:05 localhost nova_compute[229251]: virtio-transitional Dec 5 04:37:05 localhost nova_compute[229251]: virtio-non-transitional Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: random Dec 5 04:37:05 localhost nova_compute[229251]: egd Dec 5 04:37:05 localhost nova_compute[229251]: builtin Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: path Dec 5 04:37:05 localhost nova_compute[229251]: handle Dec 5 04:37:05 localhost nova_compute[229251]: virtiofs Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: tpm-tis Dec 5 04:37:05 localhost nova_compute[229251]: tpm-crb Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: emulator Dec 5 04:37:05 localhost nova_compute[229251]: external Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: 2.0 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: usb Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: pty Dec 5 04:37:05 localhost nova_compute[229251]: unix Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: qemu Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: builtin Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: default Dec 5 04:37:05 localhost nova_compute[229251]: passt Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: isa Dec 5 04:37:05 localhost nova_compute[229251]: hyperv Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: null Dec 5 04:37:05 localhost nova_compute[229251]: vc Dec 5 04:37:05 localhost nova_compute[229251]: pty Dec 5 04:37:05 localhost nova_compute[229251]: dev Dec 5 04:37:05 localhost 
nova_compute[229251]: file Dec 5 04:37:05 localhost nova_compute[229251]: pipe Dec 5 04:37:05 localhost nova_compute[229251]: stdio Dec 5 04:37:05 localhost nova_compute[229251]: udp Dec 5 04:37:05 localhost nova_compute[229251]: tcp Dec 5 04:37:05 localhost nova_compute[229251]: unix Dec 5 04:37:05 localhost nova_compute[229251]: qemu-vdagent Dec 5 04:37:05 localhost nova_compute[229251]: dbus Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: relaxed Dec 5 04:37:05 localhost nova_compute[229251]: vapic Dec 5 04:37:05 localhost nova_compute[229251]: spinlocks Dec 5 04:37:05 localhost nova_compute[229251]: vpindex Dec 5 04:37:05 localhost nova_compute[229251]: runtime Dec 5 04:37:05 localhost nova_compute[229251]: synic Dec 5 04:37:05 localhost nova_compute[229251]: stimer Dec 5 04:37:05 localhost nova_compute[229251]: reset Dec 5 04:37:05 localhost nova_compute[229251]: vendor_id Dec 5 04:37:05 localhost nova_compute[229251]: frequencies Dec 5 04:37:05 localhost nova_compute[229251]: reenlightenment Dec 5 04:37:05 localhost nova_compute[229251]: tlbflush Dec 5 04:37:05 localhost nova_compute[229251]: ipi Dec 5 04:37:05 localhost nova_compute[229251]: avic Dec 5 04:37:05 localhost nova_compute[229251]: emsr_bitmap Dec 5 04:37:05 localhost nova_compute[229251]: xmm_input Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: 4095 Dec 5 04:37:05 localhost nova_compute[229251]: on Dec 5 04:37:05 localhost nova_compute[229251]: off Dec 5 04:37:05 localhost nova_compute[229251]: off Dec 5 04:37:05 localhost nova_compute[229251]: Linux KVM Hv Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: tdx Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:04.982 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:04.985 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 
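[Editor's note: the two DEBUG lines above show nova.virt.libvirt.host probing libvirt once per (arch, machine type) pair. A minimal sketch of the same probe against the libvirt Python bindings, for orientation only: the connection URI and the print are assumptions, getDomainCapabilities() is the real libvirt call, and the emulator path and machine types are taken from the log.]

    import libvirt

    # Sketch: fetch one <domainCapabilities> XML document per machine type,
    # roughly what _get_domain_capabilities() in nova's host.py does.
    conn = libvirt.open('qemu:///system')   # assumed local system connection
    for machine in ('q35', 'pc'):           # machine types named in the log
        domcaps = conn.getDomainCapabilities(
            '/usr/libexec/qemu-kvm',        # emulator binary (from the log)
            'x86_64',                       # guest architecture
            machine,                        # machine type being probed
            'kvm',                          # virtualization type
            0)                              # flags, currently unused
        print(machine, len(domcaps), 'bytes of capabilities XML')
    conn.close()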
Dec 5 04:37:05 localhost nova_compute[229251]: [q35 <domainCapabilities> dump; XML markup again stripped in this capture. Recoverable values, in capture order:]
Dec 5 04:37:05 localhost nova_compute[229251]: emulator path: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-q35-rhel9.8.0; arch: x86_64
Dec 5 04:37:05 localhost nova_compute[229251]: os firmware: efi; firmware images: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd /usr/share/edk2/ovmf/OVMF_CODE.fd /usr/share/edk2/ovmf/OVMF.amdsev.fd /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types: rom pflash; readonly: yes no; secure: yes no; further on/off enum pairs (parent element names lost in capture)
Dec 5 04:37:05 localhost nova_compute[229251]: host-model CPU: EPYC-Rome, vendor AMD
Dec 5 04:37:05 localhost nova_compute[229251]: CPU models: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa EPYC-Genoa-v1 EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4 GraniteRapids GraniteRapids-v1 (list truncated in this capture)
nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: GraniteRapids-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost 
nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Haswell Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Haswell-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Haswell-noTSX Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Haswell-noTSX-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Haswell-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Haswell-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Haswell-v3 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Haswell-v4 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Icelake-Server Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost 
nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Icelake-Server-noTSX Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Icelake-Server-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Icelake-Server-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Icelake-Server-v3 Dec 5 04:37:05 localhost 
nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Icelake-Server-v4 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Icelake-Server-v5 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Icelake-Server-v6 Dec 5 04:37:05 localhost nova_compute[229251]: 
Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Icelake-Server-v7 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: IvyBridge Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: IvyBridge-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: IvyBridge-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: IvyBridge-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: KnightsMill Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost 
nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: KnightsMill-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Nehalem Dec 5 04:37:05 localhost nova_compute[229251]: Nehalem-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Nehalem-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Nehalem-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G1 Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G1-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G2 Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G2-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G3 Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G3-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G4 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G4-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G5 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Opteron_G5-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Penryn Dec 5 04:37:05 localhost nova_compute[229251]: Penryn-v1 Dec 5 04:37:05 localhost nova_compute[229251]: SandyBridge Dec 5 04:37:05 localhost nova_compute[229251]: SandyBridge-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: SandyBridge-v1 Dec 5 04:37:05 localhost nova_compute[229251]: SandyBridge-v2 Dec 5 04:37:05 localhost nova_compute[229251]: SapphireRapids Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 
localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: SapphireRapids-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: SapphireRapids-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 
04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: SapphireRapids-v3 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 
localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: SierraForest Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: SierraForest-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost 
nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client-noTSX-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client-v3 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Client-v4 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-noTSX-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost 
nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-v3 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-v4 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Skylake-Server-v5 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Snowridge 
Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Snowridge-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Snowridge-v2 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Snowridge-v3 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Snowridge-v4 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Westmere Dec 5 04:37:05 localhost nova_compute[229251]: Westmere-IBRS Dec 5 04:37:05 localhost nova_compute[229251]: Westmere-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Westmere-v2 Dec 5 04:37:05 localhost nova_compute[229251]: athlon Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: athlon-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: core2duo Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: core2duo-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost 
nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: coreduo Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: coreduo-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: kvm32 Dec 5 04:37:05 localhost nova_compute[229251]: kvm32-v1 Dec 5 04:37:05 localhost nova_compute[229251]: kvm64 Dec 5 04:37:05 localhost nova_compute[229251]: kvm64-v1 Dec 5 04:37:05 localhost nova_compute[229251]: n270 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: n270-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: pentium Dec 5 04:37:05 localhost nova_compute[229251]: pentium-v1 Dec 5 04:37:05 localhost nova_compute[229251]: pentium2 Dec 5 04:37:05 localhost nova_compute[229251]: pentium2-v1 Dec 5 04:37:05 localhost nova_compute[229251]: pentium3 Dec 5 04:37:05 localhost nova_compute[229251]: pentium3-v1 Dec 5 04:37:05 localhost nova_compute[229251]: phenom Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: phenom-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: qemu32 Dec 5 04:37:05 localhost nova_compute[229251]: qemu32-v1 Dec 5 04:37:05 localhost nova_compute[229251]: qemu64 Dec 5 04:37:05 localhost nova_compute[229251]: qemu64-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: file Dec 5 04:37:05 localhost nova_compute[229251]: anonymous Dec 5 04:37:05 localhost nova_compute[229251]: memfd Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: disk Dec 5 04:37:05 localhost nova_compute[229251]: cdrom Dec 5 04:37:05 localhost nova_compute[229251]: floppy Dec 5 04:37:05 localhost nova_compute[229251]: lun Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: fdc Dec 5 04:37:05 localhost nova_compute[229251]: scsi Dec 5 04:37:05 localhost nova_compute[229251]: virtio Dec 5 04:37:05 localhost nova_compute[229251]: usb Dec 5 04:37:05 localhost nova_compute[229251]: sata Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: virtio Dec 5 04:37:05 localhost nova_compute[229251]: virtio-transitional Dec 5 04:37:05 localhost nova_compute[229251]: virtio-non-transitional Dec 5 04:37:05 localhost 
Dec 5 04:37:05 localhost nova_compute[229251]: [continuation of libvirt domain capabilities XML; element markup lost in this capture — recoverable values follow]
Dec 5 04:37:05 localhost nova_compute[229251]:   graphics types: vnc, egl-headless, dbus
Dec 5 04:37:05 localhost nova_compute[229251]:   hostdev: mode subsystem; startupPolicy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
Dec 5 04:37:05 localhost nova_compute[229251]:   rng models: virtio, virtio-transitional, virtio-non-transitional; backends: random, egd, builtin
Dec 5 04:37:05 localhost nova_compute[229251]:   filesystem driver types: path, handle, virtiofs
Dec 5 04:37:05 localhost nova_compute[229251]:   tpm models: tpm-tis, tpm-crb; backends: emulator, external; backend version: 2.0
Dec 5 04:37:05 localhost nova_compute[229251]:   redirdev bus: usb; channel types: pty, unix
Dec 5 04:37:05 localhost nova_compute[229251]:   crypto: type qemu, backend builtin; interface backends: default, passt; panic models: isa, hyperv
Dec 5 04:37:05 localhost nova_compute[229251]:   character device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Dec 5 04:37:05 localhost nova_compute[229251]:   hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input
Dec 5 04:37:05 localhost nova_compute[229251]:   additional feature values (surrounding markup lost): 4095, on, off, off, Linux KVM Hv, tdx
Dec 5 04:37:05 localhost nova_compute[229251]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.044 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 5 04:37:05 localhost nova_compute[229251]: [domain capabilities XML; markup again lost — recoverable values follow]
Dec 5 04:37:05 localhost nova_compute[229251]:   path: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: x86_64
Dec 5 04:37:05 localhost nova_compute[229251]:   os loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: no
Dec 5 04:37:05 localhost nova_compute[229251]:   cpu mode toggles: on, off; on, off; host-model CPU: EPYC-Rome, vendor AMD
Dec 5 04:37:05 localhost nova_compute[229251]:   custom CPU models: 486, 486-v1,
Dec 5 04:37:05 localhost nova_compute[229251]:   Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4,
Dec 5 04:37:05 localhost nova_compute[229251]:   Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5,
Dec 5 04:37:05 localhost nova_compute[229251]:   Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2,
Dec 5 04:37:05 localhost nova_compute[229251]:   Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2,
Dec 5 04:37:05 localhost nova_compute[229251]:   EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4,
Dec 5 04:37:05 localhost nova_compute[229251]:   GraniteRapids, GraniteRapids-v1, GraniteRapids-v2,
Dec 5 04:37:05 localhost nova_compute[229251]:   Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4,
Dec 5 04:37:05 localhost nova_compute[229251]:   Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7,
Dec 5 04:37:05 localhost nova_compute[229251]:   IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1,
Dec 5 04:37:05 localhost nova_compute[229251]:   Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2,
Dec 5 04:37:05 localhost nova_compute[229251]:   Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1,
Dec 5 04:37:05 localhost nova_compute[229251]:   Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2,
Dec 5 04:37:05 localhost nova_compute[229251]:   SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1,
Dec 5 04:37:05 localhost nova_compute[229251]:   Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4,
Dec 5 04:37:05 localhost nova_compute[229251]:   Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5,
Dec 5 04:37:05 localhost nova_compute[229251]:   Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4,
Dec 5 04:37:05 localhost nova_compute[229251]:   Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1 [entry truncated here in the capture]
5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: core2duo Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: core2duo-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: coreduo Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: coreduo-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: kvm32 Dec 5 04:37:05 localhost nova_compute[229251]: kvm32-v1 Dec 5 04:37:05 localhost nova_compute[229251]: kvm64 Dec 5 04:37:05 localhost nova_compute[229251]: kvm64-v1 Dec 5 04:37:05 localhost nova_compute[229251]: n270 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: n270-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: pentium Dec 5 04:37:05 localhost nova_compute[229251]: pentium-v1 Dec 5 04:37:05 localhost nova_compute[229251]: pentium2 Dec 5 04:37:05 localhost nova_compute[229251]: pentium2-v1 Dec 5 04:37:05 localhost nova_compute[229251]: pentium3 Dec 5 04:37:05 localhost nova_compute[229251]: pentium3-v1 Dec 5 04:37:05 localhost nova_compute[229251]: phenom Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: phenom-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: qemu32 Dec 5 04:37:05 localhost nova_compute[229251]: qemu32-v1 Dec 5 04:37:05 localhost nova_compute[229251]: qemu64 Dec 5 04:37:05 localhost nova_compute[229251]: qemu64-v1 Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: file Dec 5 04:37:05 localhost nova_compute[229251]: anonymous Dec 5 04:37:05 localhost nova_compute[229251]: memfd Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: disk Dec 5 04:37:05 localhost nova_compute[229251]: cdrom Dec 5 04:37:05 localhost nova_compute[229251]: floppy Dec 5 04:37:05 localhost nova_compute[229251]: lun Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost 
nova_compute[229251]: ide Dec 5 04:37:05 localhost nova_compute[229251]: fdc Dec 5 04:37:05 localhost nova_compute[229251]: scsi Dec 5 04:37:05 localhost nova_compute[229251]: virtio Dec 5 04:37:05 localhost nova_compute[229251]: usb Dec 5 04:37:05 localhost nova_compute[229251]: sata Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: virtio Dec 5 04:37:05 localhost nova_compute[229251]: virtio-transitional Dec 5 04:37:05 localhost nova_compute[229251]: virtio-non-transitional Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: vnc Dec 5 04:37:05 localhost nova_compute[229251]: egl-headless Dec 5 04:37:05 localhost nova_compute[229251]: dbus Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: subsystem Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: default Dec 5 04:37:05 localhost nova_compute[229251]: mandatory Dec 5 04:37:05 localhost nova_compute[229251]: requisite Dec 5 04:37:05 localhost nova_compute[229251]: optional Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: usb Dec 5 04:37:05 localhost nova_compute[229251]: pci Dec 5 04:37:05 localhost nova_compute[229251]: scsi Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: virtio Dec 5 04:37:05 localhost nova_compute[229251]: virtio-transitional Dec 5 04:37:05 localhost nova_compute[229251]: virtio-non-transitional Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: random Dec 5 04:37:05 localhost nova_compute[229251]: egd Dec 5 04:37:05 localhost nova_compute[229251]: builtin Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: path Dec 5 04:37:05 localhost nova_compute[229251]: handle Dec 5 04:37:05 localhost nova_compute[229251]: virtiofs Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: tpm-tis Dec 5 04:37:05 localhost nova_compute[229251]: tpm-crb Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: emulator Dec 5 04:37:05 localhost nova_compute[229251]: external Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: 2.0 Dec 5 04:37:05 localhost 
nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: usb Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: pty Dec 5 04:37:05 localhost nova_compute[229251]: unix Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: qemu Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: builtin Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: default Dec 5 04:37:05 localhost nova_compute[229251]: passt Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: isa Dec 5 04:37:05 localhost nova_compute[229251]: hyperv Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: null Dec 5 04:37:05 localhost nova_compute[229251]: vc Dec 5 04:37:05 localhost nova_compute[229251]: pty Dec 5 04:37:05 localhost nova_compute[229251]: dev Dec 5 04:37:05 localhost nova_compute[229251]: file Dec 5 04:37:05 localhost nova_compute[229251]: pipe Dec 5 04:37:05 localhost nova_compute[229251]: stdio Dec 5 04:37:05 localhost nova_compute[229251]: udp Dec 5 04:37:05 localhost nova_compute[229251]: tcp Dec 5 04:37:05 localhost nova_compute[229251]: unix Dec 5 04:37:05 localhost nova_compute[229251]: qemu-vdagent Dec 5 04:37:05 localhost nova_compute[229251]: dbus Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: relaxed Dec 5 04:37:05 localhost nova_compute[229251]: vapic Dec 5 04:37:05 localhost nova_compute[229251]: spinlocks Dec 5 04:37:05 localhost nova_compute[229251]: vpindex Dec 5 04:37:05 localhost nova_compute[229251]: runtime Dec 5 04:37:05 localhost nova_compute[229251]: synic Dec 5 04:37:05 localhost nova_compute[229251]: stimer Dec 5 04:37:05 localhost nova_compute[229251]: reset Dec 5 04:37:05 localhost nova_compute[229251]: vendor_id Dec 5 
04:37:05 localhost nova_compute[229251]: frequencies Dec 5 04:37:05 localhost nova_compute[229251]: reenlightenment Dec 5 04:37:05 localhost nova_compute[229251]: tlbflush Dec 5 04:37:05 localhost nova_compute[229251]: ipi Dec 5 04:37:05 localhost nova_compute[229251]: avic Dec 5 04:37:05 localhost nova_compute[229251]: emsr_bitmap Dec 5 04:37:05 localhost nova_compute[229251]: xmm_input Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: 4095 Dec 5 04:37:05 localhost nova_compute[229251]: on Dec 5 04:37:05 localhost nova_compute[229251]: off Dec 5 04:37:05 localhost nova_compute[229251]: off Dec 5 04:37:05 localhost nova_compute[229251]: Linux KVM Hv Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: tdx Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: Dec 5 04:37:05 localhost nova_compute[229251]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.107 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.108 229255 INFO nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Secure Boot support detected#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.111 229255 INFO nova.virt.libvirt.driver [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.112 229255 INFO nova.virt.libvirt.driver [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.126 229255 DEBUG nova.virt.libvirt.driver [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.155 229255 INFO nova.virt.node [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Determined node identity 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from /var/lib/nova/compute_id#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.181 229255 DEBUG nova.compute.manager [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Verified node 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 matches my host np0005546419.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.228 229255 DEBUG nova.compute.manager [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] [instance: 
96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.233 229255 DEBUG nova.virt.libvirt.vif [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T08:35:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=,hidden=False,host='np0005546419.localdomain',hostname='test',id=2,image_ref='e7469c27-9043-4bd0-b0a4-5b489dcf3ae2',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-05T08:35:39Z,launched_on='np0005546419.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=,node='np0005546419.localdomain',numa_topology=None,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='e6ca8a92050741d3a93772e6c1b0d704',ramdisk_id='',reservation_id='r-99d0dddi',resources=,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,trusted_certs=,updated_at=2025-12-05T08:35:39Z,user_data=None,user_id='52d0a54dc45b4c4caaba721ba3202150',uuid=96a47a1c-57c7-4bb1-aecc-33db976db8c7,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.234 229255 DEBUG nova.network.os_vif_util [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Converting VIF {"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": 
"192.168.0.1"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.235 229255 DEBUG nova.network.os_vif_util [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.236 229255 DEBUG os_vif [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.283 229255 DEBUG ovsdbapp.backend.ovs_idl [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.283 229255 DEBUG ovsdbapp.backend.ovs_idl [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.284 229255 DEBUG ovsdbapp.backend.ovs_idl [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.284 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.284 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.285 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.285 229255 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.286 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.290 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.305 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.305 229255 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.305 229255 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.306 229255 INFO oslo.privsep.daemon [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp6u1sa3tf/privsep.sock']#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.623 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:05 localhost python3.9[229421]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None 
healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.911 229255 INFO oslo.privsep.daemon [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.803 229422 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.806 229422 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.807 229422 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Dec 5 04:37:05 localhost nova_compute[229251]: 2025-12-05 09:37:05.807 229422 INFO oslo.privsep.daemon [-] privsep daemon running as pid 229422#033[00m Dec 5 04:37:06 localhost systemd[1]: Started libpod-conmon-cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50.scope. Dec 5 04:37:06 localhost systemd[1]: Started libcrun container. 
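[editor's note] The entries above show nova-compute bootstrapping an oslo.privsep daemon: the unprivileged service runs `sudo nova-rootwrap ... privsep-helper --privsep_context vif_plug_ovs.privsep.vif_plug`, and the helper reports back as pid 229422 holding exactly CAP_DAC_OVERRIDE|CAP_NET_ADMIN. A minimal sketch of how such a context is declared follows; the context name mirrors the string in the log, but the module body is an assumption, not the installed os-vif code.

    # Sketch of an oslo.privsep context like vif_plug_ovs.privsep.vif_plug.
    # The capability list matches what the daemon logged above
    # (eff/prm: CAP_DAC_OVERRIDE|CAP_NET_ADMIN); the entrypoint is illustrative.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    vif_plug = priv_context.PrivContext(
        "vif_plug_ovs",                          # root of the config namespace
        cfg_section="vif_plug_ovs_privileged",   # oslo.config section to read
        pypath=__name__ + ".vif_plug",           # import path of this context
        capabilities=[caps.CAP_DAC_OVERRIDE,
                      caps.CAP_NET_ADMIN],
    )

    @vif_plug.entrypoint
    def plug_device(devname: str) -> None:
        # Runs inside the privsep daemon (uid 0, spawned via rootwrap); the
        # caller in nova-compute only ships the arguments across the privsep
        # unix socket (the /tmp/.../privsep.sock path seen in the log).
        ...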
Dec 5 04:37:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e23e5f0e8d03624c8af001fbbfc14f4c044253041c1b764fabd86b74ae8deb7/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Dec 5 04:37:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e23e5f0e8d03624c8af001fbbfc14f4c044253041c1b764fabd86b74ae8deb7/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Dec 5 04:37:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e23e5f0e8d03624c8af001fbbfc14f4c044253041c1b764fabd86b74ae8deb7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 04:37:06 localhost podman[229448]: 2025-12-05 09:37:06.105515415 +0000 UTC m=+0.137888163 container init cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:37:06 localhost podman[229448]: 2025-12-05 09:37:06.119190296 +0000 UTC m=+0.151563074 container start cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 5 04:37:06 localhost python3.9[229421]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start 
nova_compute_init Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Applying nova statedir ownership Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/96a47a1c-57c7-4bb1-aecc-33db976db8c7/ Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/96a47a1c-57c7-4bb1-aecc-33db976db8c7 already 42436:42436 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/96a47a1c-57c7-4bb1-aecc-33db976db8c7 to system_u:object_r:container_file_t:s0 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/96a47a1c-57c7-4bb1-aecc-33db976db8c7/console.log Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/_base already 42436:42436 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/_base to system_u:object_r:container_file_t:s0 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/c71d7fb936a828c57128ce72f168800f43e32781 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ephemeral_1_0706d66 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/ Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/locks already 42436:42436 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/locks to system_u:object_r:container_file_t:s0 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-c71d7fb936a828c57128ce72f168800f43e32781 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-ephemeral_1_0706d66 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Dec 5 04:37:06 localhost nova_compute_init[229471]: 
INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673 Dec 5 04:37:06 localhost nova_compute_init[229471]: INFO:nova_statedir:Nova statedir ownership complete Dec 5 04:37:06 localhost systemd[1]: libpod-cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50.scope: Deactivated successfully. 
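[editor's note] The nova_compute_init lines above are the statedir-ownership pass: everything under /var/lib/nova is driven to 42436:42436 (the nova uid/gid inside the container), the path named in NOVA_STATEDIR_OWNERSHIP_SKIP (/var/lib/nova/compute_id) is left alone, and directories get the system_u:object_r:container_file_t:s0 SELinux context. A rough reconstruction of that loop, assuming only what the log shows; this is not the shipped /sbin/nova_statedir_ownership.py.

    # Rough reconstruction of the ownership pass logged by nova_compute_init.
    import logging
    import os
    import subprocess

    logging.basicConfig(level=logging.INFO)   # default format: INFO:name:msg
    LOG = logging.getLogger("nova_statedir")

    TARGET_UID = TARGET_GID = 42436
    STATEDIR = "/var/lib/nova"
    SKIP = os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "")
    CONTEXT = "system_u:object_r:container_file_t:s0"

    def check(path, set_context=False):
        if path == SKIP:
            return
        st = os.lstat(path)
        LOG.info("Checking uid: %s gid: %s path: %s", st.st_uid, st.st_gid, path)
        if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
            LOG.info("Changing ownership of %s from %s:%s to %s:%s", path,
                     st.st_uid, st.st_gid, TARGET_UID, TARGET_GID)
            os.lchown(path, TARGET_UID, TARGET_GID)
        if set_context:
            LOG.info("Setting selinux context of %s to %s", path, CONTEXT)
            subprocess.run(["chcon", CONTEXT, path], check=False)

    LOG.info("Applying nova statedir ownership")
    LOG.info("Target ownership for %s: %s:%s", STATEDIR, TARGET_UID, TARGET_GID)
    for dirpath, dirnames, filenames in os.walk(STATEDIR):
        check(dirpath, set_context=True)
        for name in filenames:
            check(os.path.join(dirpath, name))
    LOG.info("Nova statedir ownership complete")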
Dec 5 04:37:06 localhost podman[229472]: 2025-12-05 09:37:06.198314129 +0000 UTC m=+0.060372228 container died cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.202 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.203 229255 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2f95d81-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.203 229255 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2f95d81-23, col_values=(('external_ids', {'iface-id': 'c2f95d81-2317-46b9-8146-596eac8f9acb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:e6:3a', 'vm-uuid': '96a47a1c-57c7-4bb1-aecc-33db976db8c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.204 229255 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.205 229255 INFO os_vif [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23')#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.206 229255 DEBUG nova.compute.manager [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.212 229255 DEBUG 
nova.compute.manager [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.213 229255 INFO nova.compute.manager [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Dec 5 04:37:06 localhost podman[229485]: 2025-12-05 09:37:06.270600013 +0000 UTC m=+0.073304697 container cleanup cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:37:06 localhost systemd[1]: libpod-conmon-cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50.scope: Deactivated successfully. 
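[editor's note] The ovsdbapp transactions above are os-vif's idempotent plug: AddBridgeCommand(br-int, may_exist=True, datapath_type=system), then AddPortCommand for tapc2f95d81-23 plus a DbSetCommand stamping the Neutron port id, MAC and instance UUID into the Interface's external_ids. Both commit as "Transaction caused no change" because the bridge and port already exist. A sketch of issuing the same transaction directly with ovsdbapp; the tcp:127.0.0.1:6640 endpoint and all values are taken from the log, the surrounding wiring is illustrative.

    # Sketch: replaying the logged os-vif OVSDB transaction with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        "iface-id": "c2f95d81-2317-46b9-8146-596eac8f9acb",   # Neutron port id
        "iface-status": "active",
        "attached-mac": "fa:16:3e:04:e6:3a",
        "vm-uuid": "96a47a1c-57c7-4bb1-aecc-33db976db8c7",
    }
    # may_exist=True is why the re-plug on startup commits as
    # "Transaction caused no change" when everything is already wired up.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapc2f95d81-23", may_exist=True))
        txn.add(api.db_set("Interface", "tapc2f95d81-23",
                           ("external_ids", external_ids)))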
Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.279 229255 DEBUG oslo_concurrency.lockutils [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.280 229255 DEBUG oslo_concurrency.lockutils [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.280 229255 DEBUG oslo_concurrency.lockutils [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.280 229255 DEBUG nova.compute.resource_tracker [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.281 229255 DEBUG oslo_concurrency.processutils [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.733 229255 DEBUG oslo_concurrency.processutils [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.803 229255 DEBUG nova.virt.libvirt.driver [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:37:06 localhost nova_compute[229251]: 2025-12-05 09:37:06.803 229255 DEBUG nova.virt.libvirt.driver [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.013 229255 WARNING nova.virt.libvirt.driver [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.014 229255 DEBUG nova.compute.resource_tracker [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12910MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.014 229255 DEBUG oslo_concurrency.lockutils [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.015 229255 DEBUG oslo_concurrency.lockutils [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:37:07 localhost systemd[1]: var-lib-containers-storage-overlay-2e23e5f0e8d03624c8af001fbbfc14f4c044253041c1b764fabd86b74ae8deb7-merged.mount: Deactivated successfully. Dec 5 04:37:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50-userdata-shm.mount: Deactivated successfully. 
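[editor's note] The resource view above (8 vCPUs, 15738 MB RAM, 41 GB disk, one guest holding 1 VCPU / 512 MB / 2 GB) feeds the placement inventory refresh that follows below. Placement derives schedulable capacity as (total - reserved) * allocation_ratio, so the inventory entries in the next lines work out as in this quick check:

    # Quick check of the capacity implied by the inventory reported below.
    # Formula placement applies: capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {cap:g}")
    # -> VCPU: 128, MEMORY_MB: 15226, DISK_GB: 40; the single running guest
    #    (1 VCPU / 512 MB / 2 GB) leaves ample headroom.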
Dec 5 04:37:07 localhost systemd[1]: session-53.scope: Deactivated successfully. Dec 5 04:37:07 localhost systemd[1]: session-53.scope: Consumed 2min 8.566s CPU time. Dec 5 04:37:07 localhost systemd-logind[760]: Session 53 logged out. Waiting for processes to exit. Dec 5 04:37:07 localhost systemd-logind[760]: Removed session 53. Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.168 229255 DEBUG nova.compute.resource_tracker [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.169 229255 DEBUG nova.compute.resource_tracker [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.169 229255 DEBUG nova.compute.resource_tracker [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.233 229255 DEBUG nova.scheduler.client.report [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Refreshing inventories for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.255 229255 DEBUG nova.scheduler.client.report [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Updating ProviderTree inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.256 229255 DEBUG nova.compute.provider_tree [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.287 229255 DEBUG nova.scheduler.client.report [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] 
Refreshing aggregate associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.331 229255 DEBUG nova.scheduler.client.report [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Refreshing trait associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,HW_CPU_X86_F16C,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.372 229255 DEBUG oslo_concurrency.processutils [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.823 229255 DEBUG oslo_concurrency.processutils [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.830 229255 DEBUG nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Dec 5 04:37:07 localhost nova_compute[229251]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.830 229255 INFO nova.virt.libvirt.host [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] kernel doesn't support AMD SEV#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.832 229255 DEBUG nova.compute.provider_tree [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.832 229255 DEBUG nova.virt.libvirt.driver [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.866 229255 DEBUG nova.scheduler.client.report [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.893 229255 DEBUG nova.compute.resource_tracker [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.893 229255 DEBUG oslo_concurrency.lockutils [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.893 229255 DEBUG nova.service [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.922 229255 DEBUG nova.service [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Dec 5 04:37:07 localhost nova_compute[229251]: 2025-12-05 09:37:07.922 229255 DEBUG nova.servicegroup.drivers.db [None req-793ab0d0-accf-4ecf-a004-7a53e04167b7 - - - - - -] DB_Driver: join new ServiceGroup member np0005546419.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Dec 5 04:37:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:37:10 localhost systemd[1]: tmp-crun.lq6Cac.mount: Deactivated successfully. 
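The inventory payload above reconciles with the "Final resource view" numbers once placement's capacity formula is applied: schedulable capacity per resource class is (total - reserved) * allocation_ratio. A quick check using the exact figures from the log:

    # Capacity check for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3,
    # using the inventory dict logged by nova.scheduler.client.report.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 41, 'reserved': 1, 'allocation_ratio': 1.0},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)
    # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0 -- which is why the single
    # 1-vCPU/512MB/2GB instance leaves free_vcpus=7 in the hypervisor view
    # while placement still has 16x vCPU headroom.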
Dec 5 04:37:10 localhost podman[229574]: 2025-12-05 09:37:10.205695484 +0000 UTC m=+0.088266506 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:37:10 localhost podman[229574]: 2025-12-05 09:37:10.245665008 +0000 UTC m=+0.128235999 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 5 04:37:10 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
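The three systemd/podman lines above are one complete healthcheck cycle: a transient unit execs "podman healthcheck run <id>", the configured test (/openstack/healthcheck, per the container's config_data) runs inside the container, and the unit deactivates on success. A way to reproduce the same observation by hand, assuming a recent podman (older releases expose the field as .State.Healthcheck.Status instead):

    # Sketch: drive one healthcheck run and read back the recorded status.
    import subprocess

    cid = "6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857"
    subprocess.run(["podman", "healthcheck", "run", cid], check=True)  # rc 0 == healthy
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", cid],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expect "healthy", matching health_status above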
Dec 5 04:37:10 localhost nova_compute[229251]: 2025-12-05 09:37:10.288 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:37:10 localhost nova_compute[229251]: 2025-12-05 09:37:10.625 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:37:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5217 DF PROTO=TCP SPT=32914 DPT=9102 SEQ=11812726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5045C60000000001030307)
Dec 5 04:37:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5218 DF PROTO=TCP SPT=32914 DPT=9102 SEQ=11812726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5049C50000000001030307)
Dec 5 04:37:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43901 DF PROTO=TCP SPT=58662 DPT=9105 SEQ=2822481205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC504A450000000001030307)
Dec 5 04:37:12 localhost sshd[229599]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:37:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:37:12 localhost systemd-logind[760]: New session 55 of user zuul.
Dec 5 04:37:12 localhost systemd[1]: Started Session 55 of User zuul.
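The kernel "DROPPING:" records are netfilter LOG output (a rule carrying a "DROPPING: " log prefix), here catching repeated TCP SYNs from 192.168.122.10 to exporter-style ports (9100-9105, 9882) on br-ex -- plausibly Prometheus-style scrapes arriving before the telemetry firewall configuration applied later in this run. The key=value payload is trivially machine-parseable:

    # Parse one DROPPING line into a dict (sample abbreviated from the log).
    import re

    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "MACDST=fa:16:3e:c7:7c:3e SRC=192.168.122.10 DST=192.168.122.106 "
            "LEN=60 TTL=62 ID=5217 PROTO=TCP SPT=32914 DPT=9102 SYN")
    fields = dict(re.findall(r"(\w+)=(\S+)", line))
    print(fields["SRC"], "->", fields["DST"], "dport", fields["DPT"])
    # 192.168.122.10 -> 192.168.122.106 dport 9102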
Dec 5 04:37:12 localhost podman[229602]: 2025-12-05 09:37:12.791587217 +0000 UTC m=+0.107073702 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 04:37:12 localhost podman[229602]: 2025-12-05 09:37:12.826768526 +0000 UTC m=+0.142255041 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 5 04:37:12 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:37:12 localhost nova_compute[229251]: 2025-12-05 09:37:12.926 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:37:12 localhost nova_compute[229251]: 2025-12-05 09:37:12.957 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Triggering sync for uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 5 04:37:12 localhost nova_compute[229251]: 2025-12-05 09:37:12.957 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:37:12 localhost nova_compute[229251]: 2025-12-05 09:37:12.957 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:37:12 localhost nova_compute[229251]: 2025-12-05 09:37:12.958 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:37:12 localhost nova_compute[229251]: 2025-12-05 09:37:12.992 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:37:13 localhost python3.9[229728]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:37:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4279 DF PROTO=TCP SPT=43250 DPT=9100 SEQ=2951674772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5051A70000000001030307) Dec 5 04:37:15 localhost nova_compute[229251]: 2025-12-05 09:37:15.290 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:15 localhost python3.9[229842]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None 
force=None masked=None
Dec 5 04:37:15 localhost systemd[1]: Reloading.
Dec 5 04:37:15 localhost systemd-sysv-generator[229871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:37:15 localhost systemd-rc-local-generator[229867]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:37:15 localhost nova_compute[229251]: 2025-12-05 09:37:15.628 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:37:15 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:15 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:15 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:15 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:37:15 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:15 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:15 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:15 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:16 localhost python3.9[229986]: ansible-ansible.builtin.service_facts Invoked
Dec 5 04:37:16 localhost network[230003]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 5 04:37:16 localhost network[230004]: 'network-scripts' will be removed from distribution in near future.
Dec 5 04:37:16 localhost network[230005]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 5 04:37:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4281 DF PROTO=TCP SPT=43250 DPT=9100 SEQ=2951674772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC505DC50000000001030307)
Dec 5 04:37:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
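The parse failures above are version skew, not broken units: Type=notify-reload only exists in newer systemd (v253 and later), so the systemd on this host ignores the directive, and MemoryLimit= is the deprecated cgroup-v1 spelling that MemoryMax= replaces. A small read-only scan to enumerate which unit files carry the directives systemd is warning about (paths as logged):

    # List unit files using directives this systemd version warns about.
    from pathlib import Path

    for unit in sorted(Path("/usr/lib/systemd/system").glob("*.service")):
        for n, text in enumerate(unit.read_text(errors="replace").splitlines(), 1):
            if text.startswith(("Type=notify-reload", "MemoryLimit=")):
                print(f"{unit}:{n}: {text}")  # e.g. virtqemud.service:25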
Dec 5 04:37:20 localhost nova_compute[229251]: 2025-12-05 09:37:20.291 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26608 DF PROTO=TCP SPT=56576 DPT=9101 SEQ=1972610341 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5069850000000001030307) Dec 5 04:37:20 localhost nova_compute[229251]: 2025-12-05 09:37:20.635 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:21 localhost python3.9[230240]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:37:23 localhost python3.9[230351]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:23 localhost systemd-journald[47252]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 76.3 (254 of 333 items), suggesting rotation. Dec 5 04:37:23 localhost systemd-journald[47252]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 5 04:37:23 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 04:37:23 localhost rsyslogd[758]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 04:37:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53775 DF PROTO=TCP SPT=55438 DPT=9101 SEQ=3208807916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5076450000000001030307) Dec 5 04:37:24 localhost python3.9[230462]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:25 localhost nova_compute[229251]: 2025-12-05 09:37:25.293 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:25 localhost nova_compute[229251]: 2025-12-05 09:37:25.635 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:25 localhost python3.9[230572]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:37:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26610 DF PROTO=TCP SPT=56576 DPT=9101 SEQ=1972610341 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5081450000000001030307) Dec 5 04:37:26 localhost python3.9[230682]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 5 04:37:27 localhost python3.9[230792]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:37:27 localhost systemd[1]: Reloading. Dec 5 04:37:27 localhost systemd-sysv-generator[230817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:37:27 localhost systemd-rc-local-generator[230813]: /etc/rc.d/rc.local is not marked executable, skipping. 
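The records from 04:37:21 through 04:37:28 form a unit-retirement sequence: stop and disable tripleo_ceilometer_agent_compute.service, delete its unit file from both /usr/lib/systemd/system and /etc/systemd/system, daemon-reload, then (logged just below) systemctl reset-failed to clear any residual failed state. A rough command-level equivalent of what those Ansible tasks do, assuming the same unit name:

    # Sketch of the logged cleanup sequence, driven directly via systemctl.
    import subprocess
    from pathlib import Path

    unit = "tripleo_ceilometer_agent_compute.service"
    subprocess.run(["systemctl", "disable", "--now", unit], check=False)
    for d in ("/usr/lib/systemd/system", "/etc/systemd/system"):
        Path(d, unit).unlink(missing_ok=True)
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "reset-failed", unit], check=False)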
Dec 5 04:37:27 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:27 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:27 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:27 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:37:27 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:27 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:27 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:27 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:37:28 localhost python3.9[230937]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:37:29 localhost python3.9[231048]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 5 04:37:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4283 DF PROTO=TCP SPT=43250 DPT=9100 SEQ=2951674772 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC508E460000000001030307)
Dec 5 04:37:30 localhost python3.9[231156]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 5 04:37:30 localhost nova_compute[229251]: 2025-12-05 09:37:30.295 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:37:30 localhost nova_compute[229251]: 2025-12-05 09:37:30.667 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:37:30 localhost python3.9[231266]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:37:31 localhost python3.9[231352]: ansible-ansible.legacy.copy Invoked with
dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927450.3500028-359-182825852734967/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=d91e272d7999cff44eef71642a0d81ac747d6d7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:37:32 localhost python3.9[231462]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None Dec 5 04:37:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:37:33 localhost systemd[1]: tmp-crun.05D32F.mount: Deactivated successfully. Dec 5 04:37:33 localhost podman[231537]: 2025-12-05 09:37:33.374570077 +0000 UTC m=+0.256677643 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2) Dec 5 04:37:33 localhost podman[231537]: 2025-12-05 09:37:33.389559227 +0000 UTC m=+0.271666793 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2) Dec 5 04:37:33 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:37:33 localhost python3.9[231583]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None Dec 5 04:37:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27563 DF PROTO=TCP SPT=51606 DPT=9105 SEQ=3096518751 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC509FC60000000001030307) Dec 5 04:37:34 localhost python3.9[231704]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Dec 5 04:37:35 localhost nova_compute[229251]: 2025-12-05 09:37:35.297 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9543 DF PROTO=TCP SPT=33716 DPT=9882 SEQ=2405688201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC50A4450000000001030307) Dec 5 04:37:35 localhost nova_compute[229251]: 2025-12-05 09:37:35.666 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:36 localhost python3.9[231820]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546419.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Dec 5 04:37:38 localhost 
python3.9[231936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:38 localhost python3.9[232022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764927457.9576507-563-201638078229357/.source.conf _original_basename=ceilometer.conf follow=False checksum=a081cd36784ea0c14c85c5a4c92e4b020883418d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:39 localhost python3.9[232130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:39 localhost python3.9[232216]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764927458.9901416-563-79513047633/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:40 localhost nova_compute[229251]: 2025-12-05 09:37:40.300 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:40 localhost python3.9[232324]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:40 localhost nova_compute[229251]: 2025-12-05 09:37:40.668 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:41 localhost python3.9[232410]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764927460.1179247-563-213509319439505/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. 
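Each rendered config lands with an Ansible-reported SHA-1 (checksum_algorithm=sha1 in the stat calls above). Verifying a deployed file against the logged digest is a one-liner; the path and digest here are the ceilometer.conf pair from the copy task just above:

    # Confirm the deployed file matches the checksum Ansible logged.
    import hashlib
    from pathlib import Path

    data = Path("/var/lib/openstack/config/telemetry/ceilometer.conf").read_bytes()
    print(hashlib.sha1(data).hexdigest()
          == "a081cd36784ea0c14c85c5a4c92e4b020883418d")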
Dec 5 04:37:41 localhost podman[232428]: 2025-12-05 09:37:41.203304112 +0000 UTC m=+0.073960035 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Dec 5 04:37:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20974 DF PROTO=TCP SPT=39384 DPT=9102 SEQ=1314178801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC50BAF60000000001030307) Dec 5 04:37:41 localhost podman[232428]: 2025-12-05 09:37:41.27366773 +0000 UTC m=+0.144323703 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:37:41 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
Dec 5 04:37:41 localhost python3.9[232545]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:37:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20975 DF PROTO=TCP SPT=39384 DPT=9102 SEQ=1314178801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC50BF050000000001030307) Dec 5 04:37:42 localhost python3.9[232653]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:37:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:37:42 localhost python3.9[232778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:42 localhost podman[232798]: 2025-12-05 09:37:42.987558789 +0000 UTC m=+0.094912926 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:37:43 localhost podman[232798]: 2025-12-05 09:37:43.019584713 +0000 UTC m=+0.126938870 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:37:43 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:37:43 localhost python3.9[232920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927462.5100534-740-71555068914186/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:43 localhost podman[233030]: 2025-12-05 09:37:43.728022616 +0000 UTC m=+0.086291867 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, version=7, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 
9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc.) Dec 5 04:37:43 localhost podman[233030]: 2025-12-05 09:37:43.832643393 +0000 UTC m=+0.190912674 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, distribution-scope=public, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=7, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True) Dec 5 04:37:44 localhost python3.9[233103]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33988 DF PROTO=TCP SPT=42082 DPT=9100 SEQ=3489403445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC50C6D70000000001030307) Dec 5 04:37:44 localhost python3.9[233226]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:45 localhost python3.9[233367]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:45 localhost nova_compute[229251]: 2025-12-05 09:37:45.303 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:45 localhost nova_compute[229251]: 2025-12-05 09:37:45.700 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:46 localhost python3.9[233483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json 
mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927464.5832565-740-45203475991831/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:46 localhost python3.9[233591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:47 localhost python3.9[233677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927466.254096-740-77376708397667/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33990 DF PROTO=TCP SPT=42082 DPT=9100 SEQ=3489403445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC50D2C50000000001030307) Dec 5 04:37:48 localhost python3.9[233785]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:48 localhost python3.9[233871]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927467.8203104-740-192467717824815/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:49 localhost python3.9[233979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:49 localhost python3.9[234065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927468.8873355-740-264784449214634/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:50 localhost nova_compute[229251]: 2025-12-05 09:37:50.304 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:50 localhost python3.9[234173]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46831 DF PROTO=TCP SPT=52386 DPT=9101 SEQ=1444485537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC50DEC50000000001030307) Dec 5 04:37:50 localhost nova_compute[229251]: 2025-12-05 09:37:50.703 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:50 localhost python3.9[234259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927469.9675834-740-222888603609091/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:51 localhost python3.9[234367]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:51 localhost python3.9[234453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927470.9711504-740-186642835904596/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:52 localhost python3.9[234561]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:52 localhost python3.9[234647]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927472.0191417-740-43292611617964/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60673 DF PROTO=TCP SPT=50190 DPT=9101 SEQ=2778537623 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080AEC50EA450000000001030307) Dec 5 04:37:53 localhost python3.9[234755]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:53 localhost python3.9[234841]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927473.0509648-740-208805643918428/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:54 localhost python3.9[234949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:37:55 localhost python3.9[235035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927474.1076295-740-153384506460255/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:37:55 localhost nova_compute[229251]: 2025-12-05 09:37:55.305 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:55 localhost nova_compute[229251]: 2025-12-05 09:37:55.706 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:37:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20978 DF PROTO=TCP SPT=39384 DPT=9102 SEQ=1314178801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC50F6450000000001030307) Dec 5 04:37:56 localhost python3.9[235145]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:37:57 localhost python3.9[235255]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:37:57 localhost systemd[1]: Reloading. Dec 5 04:37:58 localhost systemd-sysv-generator[235287]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:37:58 localhost systemd-rc-local-generator[235280]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:37:58 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:37:58 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:37:58 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:37:58 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:37:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:37:58 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:37:58 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:37:58 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:37:58 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:37:58 localhost systemd[1]: Listening on Podman API Socket. Dec 5 04:37:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33992 DF PROTO=TCP SPT=42082 DPT=9100 SEQ=3489403445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5102450000000001030307) Dec 5 04:37:59 localhost python3.9[235405]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:38:00 localhost python3.9[235493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927478.6687286-1256-163264109484644/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 5 04:38:00 localhost nova_compute[229251]: 2025-12-05 09:38:00.307 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:38:00 localhost python3.9[235548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:38:00 localhost nova_compute[229251]: 2025-12-05 09:38:00.733 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:38:01 localhost 
python3.9[235636]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927478.6687286-1256-163264109484644/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 5 04:38:02 localhost python3.9[235746]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False Dec 5 04:38:03 localhost python3.9[235856]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 5 04:38:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:38:03.883 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:38:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:38:03.884 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:38:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:38:03.885 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:38:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
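The recurring kernel "DROPPING:" entries in this stretch (TCP SYNs from 192.168.122.10 to ports 9100, 9101 and 9102 on br-ex, with the same SEQ reappearing) are netfilter log output: a rule carrying the log prefix "DROPPING: " fires for scrape attempts against exporter ports (9100 is conventionally node_exporter) that are still filtered. A minimal sketch of a rule pair that would produce such lines, assuming iptables; the chain, interface and port match are illustrative, only the "DROPPING: " prefix comes from the log itself:

    # log-then-drop sketch; everything except the "DROPPING: " prefix is assumed
    iptables -A FORWARD -i br-ex -p tcp --dport 9100 -m conntrack --ctstate NEW \
        -j LOG --log-prefix "DROPPING: " --log-level warning
    iptables -A FORWARD -i br-ex -p tcp --dport 9100 -j DROP

The repeated SEQ values (e.g. 3489403445 at 04:37:44, 04:37:47 and 04:37:59) are consistent with ordinary TCP retransmission backoff while the port stays filtered.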
Dec 5 04:38:04 localhost podman[235966]: 2025-12-05 09:38:04.087692483 +0000 UTC m=+0.093473973 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 04:38:04 localhost podman[235966]: 2025-12-05 09:38:04.104818768 +0000 UTC m=+0.110600258 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 04:38:04 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:38:04 localhost python3[235967]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False Dec 5 04:38:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28704 DF PROTO=TCP SPT=35932 DPT=9105 SEQ=2936410983 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5115060000000001030307) Dec 5 04:38:04 localhost nova_compute[229251]: 2025-12-05 09:38:04.287 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:38:04 localhost nova_compute[229251]: 2025-12-05 09:38:04.287 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:38:04 localhost nova_compute[229251]: 2025-12-05 09:38:04.287 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 04:38:04 localhost nova_compute[229251]: 2025-12-05 09:38:04.288 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 04:38:04 localhost python3[235967]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "343ba269c9fe0a56d7572c8ca328dbce002017c4dd4986f43667971dd03085c2",#012 "Digest": "sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:21:53.58682213Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 
"Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 505175293,#012 "VirtualSize": 505175293,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:a47016624274f5ebad76019f5a2e465c1737f96caa539b36f90ab8e33592f415",#012 "sha256:38a03f5e96658211fb28e2f87c11ffad531281d1797368f48e6cd4af7ac97c0e"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": 
"2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 Dec 5 04:38:04 localhost podman[236037]: 2025-12-05 09:38:04.646050291 +0000 UTC m=+0.076839573 container remove 21faa02c4037b3c61eeb7d2e32935872e7a145321aa4202b11257eed23be073d (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36d3201998d10321ffa6261c2854a42f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64) Dec 5 04:38:04 localhost python3[235967]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute Dec 5 04:38:04 localhost podman[236051]: Dec 5 04:38:04 localhost podman[236051]: 2025-12-05 09:38:04.748564075 +0000 UTC m=+0.081577065 container create 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 5 04:38:04 localhost podman[236051]: 2025-12-05 09:38:04.710365916 +0000 UTC m=+0.043378936 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Dec 5 04:38:04 localhost python3[235967]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start Dec 5 04:38:05 localhost nova_compute[229251]: 2025-12-05 09:38:05.309 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:38:05 localhost python3.9[236198]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:38:05 localhost nova_compute[229251]: 2025-12-05 09:38:05.739 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:38:06 localhost nova_compute[229251]: 2025-12-05 09:38:06.005 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:38:06 localhost nova_compute[229251]: 2025-12-05 09:38:06.006 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:38:06 localhost nova_compute[229251]: 2025-12-05 09:38:06.006 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:38:06 localhost nova_compute[229251]: 2025-12-05 09:38:06.006 229255 DEBUG nova.objects.instance [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:38:06 localhost python3.9[236310]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:38:06 localhost python3.9[236419]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927486.364464-1448-245180860586130/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:38:07 localhost python3.9[236474]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:38:07 localhost systemd[1]: Reloading. Dec 5 04:38:07 localhost systemd-rc-local-generator[236496]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:38:07 localhost systemd-sysv-generator[236500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:38:07 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:07 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:07 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:07 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
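The mode=420 in the ansible-ansible.legacy.copy records above is not an oddity: the module reports permission bits in decimal, and 420 decimal is 0644 octal (rw-r--r--), the same value spelled mode=0644 for the unit file copied in this chunk. The stat call logged before each copy is how the module stays idempotent: it checksums the existing destination and skips the write when nothing changed. A quick check, plus an equivalent ad-hoc invocation (src/dest are placeholders):

    printf '%o\n' 420    # -> 644, i.e. the familiar rw-r--r--
    # ad-hoc form of the logged copy task; paths illustrative
    ansible localhost -m ansible.builtin.copy \
        -a "src=node_exporter.json dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=0644"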
Dec 5 04:38:07 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:07 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:07 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:07 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.025 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.056 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.057 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.058 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.059 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.059 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.060 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.061 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.061 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.062 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.062 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.088 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.089 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.089 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.089 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.089 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.534 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.602 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.603 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.842 229255 WARNING nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.844 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12906MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", 
"numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.844 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.845 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.908 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.908 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.909 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:38:08 localhost nova_compute[229251]: 2025-12-05 09:38:08.955 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:38:09 localhost python3.9[236586]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:38:09 localhost systemd[1]: Reloading. Dec 5 04:38:09 localhost systemd-sysv-generator[236636]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:38:09 localhost systemd-rc-local-generator[236631]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 5 04:38:09 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:09 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:09 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:09 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:38:09 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:09 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:09 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:09 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:38:09 localhost nova_compute[229251]: 2025-12-05 09:38:09.449 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:38:09 localhost nova_compute[229251]: 2025-12-05 09:38:09.459 229255 DEBUG nova.compute.provider_tree [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:38:09 localhost systemd[1]: Starting ceilometer_agent_compute container... 
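The block of "Failed to parse service type, ignoring: notify-reload" messages repeated at every daemon-reload is version skew rather than broken unit files: Type=notify-reload was introduced in systemd 253, so the older systemd on this host ignores the line and the virt* services fall back to the default service type. Likewise MemoryLimit= is the legacy cgroup-v1 spelling that newer systemd deprecates in favor of MemoryMax=. Two sketch commands to confirm on such a host (example output assumed):

    systemctl --version | head -n1             # e.g. "systemd 252 (...)": predates the 253 release that added Type=notify-reload
    systemctl show virtqemud.service -p Type   # the service type actually in effect after the setting is ignored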
Dec 5 04:38:09 localhost nova_compute[229251]: 2025-12-05 09:38:09.476 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:38:09 localhost nova_compute[229251]: 2025-12-05 09:38:09.479 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:38:09 localhost nova_compute[229251]: 2025-12-05 09:38:09.479 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:38:09 localhost systemd[1]: Started libcrun container. Dec 5 04:38:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0059f8efe1ba54888794008ca6aebe0b39f572e0c9c611756949b945f13d23/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Dec 5 04:38:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0059f8efe1ba54888794008ca6aebe0b39f572e0c9c611756949b945f13d23/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Dec 5 04:38:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
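The resource-tracker audit above sizes the RBD-backed disk pool by shelling out to ceph df, visible as the two roughly half-second subprocess calls. The same probe can be run by hand and the JSON summarized; the jq filter is illustrative, and the command needs the openstack keyring referenced by --id to be readable:

    ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf \
        | jq '{total_bytes: .stats.total_bytes, avail_bytes: .stats.total_avail_bytes}'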
Dec 5 04:38:09 localhost podman[236648]: 2025-12-05 09:38:09.620068187 +0000 UTC m=+0.137589641 container init 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: + sudo -E kolla_set_configs Dec 5 04:38:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
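Each "Started /usr/bin/podman healthcheck run <id>" line is systemd firing the transient timer/service pair that podman creates per container, named after the full container ID, to execute the configured test (here '/openstack/healthcheck compute' from the config_data). A sketch for poking at that machinery; the inspect template path varies across podman versions:

    systemctl list-timers --all | grep 6dbb66a1b92a    # the per-container healthcheck timer
    podman healthcheck run ceilometer_agent_compute; echo "exit=$?"
    podman inspect --format '{{.State.Health.Status}}' ceilometer_agent_compute

The health_status=starting in the probe below, and the bare "unhealthy" at the end of this stretch, are consistent with the first probe racing container start-up: it runs while kolla_start is still copying configuration, before the agent has finished initializing.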
Dec 5 04:38:09 localhost podman[236648]: 2025-12-05 09:38:09.648646556 +0000 UTC m=+0.166168000 container start 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute) Dec 5 04:38:09 localhost podman[236648]: ceilometer_agent_compute Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: sudo: unable to send audit message: Operation not permitted Dec 5 04:38:09 localhost systemd[1]: Started ceilometer_agent_compute container. 
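The INFO lines that follow are kolla_set_configs consuming the bind-mounted /var/lib/kolla/config_files/config.json under KOLLA_CONFIG_STRATEGY=COPY_ALWAYS: each listed file is copied from /var/lib/openstack/config/ into /etc/ceilometer, permissions are reset, and the command to exec is written out (read back via 'cat /run_command' in the trace). A minimal sketch of that file's shape, reconstructed from those messages; the owner/perm values are assumptions:

    cat <<'EOF'
    {
      "command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout",
      "config_files": [
        {
          "source": "/var/lib/openstack/config/ceilometer.conf",
          "dest": "/etc/ceilometer/ceilometer.conf",
          "owner": "ceilometer",
          "perm": "0600"
        }
      ]
    }
    EOF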
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Validating config file
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Copying service configuration files
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: INFO:__main__:Writing out command to execute
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: ++ cat /run_command
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: + ARGS=
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: + sudo kolla_copy_cacerts
Dec 5 04:38:09 localhost podman[236668]: 2025-12-05 09:38:09.738618644 +0000 UTC m=+0.086491194 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: sudo: unable to send audit message: Operation not permitted
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: + [[ ! -n '' ]]
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: + . kolla_extend_start
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: + umask 0022
Dec 5 04:38:09 localhost ceilometer_agent_compute[236661]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 5 04:38:09 localhost podman[236668]: 2025-12-05 09:38:09.767354088 +0000 UTC m=+0.115226608 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 5 04:38:09 localhost podman[236668]: unhealthy
Dec 5 04:38:09 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:38:09 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Failed with result 'exit-code'.
Dec 5 04:38:10 localhost nova_compute[229251]: 2025-12-05 09:38:10.312 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.454 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.454 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.454 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.454 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.455 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.455 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.455 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.455 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.455 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.455 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.455 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.455 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.455 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN',
'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.455 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.455 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005546419.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 
2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.456 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.457 2 DEBUG 
cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.458 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.458 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.458 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.458 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.458 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.458 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.458 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.458 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.458 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.458 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.458 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.458 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost 
ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.459 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.460 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.460 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.460 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.460 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.460 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.460 2 DEBUG 
cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.460 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.460 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.460 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.460 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.460 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.460 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 
2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.461 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.462 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.462 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.462 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.462 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.462 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.462 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.462 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.462 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.462 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.462 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.462 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost 
ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.462 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.463 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.463 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.463 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.463 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.463 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.463 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.463 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.463 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.463 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.463 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.463 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.463 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.464 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.464 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 
09:38:10.464 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.464 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.464 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.464 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.464 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.464 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.464 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.464 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.464 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.464 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost 
ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.465 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.466 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.467 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.467 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.467 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.467 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.467 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.467 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.467 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.467 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.467 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.467 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.467 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.467 2 DEBUG cotyledon.oslo_config_glue [-] 
******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.485 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.486 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.487 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.578 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.639 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.640 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.640 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.640 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.640 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.640 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.640 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.640 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.640 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.640 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 
09:38:10.640 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.641 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.641 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.641 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.641 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.641 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.641 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005546419.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.641 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.641 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.641 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.641 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.642 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.643 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.643 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.643 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.643 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.643 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.643 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.643 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.643 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.643 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.643 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.643 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.643 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.644 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.645 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.645 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.645 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.645 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.645 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.645 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.645 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.645 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.645 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.645 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.645 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.645 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.646 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.646 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.646 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.646 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.646 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.646 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.646 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.646 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.646 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.646 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.646 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.646 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.647 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.647 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.647 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.647 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.647 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.647 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.647 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.647 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.647 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.647 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.647 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.647 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.648 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.648 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.648 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.648 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.648 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.648 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.648 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.648 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.648 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.648 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.648 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.649 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.649 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.649 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.649 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.649 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.649 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.649 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.649 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.649 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.649 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.649 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.649 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.650 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.650 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.650 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.650 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.650 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.650 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.650 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.650 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.650 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.650 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.650 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.650 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.651 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.652 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.653 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.653 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.653 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.653 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.653 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.653 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.653 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.653 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.653 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.653 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.653 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.653 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.654 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.655 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.655 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.655 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.655 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.655 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.655 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.655 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.655 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.655 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.655 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.655 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.655 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.656 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.656 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.656 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.656 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.656 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.656 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.656 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.656 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.656 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.656 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.656 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.657 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.657 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.657 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.657 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.657 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.657 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.657 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.657 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.657 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.657 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.657 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.657 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.658 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.658 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.658 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.660 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 5 04:38:10 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:10.667 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 5 04:38:10 localhost nova_compute[229251]: 2025-12-05 09:38:10.742 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:38:11 localhost python3.9[236807]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.037 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}fe255585262db4dbeeae58a99aba8346ddd516f48ab13c07b0a58de018f5c0e4" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Dec 5 04:38:11 localhost systemd[1]: Stopping ceilometer_agent_compute container...
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.130 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.143 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 327 Content-Type: application/json Date: Fri, 05 Dec 2025 09:38:11 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-af0076e5-5511-4a20-bacc-88283834c406 x-openstack-request-id: req-af0076e5-5511-4a20-bacc-88283834c406 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.143 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "bb6181df-1ada-42c2-81f6-896f08302073", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/bb6181df-1ada-42c2-81f6-896f08302073"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/bb6181df-1ada-42c2-81f6-896f08302073"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.143 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-af0076e5-5511-4a20-bacc-88283834c406 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.145 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/bb6181df-1ada-42c2-81f6-896f08302073 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}fe255585262db4dbeeae58a99aba8346ddd516f48ab13c07b0a58de018f5c0e4" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.177 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 494 Content-Type: application/json Date: Fri, 05 Dec 2025 09:38:11 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-8806e7ed-aadc-4ee1-b002-dab8d7f602f1 x-openstack-request-id: req-8806e7ed-aadc-4ee1-b002-dab8d7f602f1 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.177 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "bb6181df-1ada-42c2-81f6-896f08302073", "name": "m1.small", "ram": 512, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 1, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/bb6181df-1ada-42c2-81f6-896f08302073"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/bb6181df-1ada-42c2-81f6-896f08302073"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.178 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/bb6181df-1ada-42c2-81f6-896f08302073 used request id req-8806e7ed-aadc-4ee1-b002-dab8d7f602f1 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.179 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.179 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.217 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 73908224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.218 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36785 DF PROTO=TCP SPT=52434 DPT=9102 SEQ=1182475819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5130260000000001030307)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5cccbfc0-d90a-4787-a136-62b946ca1604', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73908224, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:38:11.180066', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1d304360-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.354475344, 'message_signature': 'f78deba80ddb0a613aa07e3802bbef163e9b7b7bbafeb194f4a48dba1b2152aa'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:38:11.180066', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1d305bde-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.354475344, 'message_signature': '276d9a8ec632cdf42dbbc390c049f9785e076cbcb71253c058182efa789cda19'}]}, 'timestamp': '2025-12-05 09:38:11.218774', '_unique_id': '7646fdc1398c44ff92f99a7f0fe0ba7d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.225 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.229 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.231 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.231 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.231 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.241 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.241 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '633acab3-ed96-4023-b236-91cb9400e1ba', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:38:11.229438', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1d33d778-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.40387076, 'message_signature': 'bbab250ddd383ede4f2f1a81c2a69f3c107d5993db925a1acb5d2269f41a9227'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:38:11.229438', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1d33e966-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.40387076, 'message_signature': '3f6c6611b0a6b171dd5c59588bf8d9a39775cf785f058d1f8e1ecd0df60e8e46'}]}, 'timestamp': '2025-12-05 09:38:11.242035', '_unique_id': '1c916d05654c4faa81b3c2af35bcfe3d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05
09:38:11.243 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:38:11 localhost 
ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.243 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.244 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.244 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples 
/usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.244 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.245 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.245 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.245 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '61a95444-d88f-4549-aced-d468b864c403', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:38:11.245364', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1d347d2c-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.354475344, 'message_signature': 'd2831ace59dbeb14d738fa194f2628f3e2c24317d1b44ef25acbbe8d06fe5895'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:38:11.245364', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 
'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1d348dda-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.354475344, 'message_signature': 'ee0794b20251c0aa6ab552bf8fa0223629db67fb199b21d7cd696c58f7a4d6c0'}]}, 'timestamp': '2025-12-05 09:38:11.246244', '_unique_id': '924ce4de44f14375b450a0f5928e10a0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 
09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 
09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused 
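Every traceback in this capture reduces to the same root cause: for each notification, the agent's worker (pid 12) tries to create a fresh AMQP connection from the oslo.messaging connection pool while the broker side of the transport is unreachable, so the underlying socket connect fails with errno 111 and kombu re-raises it as OperationalError. A minimal sketch of that failure mode, assuming kombu is installed and assuming a hypothetical broker URL of amqp://guest:guest@localhost:5672// (the agent's real transport_url is not shown in this log):

# Illustrative only: reproduces the wrapping seen in the tracebacks
# above, where the socket-level ConnectionRefusedError ([Errno 111])
# surfaces as kombu.exceptions.OperationalError from ensure_connection().
from kombu import Connection
from kombu.exceptions import OperationalError

# Hypothetical URL for illustration; the actual value comes from the
# service's oslo.messaging configuration.
conn = Connection("amqp://guest:guest@localhost:5672//", connect_timeout=2)
try:
    # oslo.messaging's rabbit driver calls ensure_connection() when it
    # creates a pooled connection; with no broker listening this raises.
    conn.ensure_connection(max_retries=1)
except OperationalError as exc:
    print(f"broker unreachable: {exc}")  # [Errno 111] Connection refused
finally:
    conn.release()

With nothing listening on the broker port, ensure_connection() exhausts its retries inside _ensure_connection()/retry_over_time() and re-raises through _reraise_as_library_errors(), which matches the kombu frames recorded in the tracebacks above and below.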
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.247 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.248 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.248 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 525 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.249 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '07a7d635-bdb7-4ecb-afa8-92dd0c8a3119', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 525, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:38:11.248576', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1d34fa2c-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.354475344, 'message_signature': '8a0f2e3e8edf1f766fec1a389822178f5f1de8eb8196d46fb751d48ae8761340'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:38:11.248576', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 
'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1d350a94-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.354475344, 'message_signature': '7d75687b02b3d0a9dafdcb4fa12c9e5a244c6d9ed0e681e2d75a9122b85d62a3'}]}, 'timestamp': '2025-12-05 09:38:11.249514', '_unique_id': '04de20637e9a42969456a16acef96f92'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR 
oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 
09:38:11.250 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.250 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.251 12 INFO 
ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.251 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.252 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '32e09a11-f628-4be2-819b-cbc1462c08cb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:38:11.251683', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1d357376-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.354475344, 'message_signature': 'dd779a4b4d35d1571b332cf8642aabc13326dc627e17627c42c8be415a4c6361'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:38:11.251683', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1d35860e-d1be-11f0-abb6-fa163e982365', 
'monotonic_time': 10606.354475344, 'message_signature': 'f7c48b27f75497bb6aca5ca67e9bd8dfe712889ea4578f56911f1a27ab7750d0'}]}, 'timestamp': '2025-12-05 09:38:11.252596', '_unique_id': '6a5119e2dcc145298dcbafa4da4a0a3b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.253 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.254 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.254 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.254 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.255 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.255 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.255 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.255 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.258 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 96a47a1c-57c7-4bb1-aecc-33db976db8c7 / tapc2f95d81-23 inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.259 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cce13477-0bd0-4044-82bc-bbdc894c19f3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:38:11.255918', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '1d36a282-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.430333446, 'message_signature': '56c8f9243cdbbea9baebdb029b63fa28f3339cf7974e8774d29de80073196e29'}]}, 'timestamp': '2025-12-05 09:38:11.259911', '_unique_id': '5b6fd4b4e5734f2b9ece30878a7f2986'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.260 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.261 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.279 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 52.30859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '316ba0d9-7335-4db7-a202-1fb2ac5f2379', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.30859375, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:38:11.262124', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '1d39aba8-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.453219704, 'message_signature': 'ef48cb0c7838e42eabaf60f0b4e04de953e98f4741a8cde764761d3007b2de6e'}]}, 'timestamp': '2025-12-05 09:38:11.279787', '_unique_id': '8f0c355f5c9f49588d776103c89ed1dc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.281 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.282 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.282 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 161823320 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.282 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 27606506 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '15e464ab-688b-469f-8bb1-7e8e4f69a46a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 161823320, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:38:11.282317', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1d3a2074-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.354475344, 'message_signature': 'bd71422c267e98fad7d8e3e5a66c1e2527548f088fc54f4b887080b37cf20f80'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27606506, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:38:11.282317', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1d3a312c-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.354475344, 'message_signature': '6a9ee0dbba71e996c0ba8268beb4b146b2b4d8d903e279f9775339898287c4ea'}]}, 'timestamp': '2025-12-05 09:38:11.283184', '_unique_id': '4406446a9bd44d8f999839a104c7c1de'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.284 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.285 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.285 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.285 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.285 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.286 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 8783 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3c01ccda-da62-4a41-b68a-99e3ed89f219', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8783, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:38:11.285981', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '1d3aafa8-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.430333446, 'message_signature': 'aa0634179d86fb34c8353ef1611cce666c79a1379b9142c7ebfed8dad71f8e47'}]}, 'timestamp': '2025-12-05 09:38:11.286485', '_unique_id': 'cc4623aafaf54f6da110091e5f5cfd2b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.287 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.288 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.288 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Payload={'message_id': 'cc0e2c23-6f5e-4be8-87e6-127bd429a8cd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:38:11.288611', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '1d3b1614-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.430333446, 'message_signature': 'd4fc840c6d178cc7901928fac863fe83dd8ca32fadb44a454a463007088e8965'}]}, 'timestamp': '2025-12-05 09:38:11.289098', '_unique_id': '58ded81aadd9440fad49725ba492cfb9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:38:11 
localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost 
ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.290 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.291 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.291 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 56580000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.292 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
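The chained traceback above is the complete failure path: oslo.messaging's rabbit driver asks kombu to ensure_connection(), the underlying socket connect() gets ECONNREFUSED (errno 111) from the broker endpoint, and kombu re-raises it as OperationalError. A minimal sketch of the same call, assuming only that kombu is installed; the broker URL below is a placeholder, not the agent's configured transport_url:

    # Sketch: reproduce the failure mode seen in the traceback above.
    from kombu import Connection
    from kombu.exceptions import OperationalError

    conn = Connection('amqp://guest:guest@localhost:5672//', connect_timeout=2)
    try:
        # ensure_connection() is the same entry point the oslo.messaging
        # rabbit driver uses; socket errors surface as OperationalError.
        conn.ensure_connection(max_retries=1)
        print('broker reachable')
    except OperationalError as exc:
        # Matches "kombu.exceptions.OperationalError: [Errno 111] Connection refused"
        print(f'broker unreachable: {exc}')
    finally:
        conn.release()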
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.291 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.291 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 56580000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.292 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'ee9e3f5c-38db-419a-90c7-c41192f0ace0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 56580000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:38:11.291193', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '1d3b7c58-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.453219704, 'message_signature': '37f41161a899a73d4d53196ed20966d4da7bf091b0e54184b111630cc8fd7a7a'}]}, 'timestamp': '2025-12-05 09:38:11.291671', '_unique_id': 'c015d53b2e084931896108f7ecd4c345'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.293 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.293 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.295 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '9b4f273a-e5c3-485c-9708-c0dc6ad129ba', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:38:11.293810', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '1d3be120-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.430333446, 'message_signature': 'eb4741efa6e4388d4e8bf7dae2fd8c8033e130d5d9341fbbda4e63eea420b99b'}]}, 'timestamp': '2025-12-05 09:38:11.294306', '_unique_id': '52dd5e9c60904998ae8375b3261b5d7d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
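The cpu sample above is a cumulative counter in nanoseconds of guest CPU time (counter_volume: 56580000000). Illustration of how such a counter becomes a utilisation figure between two polls; this mirrors the usual rate-of-change transform applied downstream, and everything except the first logged volume is a made-up example value:

    def cpu_util(prev_ns, cur_ns, elapsed_s, vcpus):
        """Percent of available CPU time consumed between two polls."""
        return 100.0 * (cur_ns - prev_ns) / (elapsed_s * vcpus * 1e9)

    # Hypothetical second poll 300 s later:
    print(cpu_util(56580000000, 56880000000, 300.0, vcpus=1))  # -> 0.1 (%)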
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.296 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.296 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1216962709 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.296 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 209749905 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.298 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'fd9f2174-9102-46c8-9da9-1f20f8737128', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1216962709, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:38:11.296410', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1d3c467e-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.354475344, 'message_signature': 'ff2ef60e9dc921ee28e81fe3b39289fb9cd40e2eb553c83315651f587e7cc627'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 209749905, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:38:11.296410', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1d3c5790-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.354475344, 'message_signature': '781e7fdb6eb4948ecc03719dc77fedb075853fa65526cf76e391f43a94184fc6'}]}, 'timestamp': '2025-12-05 09:38:11.297306', '_unique_id': '80c0a1401e454d3bae2e4b2a785c3864'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
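One telemetry.polling notification can carry several samples: the message above holds one disk.device.read.latency sample per virtual disk, with the device name suffixed onto the instance UUID in resource_id (…-vda, …-vdb). A small sketch of walking that structure; the notification dict shape is exactly what the Payload above shows:

    from collections import defaultdict

    def samples_by_resource(notification):
        """Group the samples of one polling notification by resource_id."""
        grouped = defaultdict(list)
        for sample in notification['payload']['samples']:
            grouped[sample['resource_id']].append(
                (sample['counter_name'], sample['counter_volume'],
                 sample['counter_unit']))
        return dict(grouped)

    # With the payload above this yields:
    # {'96a47a1c-...-vda': [('disk.device.read.latency', 1216962709, 'ns')],
    #  '96a47a1c-...-vdb': [('disk.device.read.latency', 209749905, 'ns')]}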
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.299 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.299 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '585463f6-869d-4f6d-a183-f34792713b4c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:38:11.299484', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '1d3cbeec-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.430333446, 'message_signature': 'ce7f358a10d227f6349f7ad2c66da1e17ae074edf3a2d06b2693b78f1a418fca'}]}, 'timestamp': '2025-12-05 09:38:11.299944', '_unique_id': '4c8cae4433a949cbb9d20291a6b9a86a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
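Each sample carries a message_signature, an HMAC over the sample's fields keyed with the deployment's telemetry secret, so consumers can verify integrity even when the sample is relayed through intermediaries. The sketch below shows the general scheme only; ceilometer's exact field canonicalisation lives in ceilometer.publisher.utils, and the secret here is an assumed placeholder:

    import hashlib
    import hmac

    def sign_sample(sample: dict, secret: bytes) -> str:
        """HMAC-SHA256 over the sample fields, excluding the signature itself."""
        digest = hmac.new(secret, digestmod=hashlib.sha256)
        for name, value in sorted(sample.items()):
            if name == 'message_signature':  # never sign the signature field
                continue
            digest.update(f'{name}={value}'.encode())
        return digest.hexdigest()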
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost 
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.301 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.302 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 82 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'baf1d17b-b258-46fe-a427-dabe649837d0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 82, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:38:11.302087', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '1d3d2594-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.430333446, 'message_signature': 'b16f3f6d645be4fd48834857a82e5aef08f3201bbbff895682e3298d22aef993'}]}, 'timestamp': '2025-12-05 09:38:11.302574', '_unique_id': '9a36e4f9825546128af9ac308af4c328'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost 
ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.303 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.304 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.304 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.305 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
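Each "Could not send notification" record carries the dropped sample in its Payload= field, serialized as a Python repr (single quotes, None) rather than JSON, so json.loads will not parse it. For triage the samples can be recovered with ast.literal_eval; a sketch, assuming the log has been captured to a file (the filename is hypothetical):

    import ast
    import re

    def samples_from_log_line(line):
        # The payload repr ends right before ': kombu.exceptions' in these records.
        m = re.search(r"Payload=(\{.*\}): kombu\.exceptions", line)
        return ast.literal_eval(m.group(1))['payload']['samples'] if m else []

    with open('ceilometer-agent.log') as f:   # hypothetical capture of this log
        for line in f:
            for s in samples_from_log_line(line):
                print(s['counter_name'], s['counter_type'],
                      s['counter_volume'], s['resource_id'])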
Payload={'message_id': 'bde7c4a2-319a-4d48-b729-70f799d0a446', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:38:11.304691', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1d3d89e4-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.40387076, 'message_signature': '0954d20cdfb7c5a61b996bb871b5d2f5c30dabdbc1c761ba6d2060d9dc80c8ba'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:38:11.304691', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1d3d9f42-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.40387076, 'message_signature': '937662abb1d4271e3b0f6de7a769abc72be4c4aa22f43f1bc8b16b6ddeda517d'}]}, 'timestamp': '2025-12-05 09:38:11.305669', '_unique_id': 'dbe931e1376544acb9fc0d192f4d1043'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost 
ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.306 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.307 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.307 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
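The middle section of every traceback is the oslo.messaging notification stack (notify/messaging.py -> transport.py -> _drivers/amqpdriver.py -> the connection pool -> _drivers/impl_rabbit.py). For orientation, a sketch of how a notifier with this publisher_id and priority is typically wired up; the URL and driver name are illustrative assumptions, not values read from this host's configuration:

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@controller:5672/')   # illustrative
    notifier = oslo_messaging.Notifier(
        transport,
        publisher_id='ceilometer.polling',   # matches the Payload records above
        driver='messagingv2',                # assumption; a common choice
        topics=['notifications'])            # the target the log cannot reach

    # sample() emits priority SAMPLE, as in these records; with the broker
    # down it walks the same amqpdriver/impl_rabbit path, and the sample is
    # logged and dropped rather than raised to the caller.
    notifier.sample({}, 'telemetry.polling', {'samples': []})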
Payload={'message_id': 'e489ef4b-42e5-424b-94e4-b1a6f1836510', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:38:11.307841', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '1d3e054a-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.430333446, 'message_signature': '7695e88c912c056123f0a83a58a36f43a36ef92dd31cdcde37a53bbd86f82868'}]}, 'timestamp': '2025-12-05 09:38:11.308350', '_unique_id': 'b1cf695035fd4e37938274df847f1b4a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.309 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.310 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.310 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
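The innermost frame (amqp/transport.py, self.sock.connect(sa)) is a plain TCP connect, so the condition can be confirmed without involving the agent at all. A sketch, assuming the default AMQP port 5672 and a hypothetical broker host; ECONNREFUSED means the host answered but nothing is listening on the port, i.e. the broker process is down rather than the network path:

    import errno
    import socket

    def broker_reachable(host='controller', port=5672, timeout=2.0):
        # host is illustrative; take it from transport_url in ceilometer.conf.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:
            print('connect failed:', errno.errorcode.get(exc.errno, '?'), exc)
            return False

    broker_reachable()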
Payload={'message_id': '207b2e17-171e-4dad-8ba1-1a5c6deeb883', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:38:11.310452', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '1d3e6b2a-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.430333446, 'message_signature': '81972c9e352cf183d6b56bbbd2ae99ac76fc2faa1bf6395ba991dcde5ddd56bf'}]}, 'timestamp': '2025-12-05 09:38:11.310905', '_unique_id': 'ac3de0e2641e4e1da21478a1f91f53a6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost 
ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.311 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.312 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.313 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.313 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
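The dropped samples mix the three counter types visible in these payloads: 'cumulative' (network.incoming/outgoing.packets, running totals), 'gauge' (disk.device.capacity and disk.device.allocation, point-in-time byte values per disk), and 'delta' (network.outgoing.bytes.delta, change since the previous poll). A cumulative meter only yields a rate once differenced over the polling interval; a minimal sketch (the later sample is taken from the network.outgoing.packets record above, the earlier one is illustrative):

    def rate(prev, curr):
        """prev/curr: (monotonic_time_s, counter_volume) of a cumulative meter."""
        (t0, v0), (t1, v1) = prev, curr
        return (v1 - v0) / (t1 - t0)

    print(rate((10576.43, 101), (10606.43, 129)), 'packets/s')   # ~0.93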
Payload={'message_id': 'd7461e69-68bd-484f-a6e9-e1d324768bba', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:38:11.313013', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1d3eceee-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.40387076, 'message_signature': 'bac967c532df31b1ace620c10b32a2fc48741bc9ed25a53bb2eb948c1eb6d980'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:38:11.313013', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1d3ee1f4-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.40387076, 'message_signature': '0506d449ba9308cc84cf320044b4c3732536875ba795bf724a38ae0cb8cdfeef'}]}, 'timestamp': '2025-12-05 09:38:11.313919', '_unique_id': '08171f4482fa490780ffe8abfe080759'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost 
ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.314 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.315 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.315 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '69efe7c0-645f-4f13-8d49-e997b80cb66f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:38:11.315368', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '1d3f27fe-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.430333446, 'message_signature': '6d1d4536a32b377cd503d58242d08a6c8b59cd260643fae366aeca1ba3d354dd'}]}, 'timestamp': '2025-12-05 09:38:11.315653', '_unique_id': 'c615f7696f6b496fb0c9c69012ec18a7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:38:11 
localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost 
ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.316 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1a6a6d68-9155-4ed6-9388-7b3bedc6822f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:38:11.316947', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '1d3f65a2-d1be-11f0-abb6-fa163e982365', 'monotonic_time': 10606.430333446, 'message_signature': '94d702f572cbf9c388ad9b7dcea1c14fde22dabedd158207c4b5c2d98c8537a9'}]}, 'timestamp': '2025-12-05 09:38:11.317232', '_unique_id': '21e582a98878476e92443b93bbcbb83d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.317 12 ERROR oslo_messaging.notify.messaging Dec 5 04:38:11 localhost journal[202456]: End of file while reading data: Input/output error Dec 5 04:38:11 localhost journal[202456]: End of file while reading data: Input/output error Dec 5 04:38:11 localhost ceilometer_agent_compute[236661]: 2025-12-05 09:38:11.327 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320 Dec 5 04:38:11 localhost systemd[1]: libpod-6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.scope: Deactivated successfully. Dec 5 04:38:11 localhost systemd[1]: libpod-6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.scope: Consumed 1.313s CPU time. 
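Every traceback in this stretch bottoms out in the same two frames: amqp/transport.py raises ConnectionRefusedError (errno 111) from self.sock.connect(sa), and kombu's _reraise_as_library_errors wraps it as kombu.exceptions.OperationalError. Nothing is accepting TCP on the broker port, so this is a reachability problem rather than an authentication or TLS failure. A minimal probe to confirm that from the compute node; the endpoint below is a placeholder, since the transport_url never appears in this log and should be read from /etc/ceilometer/ceilometer.conf:

import socket

# Placeholder endpoint -- substitute the host/port from transport_url in
# /etc/ceilometer/ceilometer.conf; the broker address is not printed here.
BROKER = ("rabbit.ctlplane.example", 5672)

try:
    with socket.create_connection(BROKER, timeout=5):
        print("TCP connect OK: something is listening on the AMQP port")
except OSError as exc:
    # errno 111 (ECONNREFUSED) here reproduces the bottom frame of the
    # tracebacks above: amqp/transport.py, self.sock.connect(sa)
    print(f"connect failed: {exc}")

If the probe succeeds while the agent still fails, the problem moves up the stack (vhost, credentials, TLS); if it fails the same way, the broker or the path to it is down.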
Dec 5 04:38:11 localhost podman[236811]: 2025-12-05 09:38:11.501494167 +0000 UTC m=+0.438908515 container died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute) Dec 5 04:38:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:38:11 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.timer: Deactivated successfully. Dec 5 04:38:11 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:38:11 localhost systemd[1]: tmp-crun.Jeezlv.mount: Deactivated successfully. Dec 5 04:38:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a-userdata-shm.mount: Deactivated successfully. Dec 5 04:38:11 localhost systemd[1]: var-lib-containers-storage-overlay-bc0059f8efe1ba54888794008ca6aebe0b39f572e0c9c611756949b945f13d23-merged.mount: Deactivated successfully. 
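Each podman lifecycle event here (died, cleanup, and the init/start records further down) repeats the container's full managed definition inline as a config_data={...} mapping, which is why these records run so long but also makes them self-describing. The value is printed as a Python dict literal, so it can be recovered from a captured line with the stdlib alone; a sketch, where record is any such line sliced out of this log:

import ast

def extract_config_data(record: str) -> dict:
    # Isolate the balanced-brace span after "config_data=" and parse it
    # as a Python literal (the log prints the mapping in repr style).
    start = record.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(record[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(record[start:i + 1])
    raise ValueError("unbalanced config_data mapping")

# e.g. extract_config_data(record)["healthcheck"]["test"]
# -> '/openstack/healthcheck compute'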
Dec 5 04:38:11 localhost podman[236831]: 2025-12-05 09:38:11.602534476 +0000 UTC m=+0.081581965 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller) Dec 5 04:38:11 localhost podman[236811]: 2025-12-05 09:38:11.607617449 +0000 UTC m=+0.545031727 container cleanup 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute) Dec 5 04:38:11 localhost podman[236811]: ceilometer_agent_compute Dec 5 04:38:11 localhost podman[236831]: 2025-12-05 09:38:11.689124971 +0000 UTC m=+0.168172410 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller) Dec 5 04:38:11 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:38:11 localhost podman[236860]: 2025-12-05 09:38:11.764694085 +0000 UTC m=+0.107488596 container cleanup 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 5 04:38:11 localhost podman[236860]: ceilometer_agent_compute Dec 5 04:38:11 localhost systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully. Dec 5 04:38:11 localhost systemd[1]: Stopped ceilometer_agent_compute container. 
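Note the ordering: systemd reports edpm_ceilometer_agent_compute.service deactivated and the container stopped, and the very next records start it again. That is the 'restart': 'always' policy from config_data doing its job while the broker stays unreachable, so the unit will keep flapping until RabbitMQ comes back. A small sketch for counting such flaps in an exported copy of this journal (the filename is a placeholder):

from collections import Counter
import re

# Matches the systemd lines seen above, e.g.
# "Started ceilometer_agent_compute container." / "Stopped ... container."
PATTERN = re.compile(r"(Started|Stopped) (\S+) container")

counts = Counter()
with open("messages.log") as fh:  # placeholder: an export of this journal
    for line in fh:
        for action, name in PATTERN.findall(line):
            counts[(name, action)] += 1

for (name, action), n in sorted(counts.items()):
    print(f"{name}: {action} x{n}")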
Dec 5 04:38:11 localhost systemd[1]: Starting ceilometer_agent_compute container... Dec 5 04:38:11 localhost systemd[1]: Started libcrun container. Dec 5 04:38:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0059f8efe1ba54888794008ca6aebe0b39f572e0c9c611756949b945f13d23/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Dec 5 04:38:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc0059f8efe1ba54888794008ca6aebe0b39f572e0c9c611756949b945f13d23/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Dec 5 04:38:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:38:11 localhost podman[236877]: 2025-12-05 09:38:11.914486121 +0000 UTC m=+0.120706003 container init 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 5 04:38:11 localhost ceilometer_agent_compute[236891]: + sudo -E kolla_set_configs Dec 5 04:38:11 localhost ceilometer_agent_compute[236891]: sudo: unable to send audit message: Operation not permitted Dec 5 04:38:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
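The "sudo: unable to send audit message: Operation not permitted" lines during startup are a common benign symptom of running sudo in a container that lacks CAP_AUDIT_WRITE; kolla_set_configs proceeds normally afterwards. A quick check of the effective capability bits from inside such a container (the bit position is the CAP_AUDIT_WRITE constant from linux/capability.h):

CAP_AUDIT_WRITE = 29  # bit position, from linux/capability.h

with open("/proc/self/status") as fh:
    for line in fh:
        if line.startswith("CapEff:"):
            cap_eff = int(line.split()[1], 16)
            present = bool(cap_eff & (1 << CAP_AUDIT_WRITE))
            print(f"CAP_AUDIT_WRITE present: {present}")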
Dec 5 04:38:11 localhost podman[236877]: 2025-12-05 09:38:11.942104122 +0000 UTC m=+0.148324004 container start 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 5 04:38:11 localhost podman[236877]: ceilometer_agent_compute Dec 5 04:38:11 localhost systemd[1]: Started ceilometer_agent_compute container. 
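The fresh container's entrypoint output below shows the standard kolla startup contract: load /var/lib/kolla/config_files/config.json, validate it, and, because KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, delete and re-copy every listed file before exec'ing the command written to /run_command. A simplified sketch of that copy pass; the config.json field names (source/dest/perm) are assumed from the usual kolla layout, not taken from this log:

import json
import os
import shutil

def copy_always(config_path="/var/lib/kolla/config_files/config.json"):
    # Mirrors the Deleting / Copying / "Setting permission" sequence in
    # the INFO messages below; the real kolla_set_configs also handles
    # directories, ownership, and optional sources.
    with open(config_path) as fh:
        cfg = json.load(fh)
    for item in cfg.get("config_files", []):
        dest = item["dest"]
        if os.path.exists(dest):
            os.remove(dest)                    # "Deleting <dest>"
        shutil.copy2(item["source"], dest)     # "Copying <src> to <dest>"
        os.chmod(dest, int(item.get("perm", "0644"), 8))
    return cfg.get("command")  # the service command, echoed as CMD=... below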
Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Validating config file Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Copying service configuration files Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: INFO:__main__:Writing out command to execute Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: ++ cat /run_command Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: + ARGS= Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: + sudo kolla_copy_cacerts Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: sudo: unable to send audit message: Operation not permitted Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: + [[ ! -n '' ]] Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: + . 
kolla_extend_start Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: + umask 0022 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Dec 5 04:38:12 localhost podman[236899]: 2025-12-05 09:38:12.045502462 +0000 UTC m=+0.096889976 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 5 04:38:12 localhost podman[236899]: 2025-12-05 09:38:12.077619698 +0000 UTC m=+0.129007182 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 5 04:38:12 localhost podman[236899]: unhealthy Dec 5 04:38:12 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:38:12 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Failed with result 'exit-code'. Dec 5 04:38:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36786 DF PROTO=TCP SPT=52434 DPT=9102 SEQ=1182475819 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5134460000000001030307) Dec 5 04:38:12 localhost python3.9[237031]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.723 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.723 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.723 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.723 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.723 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.723 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.723 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.723 2 DEBUG cotyledon.oslo_config_glue [-] 
cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.723 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.723 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.724 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.724 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.726 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.726 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.726 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.726 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005546419.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.726 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.726 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.726 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.726 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.726 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.727 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.728 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue 
[-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.729 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.730 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.730 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.730 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.730 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 
2025-12-05 09:38:12.730 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.730 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.730 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.730 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.730 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.730 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.730 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.730 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.731 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG 
cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.732 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.733 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.733 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.733 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.733 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.733 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.733 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.733 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.733 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.733 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.733 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.733 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.734 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.735 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.735 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.735 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.735 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.735 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.735 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.735 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.735 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.735 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.735 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.735 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.735 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 
2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.738 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.738 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.738 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.755 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.757 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.758 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. 
Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.776 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Dec 5 04:38:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20979 DF PROTO=TCP SPT=39384 DPT=9102 SEQ=1314178801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5136460000000001030307) Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.907 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.907 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.907 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.908 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.908 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.908 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.908 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.908 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.908 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.908 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.908 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.908 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.908 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.909 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.909 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.909 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.909 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005546419.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.909 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.909 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.909 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.909 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.909 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.909 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.909 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG 
cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.910 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.911 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.911 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.911 12 DEBUG cotyledon.oslo_config_glue [-] 
partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.912 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.913 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.914 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.915 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.917 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.918 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.919 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.920 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.921 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.922 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.923 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.924 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:38:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:38:12.925 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 5 04:41:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:41:39 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:41:39 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:41:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:41:39 localhost rsyslogd[758]: imjournal: 4205 messages lost due to rate-limiting (20000 allowed within 600 seconds) Dec 5 04:41:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:41:39 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:41:39 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:41:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:41:40 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
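
The long ceilometer_agent_compute[236891] dump above is oslo.config's standard startup trace: cotyledon's oslo_config_glue calls ConfigOpts.log_opt_values() at DEBUG level, which emits one line per registered option, group by group, from cfg.py:2609, and masks any option registered with secret=True (publisher.telemetry_secret, vmware.host_password, the rgw keys, the transport URLs) as ****. A minimal sketch of that mechanism, borrowing a couple of option names from the dump; the real ceilometer schema is far larger:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Two options mirroring entries in the dump above.
    CONF.register_opts(
        [cfg.StrOpt('interface', default='internal'),
         cfg.StrOpt('keyfile')],          # logged as "monasca.keyfile = None"
        group='monasca')
    CONF.register_opts(
        [cfg.StrOpt('telemetry_secret', secret=True)],  # logged masked: "****"
        group='publisher')

    CONF([])                                 # parse an empty argv
    CONF.log_opt_values(LOG, logging.DEBUG)  # one DEBUG line per option
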
Dec 5 04:41:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:41:40 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:41:41 localhost nova_compute[229251]: 2025-12-05 09:41:41.197 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:41:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24986 DF PROTO=TCP SPT=54306 DPT=9102 SEQ=1735280264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5464760000000001030307) Dec 5 04:41:41 localhost nova_compute[229251]: 2025-12-05 09:41:41.239 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:41:41 localhost nova_compute[229251]: 2025-12-05 09:41:41.239 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5043 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:41:41 localhost nova_compute[229251]: 2025-12-05 09:41:41.240 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:41:41 localhost nova_compute[229251]: 2025-12-05 09:41:41.241 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:41:41 localhost nova_compute[229251]: 2025-12-05 09:41:41.242 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:41:41 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:41:41 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:41:41 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:41:42 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
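
The nova_compute ovsdbapp.backend.ovs_idl.vlog lines above are python-ovs's reconnect state machine keeping the tcp:127.0.0.1:6640 OVSDB session alive: after roughly five seconds of silence it sends an inactivity probe and drops to IDLE, and the [POLLIN] wakeup carrying the reply moves it back to ACTIVE. A toy restatement of that logic, not the library's actual code; send_probe() and reply_received() are hypothetical stand-ins:

    import time

    def send_probe(conn):
        # Hypothetical stand-in for the JSON-RPC echo request used as a probe.
        conn['probe_sent'] = time.monotonic()

    def reply_received(conn):
        # Hypothetical stand-in for a POLLIN wakeup delivering the echo reply.
        return True

    def probe_loop(conn, probe_interval=5.0):
        state, last_activity = 'ACTIVE', time.monotonic()
        while True:
            idle = time.monotonic() - last_activity
            if state == 'ACTIVE' and idle >= probe_interval:
                send_probe(conn)        # "idle 5002 ms, sending inactivity probe"
                state = 'IDLE'          # "entering IDLE"
            elif state == 'IDLE':
                if reply_received(conn):
                    state = 'ACTIVE'    # "entering ACTIVE"
                    last_activity = time.monotonic()
                else:
                    raise ConnectionError('probe timed out: reconnect')
            time.sleep(0.1)
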
Dec 5 04:41:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24987 DF PROTO=TCP SPT=54306 DPT=9102 SEQ=1735280264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5468860000000001030307) Dec 5 04:41:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6287 DF PROTO=TCP SPT=34170 DPT=9105 SEQ=2561075821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC546A450000000001030307) Dec 5 04:41:42 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Dec 5 04:41:42 localhost systemd[1]: var-lib-containers-storage-overlay-3ef0e92ae5a4c8a141a5a4f63d7b99ab996c091b6eadea29d58846d2b7054c3f-merged.mount: Deactivated successfully. Dec 5 04:41:43 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Dec 5 04:41:43 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Dec 5 04:41:43 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:41:43 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Dec 5 04:41:44 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Dec 5 04:41:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8619 DF PROTO=TCP SPT=41856 DPT=9100 SEQ=824762789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5470570000000001030307) Dec 5 04:41:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:41:44 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:41:44 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:41:44 localhost systemd[1]: var-lib-containers-storage-overlay-026dbc3517a98b7784621079f86e764989200b5b8b8b9147585f030d058790ca-merged.mount: Deactivated successfully. Dec 5 04:41:45 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:41:45 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:41:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
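
The kernel DROPPING: entries above and throughout this section are netfilter LOG output (the prefix string is whatever the firewall's LOG rule set) for TCP SYNs arriving on br-ex from 192.168.122.10 toward ports 9100, 9102 and 9105 here, and 9101 further down; these match the metrics exporters on this node (the network exporter above publishes 9105, for instance). A small illustrative parser for the key=value fields these lines actually contain:

    import re

    FIELDS = re.compile(r'([A-Z]+)=(\S+)')

    def parse_drop(line):
        # Returns the fields used in this section, or None for other lines.
        if 'DROPPING:' not in line:
            return None
        rec = dict(FIELDS.findall(line))
        return {'src': rec.get('SRC'), 'dst': rec.get('DST'),
                'proto': rec.get('PROTO'), 'seq': rec.get('SEQ'),
                'sport': int(rec['SPT']) if 'SPT' in rec else None,
                'dport': int(rec['DPT']) if 'DPT' in rec else None}

    sample = ('DROPPING: IN=br-ex OUT= SRC=192.168.122.10 DST=192.168.122.106 '
              'PROTO=TCP SPT=54306 DPT=9102 SEQ=1735280264')
    print(parse_drop(sample))  # dport=9102: a blocked exporter scrape
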
Dec 5 04:41:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:41:45 localhost systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully. Dec 5 04:41:45 localhost systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully. Dec 5 04:41:45 localhost podman[245967]: 2025-12-05 09:41:45.978971726 +0000 UTC m=+0.110297349 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64) Dec 5 04:41:45 localhost podman[245967]: 2025-12-05 09:41:45.989566961 +0000 UTC m=+0.120892584 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-type=git, release=1755695350, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 04:41:46 localhost systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully. 
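
The container health_status / exec_died pair above is podman recording one healthcheck run for openstack_network_exporter (result: healthy). The latest verdict can also be read back with podman inspect; a sketch that hedges on the JSON key name, since podman releases have exposed the state as both Health and Healthcheck:

    import json
    import subprocess

    def health_status(container):
        out = subprocess.run(['podman', 'inspect', container],
                             check=True, capture_output=True, text=True).stdout
        state = json.loads(out)[0]['State']
        # Key spelling varies across podman versions; try both.
        health = state.get('Health') or state.get('Healthcheck') or {}
        return health.get('Status', 'none')

    print(health_status('openstack_network_exporter'))  # e.g. "healthy"
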
Dec 5 04:41:46 localhost podman[245966]: 2025-12-05 09:41:45.943031199 +0000 UTC m=+0.082605406 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, tcib_managed=true) Dec 5 04:41:46 localhost podman[245966]: 2025-12-05 09:41:46.07198555 +0000 UTC m=+0.211559747 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 5 04:41:46 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:41:46 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:41:46 localhost nova_compute[229251]: 2025-12-05 09:41:46.243 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:41:46 localhost nova_compute[229251]: 2025-12-05 09:41:46.245 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:41:46 localhost nova_compute[229251]: 2025-12-05 09:41:46.245 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:41:46 localhost nova_compute[229251]: 2025-12-05 09:41:46.245 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:41:46 localhost nova_compute[229251]: 2025-12-05 09:41:46.292 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:41:46 localhost nova_compute[229251]: 2025-12-05 09:41:46.292 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:41:46 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Dec 5 04:41:46 localhost systemd[1]: var-lib-containers-storage-overlay-d026a35085ad52897bc3374aed756aa80ec1ba6263c403478a100f8c0e75ebbb-merged.mount: Deactivated successfully. Dec 5 04:41:47 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Dec 5 04:41:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8621 DF PROTO=TCP SPT=41856 DPT=9100 SEQ=824762789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC547C450000000001030307) Dec 5 04:41:47 localhost systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully. Dec 5 04:41:48 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:41:48 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:41:48 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:41:48 localhost systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully. 
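
Note that the config_data label podman prints in these events is a Python dict literal (single quotes, bare True), not JSON, so json.loads() rejects it; ast.literal_eval() parses it safely. A short example using a trimmed copy of the multipathd label above:

    import ast

    label = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
             "'image': 'quay.io/podified-antelope-centos9/"
             "openstack-multipathd:current-podified', "
             "'net': 'host', 'privileged': True, 'restart': 'always'}")

    config = ast.literal_eval(label)   # json.loads(label) would raise here
    print(config['image'], config['privileged'])
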
Dec 5 04:41:50 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:41:50 localhost systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully. Dec 5 04:41:50 localhost systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully. Dec 5 04:41:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4887 DF PROTO=TCP SPT=37886 DPT=9101 SEQ=1248063907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5488450000000001030307) Dec 5 04:41:51 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 5 04:41:51 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:41:51 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:41:51 localhost nova_compute[229251]: 2025-12-05 09:41:51.293 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:41:51 localhost nova_compute[229251]: 2025-12-05 09:41:51.295 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:41:51 localhost nova_compute[229251]: 2025-12-05 09:41:51.295 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:41:51 localhost nova_compute[229251]: 2025-12-05 09:41:51.295 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:41:51 localhost nova_compute[229251]: 2025-12-05 09:41:51.326 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:41:51 localhost nova_compute[229251]: 2025-12-05 09:41:51.327 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:41:51 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Dec 5 04:41:52 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 5 04:41:52 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:41:52 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
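
The DROPPING lines repeat with identical source port and sequence number (SPT=41856 SEQ=824762789 appears at 04:41:44, again at 04:41:47, and once more later in this section): because the DROP is silent, with no RST or ICMP sent back, the scraper's kernel keeps retransmitting the same SYN with growing backoff. Reusing parse_drop() from the sketch above, retransmitted flows can be counted like this:

    from collections import Counter

    def count_retransmits(lines):
        flows = Counter()
        for line in lines:
            rec = parse_drop(line)     # from the earlier sketch
            if rec and rec['proto'] == 'TCP':
                flows[(rec['src'], rec['sport'], rec['seq'])] += 1
        # Any flow seen more than once is a retransmitted, never-answered SYN.
        return {flow: n for flow, n in flows.items() if n > 1}
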
Dec 5 04:41:52 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:41:52 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:41:53 localhost systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully. Dec 5 04:41:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39394 DF PROTO=TCP SPT=41396 DPT=9101 SEQ=4115802582 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5494450000000001030307) Dec 5 04:41:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:41:54 localhost systemd[1]: tmp-crun.Ap5tWQ.mount: Deactivated successfully. Dec 5 04:41:54 localhost podman[246004]: 2025-12-05 09:41:54.186918673 +0000 UTC m=+0.078398956 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 04:41:54 localhost podman[246004]: 2025-12-05 09:41:54.193801964 +0000 UTC m=+0.085282217 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:41:54 localhost podman[246004]: unhealthy Dec 5 04:41:55 localhost systemd[1]: var-lib-containers-storage-overlay-026dbc3517a98b7784621079f86e764989200b5b8b8b9147585f030d058790ca-merged.mount: Deactivated successfully. 
Dec 5 04:41:55 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:41:55 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:41:55 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:41:55 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Failed with result 'exit-code'. Dec 5 04:41:56 localhost nova_compute[229251]: 2025-12-05 09:41:56.327 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:41:56 localhost nova_compute[229251]: 2025-12-05 09:41:56.329 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:41:56 localhost nova_compute[229251]: 2025-12-05 09:41:56.329 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:41:56 localhost nova_compute[229251]: 2025-12-05 09:41:56.329 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:41:56 localhost nova_compute[229251]: 2025-12-05 09:41:56.362 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:41:56 localhost nova_compute[229251]: 2025-12-05 09:41:56.363 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:41:56 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:41:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4889 DF PROTO=TCP SPT=37886 DPT=9101 SEQ=1248063907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC54A0050000000001030307) Dec 5 04:41:57 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:41:57 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:41:57 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:41:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:41:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:41:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
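
Here the failure path is visible end to end: podman[246004] reports health_status=unhealthy for podman_exporter and prints "unhealthy", the transient unit's podman healthcheck run exits 1, and systemd records "Main process exited, code=exited, status=1/FAILURE" / "Failed with result 'exit-code'". The same check can be driven by hand, with exit code 0 meaning healthy and anything else unhealthy:

    import subprocess

    def run_healthcheck(container):
        proc = subprocess.run(['podman', 'healthcheck', 'run', container])
        return 'healthy' if proc.returncode == 0 else 'unhealthy'

    print(run_healthcheck('podman_exporter'))  # "unhealthy" during this window
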
Dec 5 04:41:58 localhost podman[246098]: 2025-12-05 09:41:58.446543625 +0000 UTC m=+0.083488383 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true) Dec 5 04:41:58 localhost podman[246098]: 2025-12-05 09:41:58.456974246 +0000 UTC m=+0.093918994 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3) Dec 5 04:41:58 localhost systemd[1]: tmp-crun.Uep7m1.mount: Deactivated successfully. Dec 5 04:41:58 localhost podman[246096]: 2025-12-05 09:41:58.496519214 +0000 UTC m=+0.136061482 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 5 04:41:58 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. 
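
For the ceilometer agent, the healthcheck test above is '/openstack/healthcheck compute', a script bind-mounted read-only into the container at /openstack. If the timer-driven result is in doubt, the same script can be invoked directly; a sketch, assuming the container name shown in the log:

    import subprocess

    proc = subprocess.run(['podman', 'exec', 'ceilometer_agent_compute',
                           '/openstack/healthcheck', 'compute'])
    print('healthy' if proc.returncode == 0 else 'unhealthy')
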
Dec 5 04:41:58 localhost podman[246096]: 2025-12-05 09:41:58.525813686 +0000 UTC m=+0.165355974 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 5 04:41:58 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:41:58 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:41:59 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:41:59 localhost systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully. Dec 5 04:41:59 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Dec 5 04:41:59 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:41:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8623 DF PROTO=TCP SPT=41856 DPT=9100 SEQ=824762789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC54AC460000000001030307) Dec 5 04:41:59 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Dec 5 04:42:00 localhost podman[246097]: 2025-12-05 09:42:00.364447134 +0000 UTC m=+2.006404316 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:42:00 localhost podman[246097]: 2025-12-05 09:42:00.403533798 +0000 UTC m=+2.045490980 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 04:42:00 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. 
Dec 5 04:42:01 localhost nova_compute[229251]: 2025-12-05 09:42:01.363 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:42:01 localhost nova_compute[229251]: 2025-12-05 09:42:01.365 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:42:01 localhost nova_compute[229251]: 2025-12-05 09:42:01.365 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:42:01 localhost nova_compute[229251]: 2025-12-05 09:42:01.365 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:42:01 localhost nova_compute[229251]: 2025-12-05 09:42:01.401 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:42:01 localhost nova_compute[229251]: 2025-12-05 09:42:01.402 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:42:01 localhost systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully. Dec 5 04:42:01 localhost systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully. Dec 5 04:42:01 localhost systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully. Dec 5 04:42:02 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:42:02 localhost systemd[1]: var-lib-containers-storage-overlay-e83f227a3727657fa28ad038d3faa4acf2257033c4c4d3daa819bb5632ffbdd2-merged.mount: Deactivated successfully. Dec 5 04:42:02 localhost systemd[1]: var-lib-containers-storage-overlay-e83f227a3727657fa28ad038d3faa4acf2257033c4c4d3daa819bb5632ffbdd2-merged.mount: Deactivated successfully. Dec 5 04:42:02 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:42:03 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:42:03 localhost systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully. Dec 5 04:42:03 localhost systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully. 
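The ovsdbapp lines above show one full inactivity-probe cycle on the OVSDB connection to tcp:127.0.0.1:6640: after ~5 s idle the IDL sends a probe and enters IDLE, then the reply raises POLLIN on the socket and the state returns to ACTIVE. A minimal sketch of the underlying wait, using the same ovs.poller module the log references (the socket and interval here are illustrative, not ovsdbapp's source):

```python
# Minimal sketch of the poll loop behind the "4998-ms timeout" /
# "[POLLIN] on fd 20" wakeups above (illustrative stand-in socket).
import socket
import ovs.poller

sock = socket.socket()                    # stand-in for the OVSDB socket
poller = ovs.poller.Poller()
poller.timer_wait(5000)                   # the ~5 s inactivity window
poller.fd_wait(sock.fileno(), ovs.poller.POLLIN)
poller.block()                            # wakes on POLLIN or timer expiry
```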
Dec 5 04:42:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:42:03.889 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:42:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:42:03.890 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:42:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:42:03.891 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:42:04 localhost nova_compute[229251]: 2025-12-05 09:42:04.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:42:04 localhost nova_compute[229251]: 2025-12-05 09:42:04.270 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 5 04:42:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49578 DF PROTO=TCP SPT=32866 DPT=9105 SEQ=4274879410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC54BE850000000001030307) Dec 5 04:42:04 localhost nova_compute[229251]: 2025-12-05 09:42:04.288 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 5 04:42:04 localhost nova_compute[229251]: 2025-12-05 09:42:04.289 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:42:04 localhost nova_compute[229251]: 2025-12-05 09:42:04.289 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 5 04:42:04 localhost nova_compute[229251]: 2025-12-05 09:42:04.301 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:42:04 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. 
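The Acquiring/acquired/released trio logged by ovn_metadata_agent above is oslo.concurrency's standard DEBUG output for code running under a lockutils-synchronized function (the "inner" wrapper in the logged file/line). A hedged sketch with the same lock name; the function body is illustrative, not the neutron agent's real code:

```python
# Hedged sketch of the pattern producing the
# 'Acquiring lock "_check_child_processes"' DEBUG lines above.
from oslo_concurrency import lockutils

@lockutils.synchronized('_check_child_processes')
def check_child_processes():
    pass  # e.g. verify monitored child processes are still alive

check_child_processes()   # emits acquire/acquired/released DEBUG logs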
Dec 5 04:42:04 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:42:04 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:42:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61255 DF PROTO=TCP SPT=56790 DPT=9882 SEQ=853590825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC54C2450000000001030307) Dec 5 04:42:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 04:42:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 4751 writes, 21K keys, 4751 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4751 writes, 573 syncs, 8.29 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 5 04:42:05 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:42:06 localhost nova_compute[229251]: 2025-12-05 09:42:06.315 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:42:06 localhost nova_compute[229251]: 2025-12-05 09:42:06.316 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:42:06 localhost nova_compute[229251]: 2025-12-05 09:42:06.402 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:42:06 localhost nova_compute[229251]: 2025-12-05 09:42:06.430 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:42:06 localhost nova_compute[229251]: 2025-12-05 09:42:06.431 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5028 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:42:06 localhost nova_compute[229251]: 2025-12-05 09:42:06.431 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:42:06 localhost nova_compute[229251]: 2025-12-05 09:42:06.437 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:42:06 localhost nova_compute[229251]: 2025-12-05 09:42:06.438 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:42:06 localhost systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully. Dec 5 04:42:06 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:42:07 localhost nova_compute[229251]: 2025-12-05 09:42:07.271 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:42:07 localhost nova_compute[229251]: 2025-12-05 09:42:07.271 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 04:42:07 localhost nova_compute[229251]: 2025-12-05 09:42:07.271 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 04:42:07 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:42:08 localhost systemd[1]: tmp-crun.kUYAav.mount: Deactivated successfully. Dec 5 04:42:08 localhost podman[246172]: 2025-12-05 09:42:08.194386049 +0000 UTC m=+0.086080983 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:42:08 localhost podman[246172]: 2025-12-05 09:42:08.229688865 +0000 UTC m=+0.121383809 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:42:08 localhost nova_compute[229251]: 2025-12-05 09:42:08.263 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:42:08 localhost nova_compute[229251]: 2025-12-05 09:42:08.263 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:42:08 localhost nova_compute[229251]: 2025-12-05 09:42:08.264 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:42:08 localhost nova_compute[229251]: 2025-12-05 09:42:08.264 229255 DEBUG nova.objects.instance [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:42:08 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:42:08 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
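The refresh_cache lock lines above use the other form of the same oslo.concurrency API: the lock() context manager (lockutils.py:312/315/333 in the log) rather than the synchronized decorator. A sketch, with the lock name taken from the log and an illustrative body:

```python
# Sketch of the context-manager form behind the
# 'Acquiring/Acquired/Releasing lock "refresh_cache-..."' lines above.
from oslo_concurrency import lockutils

with lockutils.lock('refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7'):
    pass  # refresh the instance's network info cache here
```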
Dec 5 04:42:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 04:42:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.2 total, 600.0 interval#012Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5843 writes, 832 syncs, 7.02 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 5 04:42:09 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:42:09 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:42:09 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:42:09 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:42:09 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:42:09 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:42:09 localhost nova_compute[229251]: 2025-12-05 09:42:09.408 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:42:09 localhost nova_compute[229251]: 2025-12-05 09:42:09.430 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:42:09 localhost 
nova_compute[229251]: 2025-12-05 09:42:09.431 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:42:09 localhost nova_compute[229251]: 2025-12-05 09:42:09.431 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:42:09 localhost nova_compute[229251]: 2025-12-05 09:42:09.431 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:42:09 localhost nova_compute[229251]: 2025-12-05 09:42:09.432 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:42:09 localhost nova_compute[229251]: 2025-12-05 09:42:09.432 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:42:09 localhost nova_compute[229251]: 2025-12-05 09:42:09.432 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 04:42:10 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:42:10 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
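The burst of "Running periodic task ComputeManager._poll_*" lines above is oslo.service's periodic-task runner walking the compute manager's registered tasks (and skipping _reclaim_queued_deletes because reclaim_instance_interval <= 0). A minimal sketch of that machinery; the spacing value and task body are illustrative:

```python
# Minimal sketch of the oslo.service pattern behind the
# "Running periodic task ..." DEBUG lines above (illustrative task).
from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)
    def _poll_rescued_instances(self, context):
        pass  # the runner logs "Running periodic task ..." before each call

Manager().run_periodic_tasks(context=None)
```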
Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.290 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.291 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.291 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.291 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.292 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:42:10 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:42:10 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:42:10 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:42:10 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
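The 'Running cmd (subprocess): ceph df ...' line above, and the matching 'returned: 0 in 0.437s' line that follows, are oslo.concurrency's processutils logging around a subprocess call. A minimal equivalent call, with the arguments copied from the log (it assumes the same ceph client config exists on the host):

```python
# Minimal equivalent of the logged subprocess call; processutils emits
# the "Running cmd" / "returned: 0 in ...s" DEBUG pair around it.
from oslo_concurrency import processutils

stdout, stderr = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
```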
Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.728 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.796 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.796 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.912 229255 WARNING nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.913 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12401MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] 
_report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.913 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:42:10 localhost nova_compute[229251]: 2025-12-05 09:42:10.913 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.014 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.014 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.014 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.073 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Refreshing inventories for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.134 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Updating ProviderTree inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.134 229255 DEBUG nova.compute.provider_tree [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 
'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.153 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Refreshing aggregate associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 5 04:42:11 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.173 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Refreshing trait associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,HW_CPU_X86_F16C,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 5 04:42:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43999 DF PROTO=TCP SPT=43176 DPT=9102 SEQ=2084896158 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC54D9A60000000001030307) Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.213 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:42:11 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated 
successfully. Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.439 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.440 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.440 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.440 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.487 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.488 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.630 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.637 229255 DEBUG nova.compute.provider_tree [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.655 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.657 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:42:11 localhost nova_compute[229251]: 2025-12-05 09:42:11.657 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:42:12 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44000 DF PROTO=TCP SPT=43176 DPT=9102 SEQ=2084896158 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC54DDC50000000001030307) Dec 5 04:42:12 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:42:12 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:42:12 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.940 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.941 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.960 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 58610000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '14d459e5-cebe-4793-bafd-ae0d1bc6bfe8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 58610000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:42:12.941775', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'ad4755e2-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.134562627, 'message_signature': 'edc9d9b5cbf91f29868257dd106a4014cfdb45ca0105faffdfcc3f7db1ae2cbe'}]}, 'timestamp': '2025-12-05 09:42:12.961228', '_unique_id': '4bc22d13202b40da99626738b02435b4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 
2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, 
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.962 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 52.30859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '885d49ab-e0cb-4dce-9b14-bb10e6492b33', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.30859375, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:42:12.963101', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'ad47ac36-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.134562627, 'message_signature': 'b7d29be6093491580c92276c829f4a1d64b6233338dd242479e0f2507637fe2a'}]}, 'timestamp': '2025-12-05 09:42:12.963384', '_unique_id': '6d2ee0204e4b45b881d3834de0df60c8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.963 12 ERROR oslo_messaging.notify.messaging
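The chained traceback above is the complete failure path: the oslo.messaging rabbit driver builds a fresh kombu connection for the send, py-amqp's socket connect() is refused, and kombu re-raises the ConnectionRefusedError as kombu.exceptions.OperationalError. A quick way to confirm the refusal independently of ceilometer is to drive the same kombu call chain directly; this is a minimal sketch, and the broker URL is an assumed placeholder, since the real transport_url lives in ceilometer.conf and never appears in this log:

    # Probe the same kombu -> py-amqp -> socket path seen in the traceback.
    # The URL below is an assumption, not taken from this log.
    from kombu import Connection
    from kombu.exceptions import OperationalError

    conn = Connection("amqp://guest:guest@localhost:5672//", connect_timeout=5)
    try:
        conn.connect()  # pyamqp transport -> amqp.transport.connect -> sock.connect(sa)
        print("broker reachable")
    except (OSError, OperationalError) as exc:  # ConnectionRefusedError is an OSError
        print("broker unreachable:", exc)
    finally:
        conn.release()

If this also reports [Errno 111], nothing is listening on the broker port at all, which matches an unavailable or stopped RabbitMQ rather than an authentication or vhost problem.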
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.964 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.969 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.971 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '16914005-594b-4174-8f14-a16d827ece90', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:42:12.964571', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'ad48b374-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.13886976, 'message_signature': '305bf049a95f5d4b7c136b3ec7296ebd824292568e5a71a64e6b451cf221bdbe'}]}, 'timestamp': '2025-12-05 09:42:12.970161', '_unique_id': '06b23bb5c1534c9ebaa1c2504d1bd0f9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
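Each failed publish logs the entire notification as Payload={...}, which is a Python repr of the notification dict, so the dropped samples remain recoverable from the journal itself. A hypothetical recovery filter (names and regex are illustrative, not part of ceilometer) could read the log on stdin and re-extract the samples:

    # Hypothetical recovery script: pull dropped samples back out of the log.
    # ast.literal_eval parses the repr safely (None, nested dicts, lists, floats).
    import ast
    import re
    import sys

    PAYLOAD_RE = re.compile(r"Payload=(\{.*\}): kombu\.exceptions")

    for line in sys.stdin:
        match = PAYLOAD_RE.search(line)
        if not match:
            continue
        notification = ast.literal_eval(match.group(1))
        for sample in notification["payload"]["samples"]:
            print(sample["counter_name"], sample["counter_volume"], sample["resource_id"])

Run against this journal, it would emit one line per lost sample (e.g. memory.usage 52.30859375 96a47a1c-57c7-4bb1-aecc-33db976db8c7), which is enough to backfill a metering store once the broker is back.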
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.971 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.971 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.972 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b6b7246e-90a5-4da9-9949-314a883cc70d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:42:12.971897', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'ad49054a-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.13886976, 'message_signature': '9eb70e321099010cab31adf6a274dd91979f88e1073835088e01df0612dacf1e'}]}, 'timestamp': '2025-12-05 09:42:12.972179', '_unique_id': 'dbdcfc9783a14912a3e1f1617744ce0e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
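The 'priority': 'SAMPLE' field and the "Could not send notification to notifications" wording correspond to oslo.messaging's notifier API publishing on the notifications topic. As a rough sketch of the emitting side, under the assumption that ceilometer's messaging publisher is wired up roughly like this (the URL and retry value are illustrative, not from this log):

    # Sketch of the sender, assuming an oslo.messaging notifier over the
    # messagingv2 driver; url and retry are assumptions for illustration.
    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url="rabbit://guest:guest@localhost:5672/")
    notifier = oslo_messaging.Notifier(
        transport,
        publisher_id="ceilometer.polling",
        driver="messagingv2",
        topics=["notifications"],
        retry=0,  # with no retries a dead broker fails fast, as in this log
    )
    # sample() maps to the SAMPLE priority seen in the Payload records above
    notifier.sample({}, event_type="telemetry.polling", payload={"samples": []})

With a retry budget of zero (or once it is exhausted), every publish attempt surfaces immediately as the OperationalError recorded here, and the agent simply moves on to the next pollster.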
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.973 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.973 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.974 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '334c7dee-26b3-4e7e-a38e-3f2c973f8bd8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:42:12.973443', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'ad493fce-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.13886976, 'message_signature': '8938f1cdaac48f9439c15cdc07c6229104fbf217efbe03f8d619ffb5fe74be43'}]}, 'timestamp': '2025-12-05 09:42:12.973723', '_unique_id': '63347ccad38a49e0947589a867b59be8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.974 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.975 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 82 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.975 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b7a60314-88aa-4460-9615-b7fe9a71fdf6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 82, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:42:12.974979', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'ad497bb0-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.13886976, 'message_signature': '26eab3695f986ebdad967ec2f520f7aba4500f44feb3ef1901659403e466ccd5'}]}, 'timestamp': '2025-12-05 09:42:12.975217', '_unique_id': '80fcff495899468ca84e40d4fc667003'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
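The pollsters in this run emit three counter types: gauge (memory.usage, an instantaneous value), cumulative (network.incoming.packets, 82 packets since the interface came up), and delta (the *.bytes.delta meters, the change since the previous poll). Illustratively, a delta can be derived from two successive cumulative readings; this is a sketch of the idea only, not ceilometer's actual derivation code:

    # Illustration of delta-from-cumulative; ceilometer's real logic lives in
    # its pollster code and also handles counter resets on instance reboot.
    def to_delta(previous, current):
        # a decrease means the counter reset, so start a new baseline
        return current - previous if current >= previous else current

    assert to_delta(0, 82) == 82   # first interval after boot
    assert to_delta(82, 82) == 0   # idle interval, matching the 0-value deltas above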
ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.975 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.975 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.975 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.975 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.975 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.976 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.989 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.990 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.991 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
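The two stacked tracebacks above are ordinary Python exception chaining: kombu's _reraise_as_library_errors catches the socket-level ConnectionRefusedError and re-raises it as kombu.exceptions.OperationalError with "raise ... from exc", so the traceback printer emits the first traceback, then "The above exception was the direct cause of the following exception:", then the second. A minimal, self-contained sketch (illustrative only, not ceilometer or kombu code) that reproduces exactly this log shape:

    import socket

    class OperationalError(Exception):
        """Stand-in for kombu.exceptions.OperationalError (illustrative only)."""

    def establish_connection(host, port):
        try:
            return socket.create_connection((host, port), timeout=5)
        except OSError as exc:  # ConnectionRefusedError is a subclass of OSError
            # 'raise ... from exc' sets __cause__, which Python renders as
            # "The above exception was the direct cause of the following
            # exception:" between the two tracebacks, as logged above.
            raise OperationalError(str(exc)) from exc

    if __name__ == "__main__":
        establish_connection("127.0.0.1", 1)  # port 1 is almost surely closed

Running it prints both tracebacks chained, with the low-level [Errno 111] error first and the library-level error second, matching the structure of every ERROR block in this log.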
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.991 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '258270c2-5edf-4648-b0c3-ac0245e41d43', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:42:12.976236', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad4bc5d2-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.150549779, 'message_signature': 'd0134f5999fb039fc386e5998e3f9a1630eab1a18fef47cc696ec1592ed4f057'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:42:12.976236', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad4bd202-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.150549779, 'message_signature': 'c962b0c4a44c07eedb84760b5c01dd4de7f9585731a39c4ea4337737e2f2ae40'}]}, 'timestamp': '2025-12-05 09:42:12.990523', '_unique_id': 'ddcfbad6c095453d86a8714c702a3146'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.991 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.992 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
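Every polling cycle in this section fails identically: the TCP connect to the message broker is refused (errno 111), meaning nothing is listening at the broker endpoint, so RabbitMQ is down or unreachable from this compute node and each sample batch is dropped at publish time. The broker endpoint comes from the agent's transport_url, which this log does not show; a quick reachability probe against an assumed endpoint (the hostname and the default AMQP port 5672 below are placeholders, not values from this log) isolates the network/broker side:

    import socket

    BROKER_HOST = "rabbitmq.ctlplane.example"  # placeholder; real host is in transport_url
    BROKER_PORT = 5672                         # default AMQP port; deployments may differ

    try:
        with socket.create_connection((BROKER_HOST, BROKER_PORT), timeout=5):
            print("TCP connect succeeded; investigate the broker/auth layer instead")
    except OSError as exc:
        print(f"TCP connect failed: {exc}")  # [Errno 111] reproduces this log's failure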
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.992 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9709902f-ef34-4dc4-970d-2e175c0fc29c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:42:12.992036', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'ad4c1636-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.13886976, 'message_signature': '0ed04510d523dd683bafcdb7143508ab22a24f5f2066cb5517b17cb2c238dbae'}]}, 'timestamp': '2025-12-05 09:42:12.992292', '_unique_id': '2fe80cbd06714bf6ae396ed45e2ce194'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:12.993 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.013 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 525 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.014 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.015 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c19a55a1-dfc4-485c-a9e8-b94abe2cdae8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 525, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:42:12.993423', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad4f7196-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.167708588, 'message_signature': 'd757ca16abd085443c875edfa9cd45ccfc8aa8268030883fb167eed68eeeb353'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:42:12.993423', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad4f7c22-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.167708588, 'message_signature': 'afc10387396effabaf01998738984f85e0aa7e5822875749b3a6ebabaa5ff2f5'}]}, 'timestamp': '2025-12-05 09:42:13.014534', '_unique_id': 'a63def73fd8e4f3584b2e94edf43642e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.016 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.016 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 8783 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.017 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'acc18eba-b39d-41a5-ab24-ed1b8233ac26', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8783, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:42:13.016333', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'ad4fcd08-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.13886976, 'message_signature': '7ffc817f5930e649c0539f3ac46b931663879a528041847d19368eb1ac614fca'}]}, 'timestamp': '2025-12-05 09:42:13.016619', '_unique_id': 'f1e6b78de0d64c7692c119bedab0cfb8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.017 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.017 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
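Because each dropped batch still logs its full Payload, the lost samples can be tallied from the log itself rather than read entry by entry. A small sketch (the log filename below is a placeholder) that counts dropped samples per counter_name across a saved copy of this log:

    import re
    from collections import Counter

    PATTERN = re.compile(r"'counter_name': '([\w.]+)'")
    dropped = Counter()

    # "ceilometer-compute.log" is a placeholder path, not from this log.
    with open("ceilometer-compute.log", encoding="utf-8") as fh:
        for line in fh:
            if "Could not send notification" in line:
                dropped.update(PATTERN.findall(line))

    for counter, n in dropped.most_common():
        print(f"{counter}: {n} dropped sample(s)")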
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3bb0d433-8f6b-4a88-8057-21b208b3e46d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:42:13.017820', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad50049e-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.167708588, 'message_signature': 'd99d1f7b0d3f696fb9799d483b3a6f71355a3e6fa8bfac3f4ee9272e9dd2c37c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:42:13.017820', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad500c78-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.167708588, 'message_signature': '1af9a2d97649b59f659f1446ab494ea0ca9f061903ecdf6ce9b912fe5d4a7dec'}]}, 'timestamp': '2025-12-05 09:42:13.018221', '_unique_id': '7c33b391551a488d8aaf971b101a5210'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py",
line 826, in __init__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.018 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.019 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.019 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.019 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 73908224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.019 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging 
[-] Could not send notification to notifications. Payload={'message_id': '9fd3ab4b-4c54-4802-b37b-0e4941251e40', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73908224, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:42:13.019328', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad503fae-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.167708588, 'message_signature': '52f11d8985748457917453ee3775b3d185e63b9501e85d0e74bd271a00314418'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:42:13.019328', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad504760-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.167708588, 'message_signature': 'b927a3348999040c33367cf1d12ec49f44faa3ab193425f14cc159c4f74d0586'}]}, 'timestamp': '2025-12-05 09:42:13.019729', '_unique_id': '1ec167fa5113425d9d66bd85240f7426'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:42:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR 
oslo_messaging.notify.messaging Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.020 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1216962709 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 209749905 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '41fec6cc-2a6e-4fa9-885b-90eab59f8dcd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1216962709, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:42:13.020796', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad5078fc-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.167708588, 'message_signature': '4a41f38e6e7bf11f7a675a0476f70b6d92137e1bce2f4c78f72551f966041ed9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 209749905, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:42:13.020796', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad50807c-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.167708588, 'message_signature': '78f3f388e20de82652fa3f4753113b03cb1d922ab9ad6caefebda4f4f33b2f4d'}]}, 'timestamp': '2025-12-05 09:42:13.021194', '_unique_id': 'abee843193a446dfb0501bd097d20502'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.021 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.022 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.022 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 161823320 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.022 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 27606506 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '067d68ea-e2d3-4c54-80c7-ac836bfafb5e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 161823320, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:42:13.022331', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad50b556-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.167708588, 'message_signature': 'dcd3323343bd37e826dc5bee09b4fd0ce400e5bb4fb849db00af28e175831705'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27606506, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:42:13.022331', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad50bd26-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.167708588, 'message_signature': '3588d1d912540e794d1f3f9b1eb839435667870913fc85601d967a6392475223'}]}, 'timestamp': '2025-12-05 09:42:13.022752', '_unique_id': '1541546ab3484e8d98c9c3b6efe65e4c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.023 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '54112e76-58ca-4bea-a508-9a8b706e5ce9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:42:13.023835', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad50f052-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.150549779, 'message_signature': '63f7f59a555551bcf38a9f940a2f53dd253b1f364747986ba0fc9defe7c91091'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:42:13.023835', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad50f8a4-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.150549779, 'message_signature': 'a398530451a5ba1804e281071cd62df1738e4bbdf24fb891b5834173492de94a'}]}, 'timestamp': '2025-12-05 09:42:13.024302', '_unique_id': 'aa05eab30e194615a9433a86cee609a9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.024 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.029 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
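[Editor's note on the chained pair of tracebacks shown once in full above, which recurs for every sample in this polling cycle: oslo.messaging asks its connection pool for a broker connection, kombu's ensure_connection() retries the AMQP connect through retry_over_time(), the TCP connect() to the RabbitMQ endpoint is refused (errno 111, nothing listening), and kombu's _reraise_as_library_errors re-raises the socket-level error as kombu.exceptions.OperationalError, which oslo_messaging.notify.messaging then logs per sample. A minimal sketch of that chain, assuming only that kombu is installed and that the broker URL below is a placeholder pointing at a port with no listener (it is not taken from this log):

    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Hypothetical broker URL; nothing is listening on this port.
    conn = Connection('amqp://guest:guest@127.0.0.1:5672//', connect_timeout=2)
    try:
        # ensure_connection() retries via kombu.utils.functional.retry_over_time()
        # and, when the socket connect() is refused, re-raises the
        # ConnectionRefusedError as kombu.exceptions.OperationalError inside
        # _reraise_as_library_errors -- the same chained pair of tracebacks
        # recorded above.
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print('broker unreachable:', exc)  # e.g. [Errno 111] Connection refused

Because the agent keeps polling, each pollster below queues a fresh notification and fails identically; only the "Could not send notification" record and its payload differ from sample to sample.]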
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.030 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8b2ed027-94e7-412b-ba55-9d005be89358', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:42:13.029395', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'ad51c950-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.13886976, 'message_signature': '6d5688c98ff966b91d99501a2c0f8f1f42fd488f3efd387a445188e94871cf38'}]}, 'timestamp': '2025-12-05 09:42:13.029625', '_unique_id': 'afffd3d33c6945e29d0588b620cba63b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.030 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.030 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.031 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '674c18ec-c322-4846-9b21-cd193be69c55', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:42:13.030659', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'ad51fa38-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.13886976, 'message_signature': '59f556aba0f2f58cc7556ac983a96c1e39c820ea4e938328d2cc1eef5fba2fab'}]}, 'timestamp': '2025-12-05 09:42:13.030870', '_unique_id': 'ba221138f3384a23be4a7b42cead2ffc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.031 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.031 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.032 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.032 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '44bf698d-5654-4fea-93d2-ac953251f193', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:42:13.031868', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad522990-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.150549779, 'message_signature': 'e3b71276a071437b70dfa720211398361787ed6bac999e373f1a2f01eee82e7f'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:42:13.031868', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad52312e-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.150549779, 'message_signature': 'a45c8df514d4157438696b023144d7772254632d2b1b2f884a9bb9c3319e5924'}]}, 'timestamp': '2025-12-05 09:42:13.032307', '_unique_id': '4dc91283d990490ab54d12e3bc4695d3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.033 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.033 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.034 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e73aaf0a-0769-4961-931e-ae1b5a1a3922', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:42:13.033513', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'ad526a0e-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.13886976, 'message_signature': '9e3b6c1986f420e741b7592fe26bdd5bce0e3ceb3a8635b424dd1c03ac1aaedc'}]}, 'timestamp': '2025-12-05 09:42:13.033742', '_unique_id': '65d4524087b5400d9a9087958e29fc44'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.035 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.035 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.035 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.035 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Payload={'message_id': 'ac8e6282-9fe1-450f-bdf5-33ae67edab12', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:42:13.035663', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'ad52bdd8-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.13886976, 'message_signature': '8aea09ed8f83b4c332f602905ad46e79a7e79528347ccce3af22645d2d641396'}]}, 'timestamp': '2025-12-05 09:42:13.035885', '_unique_id': 'bdf7e95c8699485fa3afe18e157a6f5e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:42:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.036 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '81fbc215-e4b7-4dcf-a9fd-60db503a9603', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:42:13.036964', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad52f096-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.167708588, 'message_signature': '21b50109c60852ba00a119bf49ab261e1d9f0b4763ab66674ee0668302219ea4'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:42:13.036964', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad52f8a2-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10848.167708588, 'message_signature': '78e8f2b7ae3785744cc7092cbedcaefe6291c6660557f5253a643588996a6d6d'}]}, 'timestamp': '2025-12-05 09:42:13.037372', '_unique_id': 'cbaf175e84414f20a60ae4fdd82ce3ec'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:42:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:42:13.037 12 ERROR oslo_messaging.notify.messaging Dec 5 04:42:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:42:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:42:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
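Every oslo.messaging traceback in this stretch bottoms out in the same place: the AMQP socket connect to the RabbitMQ broker fails with errno 111, kombu's _reraise_as_library_errors context manager converts the ConnectionRefusedError into kombu.exceptions.OperationalError, and ceilometer drops the polled sample batch. A minimal sketch of that re-raise chain, assuming only that kombu is installed and that nothing listens on the broker port used below:

# Minimal reproduction sketch (assumption: no broker on 127.0.0.1:5672).
# kombu wraps the raw socket error in OperationalError, producing exactly
# the pair of chained exceptions ("direct cause") logged above.
from kombu import Connection
from kombu.exceptions import OperationalError

conn = Connection("amqp://guest:guest@127.0.0.1:5672//", connect_timeout=2)
try:
    # ensure_connection() -> retry_over_time() -> amqp transport connect(),
    # the same frames that appear in the tracebacks above.
    conn.ensure_connection(max_retries=1)
except OperationalError as exc:
    print("broker unreachable:", exc)  # [Errno 111] Connection refused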
Dec 5 04:42:13 localhost podman[239519]: time="2025-12-05T09:42:13Z" level=error msg="Getting root fs size for \"6f418ba5f23418820f862c414d92c2cf87336cd9e68f199ee0995fc52caffdf0\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy"
Dec 5 04:42:13 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 5 04:42:13 localhost systemd[1]: var-lib-containers-storage-overlay-a5a1ffd2a50d56761b34f05757b6aa62c392850ba0c772dedfb7a28c75dd0104-merged.mount: Deactivated successfully.
Dec 5 04:42:13 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:13 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 5 04:42:13 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 5 04:42:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19543 DF PROTO=TCP SPT=60344 DPT=9100 SEQ=1070699597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC54E5860000000001030307)
Dec 5 04:42:14 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 5 04:42:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:15 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 5 04:42:15 localhost systemd[1]: var-lib-containers-storage-overlay-e83f227a3727657fa28ad038d3faa4acf2257033c4c4d3daa819bb5632ffbdd2-merged.mount: Deactivated successfully.
Dec 5 04:42:15 localhost systemd[1]: var-lib-containers-storage-overlay-e83f227a3727657fa28ad038d3faa4acf2257033c4c4d3daa819bb5632ffbdd2-merged.mount: Deactivated successfully.
Dec 5 04:42:16 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 5 04:42:16 localhost systemd[1]: var-lib-containers-storage-overlay-b49c0bfdfd68cdef0107452428014cc1318b2cf8507d683119189823e098b7e5-merged.mount: Deactivated successfully.
Dec 5 04:42:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:42:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
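The podman[239519] error above is a size query racing a mounted layer: computing a container's root filesystem size involves mounting and unmounting its overlay layers, and the unmount returns "device or resource busy" while another mount still uses the same layer (the systemd *-merged.mount lines and the kernel overlayfs warnings are the visible side of this). A read-only sketch for listing which overlay "merged" mount points are currently active, assuming the containers/storage default path seen in the log:

# Sketch: enumerate active container overlay mounts, the same
# /var/lib/containers/storage/overlay/<layer>/merged paths that systemd
# reports as *-merged.mount units above. Purely read-only.
PREFIX = "/var/lib/containers/storage/overlay/"

with open("/proc/self/mountinfo") as f:
    for line in f:
        mount_point = line.split()[4]  # field 5 of mountinfo is the mount point
        if mount_point.startswith(PREFIX) and mount_point.endswith("/merged"):
            print(mount_point)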
Dec 5 04:42:16 localhost podman[246240]: 2025-12-05 09:42:16.412978253 +0000 UTC m=+0.051954751 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41)
Dec 5 04:42:16 localhost podman[246240]: 2025-12-05 09:42:16.421447274 +0000 UTC m=+0.060423782 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 5 04:42:16 localhost nova_compute[229251]: 2025-12-05 09:42:16.488 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 5 04:42:16 localhost nova_compute[229251]: 2025-12-05 09:42:16.490 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 5 04:42:16 localhost nova_compute[229251]: 2025-12-05 09:42:16.490 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 5 04:42:16 localhost nova_compute[229251]: 2025-12-05 09:42:16.490 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 5 04:42:16 localhost nova_compute[229251]: 2025-12-05 09:42:16.522 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:42:16 localhost nova_compute[229251]: 2025-12-05 09:42:16.523 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 5 04:42:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19545 DF PROTO=TCP SPT=60344 DPT=9100 SEQ=1070699597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC54F1850000000001030307)
Dec 5 04:42:17 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:42:18 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 5 04:42:18 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 5 04:42:18 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 04:42:18 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:18 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
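The ovsdbapp/ovs IDL lines above are the normal OVSDB keepalive cycle, not a fault: after roughly 5 s of idle the client sends an inactivity probe (a JSON-RPC echo, per RFC 7047) to tcp:127.0.0.1:6640, transitions to IDLE, and returns to ACTIVE when the reply arrives. A minimal sketch of the same probe, assuming a local ovsdb-server is listening on 6640 as in this log:

# Sketch: send the OVSDB JSON-RPC "echo" that the IDL uses as its
# inactivity probe (RFC 7047). Assumes ovsdb-server on tcp:127.0.0.1:6640.
import json
import socket

with socket.create_connection(("127.0.0.1", 6640), timeout=5) as sock:
    sock.sendall(json.dumps({"method": "echo", "params": [], "id": "echo"}).encode())
    print(sock.recv(4096).decode())  # expect a reply with "id": "echo"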
Dec 5 04:42:18 localhost podman[246239]: 2025-12-05 09:42:18.356824191 +0000 UTC m=+1.997429070 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 5 04:42:18 localhost podman[246239]: 2025-12-05 09:42:18.391685825 +0000 UTC m=+2.032290744 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 5 04:42:20 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 5 04:42:20 localhost podman[239519]: time="2025-12-05T09:42:20Z" level=error msg="Getting root fs size for \"77622834fd11ae3639de60126b3082de3736f43714a47c94e296f3d166339fe8\": getting diffsize of layer \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\" and its parent \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\": unmounting layer f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958: replacing mount point \"/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged\": device or resource busy"
Dec 5 04:42:20 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:42:20 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:42:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:20 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 04:42:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17208 DF PROTO=TCP SPT=54892 DPT=9101 SEQ=4029102854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC54FD850000000001030307)
Dec 5 04:42:21 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:21 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 5 04:42:21 localhost nova_compute[229251]: 2025-12-05 09:42:21.523 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 5 04:42:21 localhost nova_compute[229251]: 2025-12-05 09:42:21.525 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 5 04:42:21 localhost nova_compute[229251]: 2025-12-05 09:42:21.525 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 5 04:42:21 localhost nova_compute[229251]: 2025-12-05 09:42:21.525 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 5 04:42:21 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 5 04:42:21 localhost nova_compute[229251]: 2025-12-05 09:42:21.570 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:42:21 localhost nova_compute[229251]: 2025-12-05 09:42:21.571 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 5 04:42:22 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 5 04:42:22 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:22 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:23 localhost systemd[1]: var-lib-containers-storage-overlay-cfad65d42b0cbb79e9d33fc40f5cc7671e93233d141fb70d7048f15080a51bd8-merged.mount: Deactivated successfully.
Dec 5 04:42:23 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 5 04:42:23 localhost systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 5 04:42:23 localhost systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 5 04:42:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23531 DF PROTO=TCP SPT=35172 DPT=9882 SEQ=3097233164 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC550A450000000001030307)
Dec 5 04:42:24 localhost systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 5 04:42:25 localhost systemd[1]: var-lib-containers-storage-overlay-a5a1ffd2a50d56761b34f05757b6aa62c392850ba0c772dedfb7a28c75dd0104-merged.mount: Deactivated successfully.
Dec 5 04:42:25 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 5 04:42:25 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 5 04:42:25 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 5 04:42:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 04:42:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
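The recurring kernel DROPPING records are an iptables/nftables LOG rule firing on inbound SYNs from 192.168.122.10 to ports 9100, 9101 and 9882 on br-ex, i.e. Prometheus-style scrapes of the exporters being firewalled; the same SEQ value repeating across records is TCP retransmitting the same blocked connection attempt. A small parser sketch for pulling the interesting fields out of one of these records (the sample line below is shortened from the log):

# Sketch: parse the KEY=VALUE fields of a kernel firewall LOG line like
# the DROPPING records above. Fields without a value (OUT=, DF, SYN) are
# simply skipped by the regex.
import re

LINE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
        "MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 "
        "DST=192.168.122.106 LEN=60 TTL=62 ID=23531 DF PROTO=TCP "
        "SPT=35172 DPT=9882 SYN")

fields = dict(m.split("=", 1) for m in re.findall(r"\S+=\S+", LINE))
print(fields["SRC"], "->", fields["DST"], fields["PROTO"], "dport", fields["DPT"])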
Dec 5 04:42:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:25 localhost podman[246276]: 2025-12-05 09:42:25.901703336 +0000 UTC m=+0.048083992 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 04:42:25 localhost podman[246276]: 2025-12-05 09:42:25.937593582 +0000 UTC m=+0.083974248 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 04:42:25 localhost podman[246276]: unhealthy
Dec 5 04:42:26 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 5 04:42:26 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 5 04:42:26 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:26 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
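Here the podman_exporter healthcheck flips to unhealthy and its transient systemd unit exits 1, which is how podman surfaces a failed healthcheck under systemd; the multipathd, ovn_metadata_agent and ceilometer_agent_compute checks in the surrounding records stay healthy. A quick way to read back the recorded state for the container named in the log, assuming podman is on PATH (on older podman releases the Go template path is .State.Healthcheck.Status):

# Sketch: read the health state that the healthcheck events above record.
# Container name taken from the log; template path may differ on older podman.
import subprocess

out = subprocess.run(
    ["podman", "inspect", "--format", "{{.State.Health.Status}}", "podman_exporter"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # "healthy" or "unhealthy"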
Dec 5 04:42:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17210 DF PROTO=TCP SPT=54892 DPT=9101 SEQ=4029102854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5515450000000001030307)
Dec 5 04:42:26 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:42:26 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Failed with result 'exit-code'.
Dec 5 04:42:26 localhost nova_compute[229251]: 2025-12-05 09:42:26.572 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 5 04:42:26 localhost nova_compute[229251]: 2025-12-05 09:42:26.574 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 5 04:42:26 localhost nova_compute[229251]: 2025-12-05 09:42:26.574 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 5 04:42:26 localhost nova_compute[229251]: 2025-12-05 09:42:26.574 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 5 04:42:26 localhost nova_compute[229251]: 2025-12-05 09:42:26.616 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:42:26 localhost nova_compute[229251]: 2025-12-05 09:42:26.617 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 5 04:42:27 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 5 04:42:27 localhost systemd[1]: var-lib-containers-storage-overlay-61edff4e636e5b27fc65056d8e3af9182b499697a247cb1b23f64e881b58c13e-merged.mount: Deactivated successfully.
Dec 5 04:42:27 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 5 04:42:27 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 5 04:42:27 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 5 04:42:28 localhost systemd[1]: var-lib-containers-storage-overlay-b49c0bfdfd68cdef0107452428014cc1318b2cf8507d683119189823e098b7e5-merged.mount: Deactivated successfully.
Dec 5 04:42:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:42:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 04:42:29 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 5 04:42:29 localhost systemd[1]: tmp-crun.FToGv9.mount: Deactivated successfully.
Dec 5 04:42:29 localhost podman[246298]: 2025-12-05 09:42:29.187237537 +0000 UTC m=+0.080469669 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 5 04:42:29 localhost podman[246299]: 2025-12-05 09:42:29.241283392 +0000 UTC m=+0.129067286 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 04:42:29 localhost podman[246299]: 2025-12-05 09:42:29.25259127 +0000 UTC m=+0.140375224 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true)
Dec 5 04:42:29 localhost podman[246298]: 2025-12-05 09:42:29.268084426 +0000 UTC m=+0.161316598 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 5 04:42:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19547 DF PROTO=TCP SPT=60344 DPT=9100 SEQ=1070699597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5522450000000001030307)
Dec 5 04:42:30 localhost systemd[1]: tmp-crun.Qx8Vhi.mount: Deactivated successfully.
Dec 5 04:42:31 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:42:31 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 5 04:42:31 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 5 04:42:31 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 04:42:31 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:31 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:31 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 04:42:31 localhost nova_compute[229251]: 2025-12-05 09:42:31.618 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:42:31 localhost nova_compute[229251]: 2025-12-05 09:42:31.620 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:42:31 localhost nova_compute[229251]: 2025-12-05 09:42:31.620 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 5 04:42:31 localhost nova_compute[229251]: 2025-12-05 09:42:31.620 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:42:31 localhost nova_compute[229251]: 2025-12-05 09:42:31.658 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:42:31 localhost nova_compute[229251]: 2025-12-05 09:42:31.659 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:42:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 04:42:33 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 5 04:42:34 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:42:34 localhost podman[239519]: time="2025-12-05T09:42:34Z" level=error msg="Getting root fs size for \"86a5bb568f03cd328e2cf8a582e76a0d021251c9b844c126dbf5c190f5239479\": getting diffsize of layer \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\" and its parent \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\": unmounting layer f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958: replacing mount point \"/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged\": device or resource busy"
Dec 5 04:42:34 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:34 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:34 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:34 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:34 localhost podman[246335]: 2025-12-05 09:42:34.033663456 +0000 UTC m=+0.927816341 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 5 04:42:34 localhost podman[246335]: 2025-12-05 09:42:34.147860817 +0000 UTC m=+1.042013752 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 5 04:42:34 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:42:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24353 DF PROTO=TCP SPT=51326 DPT=9105 SEQ=3491206127 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5533C50000000001030307)
Dec 5 04:42:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22 DF PROTO=TCP SPT=41790 DPT=9882 SEQ=2253871229 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5538460000000001030307)
Dec 5 04:42:36 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 5 04:42:36 localhost systemd[1]: var-lib-containers-storage-overlay-7f73bcae01b7702200338ad4342b9b9ed0cf919b4d312d9f1db008799e357a3f-merged.mount: Deactivated successfully.
Dec 5 04:42:36 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:36 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:36 localhost systemd[1]: var-lib-containers-storage-overlay-7f73bcae01b7702200338ad4342b9b9ed0cf919b4d312d9f1db008799e357a3f-merged.mount: Deactivated successfully.
Dec 5 04:42:36 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 04:42:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:36 localhost nova_compute[229251]: 2025-12-05 09:42:36.660 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:42:36 localhost nova_compute[229251]: 2025-12-05 09:42:36.662 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:42:36 localhost nova_compute[229251]: 2025-12-05 09:42:36.662 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 5 04:42:36 localhost nova_compute[229251]: 2025-12-05 09:42:36.662 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:42:36 localhost nova_compute[229251]: 2025-12-05 09:42:36.688 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:42:36 localhost nova_compute[229251]: 2025-12-05 09:42:36.689 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:42:37 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:37 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 5 04:42:37 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 5 04:42:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:38 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 5 04:42:38 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 5 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 04:42:39 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 5 04:42:39 localhost podman[246359]: 2025-12-05 09:42:39.459313895 +0000 UTC m=+0.098071554 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 5 04:42:39 localhost podman[246359]: 2025-12-05 09:42:39.488286094 +0000 UTC m=+0.127043783 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 04:42:40 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 5 04:42:40 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:42:41 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 5 04:42:41 localhost systemd[1]: var-lib-containers-storage-overlay-cfad65d42b0cbb79e9d33fc40f5cc7671e93233d141fb70d7048f15080a51bd8-merged.mount: Deactivated successfully.
Dec 5 04:42:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28476 DF PROTO=TCP SPT=50332 DPT=9102 SEQ=2930530576 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC554ED60000000001030307)
Dec 5 04:42:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:41 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 04:42:41 localhost nova_compute[229251]: 2025-12-05 09:42:41.690 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:42:41 localhost nova_compute[229251]: 2025-12-05 09:42:41.692 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:42:41 localhost nova_compute[229251]: 2025-12-05 09:42:41.692 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 5 04:42:41 localhost nova_compute[229251]: 2025-12-05 09:42:41.692 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:42:41 localhost nova_compute[229251]: 2025-12-05 09:42:41.722 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:42:41 localhost nova_compute[229251]: 2025-12-05 09:42:41.723 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:42:41 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:42:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28477 DF PROTO=TCP SPT=50332 DPT=9102 SEQ=2930530576 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5552C50000000001030307)
Dec 5 04:42:42 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 5 04:42:42 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 5 04:42:42 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 5 04:42:42 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 5 04:42:43 localhost systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 5 04:42:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:43 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 5 04:42:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:43 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 5 04:42:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57659 DF PROTO=TCP SPT=35224 DPT=9100 SEQ=2682353424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC555AB70000000001030307)
Dec 5 04:42:44 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 5 04:42:44 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:44 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:45 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 5 04:42:45 localhost systemd[1]: var-lib-containers-storage-overlay-61edff4e636e5b27fc65056d8e3af9182b499697a247cb1b23f64e881b58c13e-merged.mount: Deactivated successfully.
Dec 5 04:42:45 localhost systemd[1]: var-lib-containers-storage-overlay-61edff4e636e5b27fc65056d8e3af9182b499697a247cb1b23f64e881b58c13e-merged.mount: Deactivated successfully.
Dec 5 04:42:45 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 5 04:42:45 localhost systemd[1]: var-lib-containers-storage-overlay-a1f7fa99b1a308b31b9c9168cd0ceb32b0fd11de133cd67585d173dee9412861-merged.mount: Deactivated successfully.
Dec 5 04:42:46 localhost nova_compute[229251]: 2025-12-05 09:42:46.723 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:42:46 localhost nova_compute[229251]: 2025-12-05 09:42:46.725 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:42:46 localhost nova_compute[229251]: 2025-12-05 09:42:46.725 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 5 04:42:46 localhost nova_compute[229251]: 2025-12-05 09:42:46.725 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:42:46 localhost nova_compute[229251]: 2025-12-05 09:42:46.776 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:42:46 localhost nova_compute[229251]: 2025-12-05 09:42:46.777 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:42:46 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 5 04:42:46 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 5 04:42:46 localhost systemd[1]: session-55.scope: Deactivated successfully.
Dec 5 04:42:46 localhost systemd[1]: session-55.scope: Consumed 1min 14.994s CPU time.
Dec 5 04:42:46 localhost systemd-logind[760]: Session 55 logged out. Waiting for processes to exit.
Dec 5 04:42:46 localhost systemd-logind[760]: Removed session 55.
Dec 5 04:42:47 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 5 04:42:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57661 DF PROTO=TCP SPT=35224 DPT=9100 SEQ=2682353424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5566C50000000001030307)
Dec 5 04:42:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:47 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 5 04:42:48 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 5 04:42:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 04:42:48 localhost systemd[1]: var-lib-containers-storage-overlay-0a6ced32e5cb1e91cad73934204d8f5cbbf79aafab7ac712dfe07034e18c0d6e-merged.mount: Deactivated successfully.
Dec 5 04:42:48 localhost podman[246382]: 2025-12-05 09:42:48.516852775 +0000 UTC m=+0.099744416 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container)
Dec 5 04:42:48 localhost podman[246382]: 2025-12-05 09:42:48.557523017 +0000 UTC m=+0.140414588 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64)
Dec 5 04:42:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9514 DF PROTO=TCP SPT=54016 DPT=9101 SEQ=430175687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5572C50000000001030307)
Dec 5 04:42:50 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:42:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:42:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 5 04:42:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 5 04:42:50 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 04:42:50 localhost podman[246400]: 2025-12-05 09:42:50.819337459 +0000 UTC m=+0.364245604 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 5 04:42:50 localhost podman[246400]: 2025-12-05 09:42:50.858724203 +0000 UTC m=+0.403632278 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 5 04:42:51 localhost nova_compute[229251]: 2025-12-05 09:42:51.777 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:42:51 localhost nova_compute[229251]: 2025-12-05 09:42:51.779 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:42:51 localhost nova_compute[229251]: 2025-12-05 09:42:51.779 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 5 04:42:51 localhost nova_compute[229251]: 2025-12-05 09:42:51.779 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:42:51 localhost nova_compute[229251]: 2025-12-05 09:42:51.813 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:42:51 localhost nova_compute[229251]: 2025-12-05 09:42:51.814 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:42:52 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 5 04:42:52 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:42:52 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 04:42:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4892 DF PROTO=TCP SPT=37886 DPT=9101 SEQ=1248063907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC557E460000000001030307)
Dec 5 04:42:53 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 5 04:42:54 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 5 04:42:54 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:54 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:54 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:54 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:42:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28480 DF PROTO=TCP SPT=50332 DPT=9102 SEQ=2930530576 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC558A450000000001030307)
Dec 5 04:42:56 localhost nova_compute[229251]: 2025-12-05 09:42:56.815 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:42:56 localhost nova_compute[229251]: 2025-12-05 09:42:56.817 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:42:56 localhost nova_compute[229251]: 2025-12-05 09:42:56.817 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 5 04:42:56 localhost nova_compute[229251]: 2025-12-05 09:42:56.817 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:42:56 localhost nova_compute[229251]: 2025-12-05 09:42:56.846 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:42:56 localhost nova_compute[229251]: 2025-12-05 09:42:56.847 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:42:56 localhost systemd[1]: var-lib-containers-storage-overlay-7f73bcae01b7702200338ad4342b9b9ed0cf919b4d312d9f1db008799e357a3f-merged.mount: Deactivated successfully.
Dec 5 04:42:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 04:42:57 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 5 04:42:57 localhost podman[246419]: 2025-12-05 09:42:57.098195927 +0000 UTC m=+0.085032280 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 5 04:42:57 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:57 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 5 04:42:57 localhost podman[246419]: 2025-12-05 09:42:57.137617882 +0000 UTC m=+0.124454245 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 04:42:57 localhost podman[246419]: unhealthy
Dec 5 04:42:57 localhost systemd[1]: var-lib-containers-storage-overlay-b95431b1478a17506a6e7089f072573540986d5218a9d3dfea91f5817bd1ba9b-merged.mount: Deactivated successfully.
Dec 5 04:42:59 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:42:59 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 5 04:42:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57663 DF PROTO=TCP SPT=35224 DPT=9100 SEQ=2682353424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5596450000000001030307)
Dec 5 04:42:59 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 5 04:42:59 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:42:59 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Failed with result 'exit-code'.
Dec 5 04:43:01 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 5 04:43:01 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 5 04:43:01 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 5 04:43:01 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:43:01 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 5 04:43:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:43:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 04:43:01 localhost podman[246488]: 2025-12-05 09:43:01.600574586 +0000 UTC m=+0.097010602 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 5 04:43:01 localhost podman[246488]: 2025-12-05 09:43:01.609551418 +0000 UTC m=+0.105987424 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 5 04:43:01 localhost podman[246489]: 2025-12-05 09:43:01.651557581 +0000 UTC m=+0.145429230 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 5 04:43:01 localhost podman[246489]: 2025-12-05 09:43:01.685533251 +0000 UTC m=+0.179404900 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 04:43:01 localhost nova_compute[229251]: 2025-12-05 09:43:01.848 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:43:01 localhost nova_compute[229251]: 2025-12-05 09:43:01.849 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:43:01 localhost nova_compute[229251]: 2025-12-05 09:43:01.849 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 5 04:43:01 localhost nova_compute[229251]: 2025-12-05 09:43:01.850 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:43:01 localhost nova_compute[229251]: 2025-12-05 09:43:01.899 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:43:01 localhost nova_compute[229251]: 2025-12-05 09:43:01.900 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:43:02 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 5 04:43:02 localhost podman[239519]: time="2025-12-05T09:43:02Z" level=error msg="Getting root fs size for \"94e41b96a29b9068139ca21e4160db7bd5b4014ee1bfd974b4dbf1349571bfc2\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": unmounting layer 3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae: replacing mount point \"/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged\": device or resource busy" Dec 5 04:43:02 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:02 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Dec 5 04:43:03 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:43:03 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:43:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:43:03.891 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:43:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:43:03.891 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:43:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:43:03.892 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:43:04 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:04 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:04 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21605 DF PROTO=TCP SPT=45672 DPT=9105 SEQ=2849432898 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC55A9050000000001030307) Dec 5 04:43:05 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:43:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
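[annotation] The podman "Getting root fs size ... device or resource busy" error above means podman tried to replace an overlay layer's merged mountpoint while something still held it, and the surrounding systemd lines show those same ...-merged.mount units being deactivated moments later. A small diagnostic sketch, assuming a Linux host with /proc mounted, that lists overlay layers whose merged directory is still mounted; the storage path prefix is taken from the log, everything else is generic.

    from pathlib import Path

    STORAGE_PREFIX = "/var/lib/containers/storage/overlay/"

    def busy_merged_mounts(mountinfo: str = "/proc/self/mountinfo"):
        """Return overlay layer IDs whose 'merged' dir is currently mounted."""
        layers = []
        for line in Path(mountinfo).read_text().splitlines():
            # Field 5 (index 4) of a mountinfo record is the mount point.
            mount_point = line.split()[4]
            if mount_point.startswith(STORAGE_PREFIX) and mount_point.endswith("/merged"):
                layers.append(mount_point[len(STORAGE_PREFIX):-len("/merged")])
        return layers

    if __name__ == "__main__":
        for layer in busy_merged_mounts():
            print("still mounted:", layer)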
Dec 5 04:43:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:05 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:43:05 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 5 04:43:05 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:43:06 localhost systemd[1]: var-lib-containers-storage-overlay-a1f7fa99b1a308b31b9c9168cd0ceb32b0fd11de133cd67585d173dee9412861-merged.mount: Deactivated successfully. Dec 5 04:43:06 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:06 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Dec 5 04:43:06 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Dec 5 04:43:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:43:06 localhost podman[246563]: 2025-12-05 09:43:06.519415891 +0000 UTC m=+0.072479048 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Dec 5 04:43:06 localhost podman[246563]: 2025-12-05 09:43:06.604226202 +0000 UTC m=+0.157289309 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 
'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:43:06 localhost nova_compute[229251]: 2025-12-05 09:43:06.901 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:06 localhost nova_compute[229251]: 2025-12-05 09:43:06.903 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:06 localhost nova_compute[229251]: 2025-12-05 09:43:06.903 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:43:06 localhost nova_compute[229251]: 2025-12-05 09:43:06.903 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:06 localhost nova_compute[229251]: 2025-12-05 09:43:06.904 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:06 localhost nova_compute[229251]: 2025-12-05 09:43:06.905 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:07 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:43:07 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Dec 5 04:43:07 localhost systemd[1]: var-lib-containers-storage-overlay-d3249738e4d9421a1f001f331e5d9a6df3d763b74a379f4e69853dd9965f5c52-merged.mount: Deactivated successfully. Dec 5 04:43:07 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:43:07 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:43:08 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. 
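[annotation] Each "Started /usr/bin/podman healthcheck run <id>" / health_status / exec_died triple above is one healthcheck invocation: a transient systemd unit runs the container's configured test (here '/openstack/healthcheck') and podman records the result as a container event. The same check can be driven by hand; a sketch, assuming the ovn_controller container from the log exists locally, where exit status 0 means healthy and non-zero means unhealthy.

    import subprocess

    def check(container: str) -> bool:
        """Run the container's configured healthcheck once; True if healthy."""
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        name = "ovn_controller"  # container name taken from the log above
        print(name, "healthy" if check(name) else "unhealthy")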
Dec 5 04:43:08 localhost systemd[1]: var-lib-containers-storage-overlay-0a6ced32e5cb1e91cad73934204d8f5cbbf79aafab7ac712dfe07034e18c0d6e-merged.mount: Deactivated successfully. Dec 5 04:43:08 localhost nova_compute[229251]: 2025-12-05 09:43:08.657 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:43:08 localhost nova_compute[229251]: 2025-12-05 09:43:08.684 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:43:08 localhost nova_compute[229251]: 2025-12-05 09:43:08.684 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:43:08 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. Dec 5 04:43:09 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. Dec 5 04:43:09 localhost nova_compute[229251]: 2025-12-05 09:43:09.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:43:09 localhost nova_compute[229251]: 2025-12-05 09:43:09.270 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 04:43:09 localhost nova_compute[229251]: 2025-12-05 09:43:09.270 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 04:43:09 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. 
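[annotation] The run of "Running periodic task ComputeManager._..." lines comes from oslo.service's periodic-task machinery: methods on the compute manager are decorated as periodic tasks, and a single run_periodic_tasks() pass walks them, logging each at DEBUG before deciding whether it is due. A minimal sketch of that pattern, assuming oslo.service and oslo.config are installed; the task body here is invented for illustration, only the name mirrors the log.

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Placeholder body; nova's real task refreshes one instance's
            # network info cache per pass.
            print("healing info cache")

    if __name__ == "__main__":
        mgr = Manager()
        # One scheduler pass; tasks whose spacing has not elapsed are skipped.
        mgr.run_periodic_tasks(context=None)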
Dec 5 04:43:10 localhost nova_compute[229251]: 2025-12-05 09:43:10.251 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:43:10 localhost nova_compute[229251]: 2025-12-05 09:43:10.251 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:43:10 localhost nova_compute[229251]: 2025-12-05 09:43:10.251 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:43:10 localhost nova_compute[229251]: 2025-12-05 09:43:10.251 229255 DEBUG nova.objects.instance [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:43:10 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:43:10 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:43:10 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:43:10 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. 
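[annotation] The Acquiring/Acquired/Releasing triplets around "refresh_cache-<uuid>" are oslo.concurrency's lockutils at work: nova serializes cache refreshes per instance by taking a named lock, and the DEBUG lines above are emitted on entry and exit. A sketch of the same pattern, assuming oslo.concurrency is installed; the lock name mirrors the one in the log and the body is a placeholder.

    from oslo_concurrency import lockutils

    instance_uuid = "96a47a1c-57c7-4bb1-aecc-33db976db8c7"  # from the log above

    # Named lock; nova wraps its cache refresh in exactly this kind of
    # context manager (it logs Acquiring/Acquired/Releasing around the body).
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        # ... refresh the instance's network info cache ...
        pass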
Dec 5 04:43:10 localhost nova_compute[229251]: 2025-12-05 09:43:10.913 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:43:10 localhost nova_compute[229251]: 2025-12-05 09:43:10.930 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:43:10 localhost nova_compute[229251]: 2025-12-05 09:43:10.931 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:43:10 localhost nova_compute[229251]: 2025-12-05 09:43:10.932 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:43:10 localhost nova_compute[229251]: 2025-12-05 09:43:10.933 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:43:10 localhost nova_compute[229251]: 2025-12-05 09:43:10.933 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:43:10 localhost nova_compute[229251]: 2025-12-05 09:43:10.934 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:43:10 localhost nova_compute[229251]: 2025-12-05 09:43:10.934 229255 DEBUG nova.compute.manager [None 
req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 04:43:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56286 DF PROTO=TCP SPT=57770 DPT=9102 SEQ=1414016922 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC55C4060000000001030307) Dec 5 04:43:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:43:11 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:43:11 localhost podman[246588]: 2025-12-05 09:43:11.434323557 +0000 UTC m=+0.074757747 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:43:11 localhost podman[246588]: 2025-12-05 09:43:11.438305538 +0000 UTC m=+0.078739728 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck 
node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:43:11 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 5 04:43:11 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:43:11 localhost nova_compute[229251]: 2025-12-05 09:43:11.905 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:11 localhost nova_compute[229251]: 2025-12-05 09:43:11.907 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:11 localhost nova_compute[229251]: 2025-12-05 09:43:11.907 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:43:11 localhost nova_compute[229251]: 2025-12-05 09:43:11.907 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:11 localhost nova_compute[229251]: 2025-12-05 09:43:11.930 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:43:11 localhost nova_compute[229251]: 2025-12-05 09:43:11.961 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:11 localhost nova_compute[229251]: 2025-12-05 09:43:11.962 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56287 DF PROTO=TCP SPT=57770 DPT=9102 SEQ=1414016922 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC55C8050000000001030307) Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.287 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.288 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock 
"compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.288 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.288 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.289 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:43:12 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:43:12 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:12 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:43:12 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:43:12 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.747 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.806 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.807 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:43:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21606 DF PROTO=TCP SPT=45672 DPT=9105 SEQ=2849432898 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC55CA460000000001030307) Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.957 229255 WARNING nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.958 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12382MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.958 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:43:12 localhost nova_compute[229251]: 2025-12-05 09:43:12.958 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:43:13 localhost nova_compute[229251]: 2025-12-05 09:43:13.011 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:43:13 localhost nova_compute[229251]: 2025-12-05 09:43:13.012 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:43:13 localhost nova_compute[229251]: 2025-12-05 09:43:13.012 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:43:13 localhost nova_compute[229251]: 2025-12-05 09:43:13.044 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:43:13 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 5 04:43:13 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. 
Dec 5 04:43:13 localhost nova_compute[229251]: 2025-12-05 09:43:13.536 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:43:13 localhost nova_compute[229251]: 2025-12-05 09:43:13.540 229255 DEBUG nova.compute.provider_tree [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:43:13 localhost nova_compute[229251]: 2025-12-05 09:43:13.553 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:43:13 localhost nova_compute[229251]: 2025-12-05 09:43:13.555 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:43:13 localhost nova_compute[229251]: 2025-12-05 09:43:13.555 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:43:13 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33219 DF PROTO=TCP SPT=40578 DPT=9100 SEQ=747346044 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC55CFE70000000001030307) Dec 5 04:43:14 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:14 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:43:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
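[annotation] The inventory placement receives turns those raw totals into schedulable capacity as (total - reserved) * allocation_ratio per resource class. With the values in the report above: VCPU (8 - 0) * 16.0 = 128 schedulable vCPUs, MEMORY_MB (15738 - 512) * 1.0 = 15226 MB, DISK_GB (41 - 1) * 1.0 = 40 GB. A one-liner check over those numbers:

    inventory = {  # copied from the scheduler report client line above
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "schedulable capacity:", capacity)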
Dec 5 04:43:15 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. Dec 5 04:43:15 localhost systemd[1]: var-lib-containers-storage-overlay-2e23e5f0e8d03624c8af001fbbfc14f4c044253041c1b764fabd86b74ae8deb7-merged.mount: Deactivated successfully. Dec 5 04:43:16 localhost nova_compute[229251]: 2025-12-05 09:43:16.963 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:16 localhost nova_compute[229251]: 2025-12-05 09:43:16.965 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:16 localhost nova_compute[229251]: 2025-12-05 09:43:16.965 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:43:16 localhost nova_compute[229251]: 2025-12-05 09:43:16.965 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:16 localhost nova_compute[229251]: 2025-12-05 09:43:16.988 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:16 localhost nova_compute[229251]: 2025-12-05 09:43:16.989 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:17 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Dec 5 04:43:17 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Dec 5 04:43:17 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Dec 5 04:43:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33221 DF PROTO=TCP SPT=40578 DPT=9100 SEQ=747346044 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC55DC050000000001030307) Dec 5 04:43:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:18 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:43:18 localhost systemd[1]: var-lib-containers-storage-overlay-b95431b1478a17506a6e7089f072573540986d5218a9d3dfea91f5817bd1ba9b-merged.mount: Deactivated successfully. Dec 5 04:43:18 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:19 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. 
Dec 5 04:43:19 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Dec 5 04:43:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:19 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Dec 5 04:43:19 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:19 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58794 DF PROTO=TCP SPT=55728 DPT=9101 SEQ=2882757220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC55E7C50000000001030307) Dec 5 04:43:20 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:43:20 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:20 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:21 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:43:21 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. 
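[annotation] The kernel "DROPPING:" records are netfilter LOG-target output, and across this window they describe one pattern: SYNs from 192.168.122.10 to 192.168.122.106 on ports 9100/9101/9102/9105 are dropped on br-ex. 9100 and 9105 are the node_exporter and openstack_network_exporter ports published earlier in this log, so these look like metrics scrapes being firewalled and retried; 9101/9102 are presumably sibling exporters, though the log does not name them. A sketch that pulls the flow tuple out of such lines; the regex is fitted to the fields shown here.

    import re
    from collections import Counter

    DROP_RE = re.compile(
        r"DROPPING: .*?SRC=(?P<src>\S+) DST=(?P<dst>\S+) .*?"
        r"PROTO=(?P<proto>\S+) SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)"
    )

    def dropped_flows(log_text: str) -> Counter:
        """Count dropped packets per (src, dst, dpt) tuple."""
        hits = Counter()
        for m in DROP_RE.finditer(log_text):
            hits[(m["src"], m["dst"], int(m["dpt"]))] += 1
        return hits

    if __name__ == "__main__":
        sample = ("DROPPING: IN=br-ex OUT= ... SRC=192.168.122.10 "
                  "DST=192.168.122.106 LEN=60 ... PROTO=TCP SPT=40578 "
                  "DPT=9100 SEQ=747346044 ...")
        print(dropped_flows(sample))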
Dec 5 04:43:21 localhost podman[246655]: 2025-12-05 09:43:21.185390502 +0000 UTC m=+0.075239592 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 5 04:43:21 localhost podman[246655]: 2025-12-05 09:43:21.225780516 +0000 UTC m=+0.115629616 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350) Dec 5 04:43:21 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:21 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 5 04:43:21 localhost nova_compute[229251]: 2025-12-05 09:43:21.989 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:21 localhost nova_compute[229251]: 2025-12-05 09:43:21.990 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:21 localhost nova_compute[229251]: 2025-12-05 09:43:21.990 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:43:21 localhost nova_compute[229251]: 2025-12-05 09:43:21.991 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:22 localhost nova_compute[229251]: 2025-12-05 09:43:22.031 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:22 localhost nova_compute[229251]: 2025-12-05 09:43:22.031 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:22 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:22 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Dec 5 04:43:22 localhost systemd[1]: var-lib-containers-storage-overlay-bfa8c64aed3dd43632e0f6a8595077465fd2285f2415c20f2fb7f9b03f641646-merged.mount: Deactivated successfully. Dec 5 04:43:22 localhost systemd[1]: var-lib-containers-storage-overlay-bfa8c64aed3dd43632e0f6a8595077465fd2285f2415c20f2fb7f9b03f641646-merged.mount: Deactivated successfully. Dec 5 04:43:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:22 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:43:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
Dec 5 04:43:23 localhost podman[246675]: 2025-12-05 09:43:23.16934582 +0000 UTC m=+0.061222777 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 5 04:43:23 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Dec 5 04:43:23 localhost podman[246675]: 2025-12-05 09:43:23.187767899 +0000 UTC m=+0.079644846 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 04:43:23 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:23 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17213 DF PROTO=TCP SPT=54892 DPT=9101 SEQ=4029102854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC55F4450000000001030307) Dec 5 04:43:24 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:43:24 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:43:25 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:43:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:25 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 5 04:43:25 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. 
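[annotation] The recurring var-lib-containers-storage-overlay-<id>-merged.mount units are just systemd's path escaping: "/" becomes "-", so each unit maps back to /var/lib/containers/storage/overlay/<id>/merged. A sketch of the reverse mapping, valid on the assumption that the unescaped path contains no literal dashes (true for these hex layer IDs; a real dash would appear as \x2d in the unit name instead).

    def mount_unit_to_path(unit: str) -> str:
        """Map a systemd .mount unit name back to its mountpoint."""
        name = unit.removesuffix(".mount")
        # systemd escapes '/' as '-'; a literal '-' is escaped as '\x2d',
        # so restore dashes only after turning the separators back into '/'.
        return "/" + name.replace("-", "/").replace("\\x2d", "-")

    unit = ("var-lib-containers-storage-overlay-"
            "c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6"
            "-merged.mount")  # unit name taken from the log above
    print(mount_unit_to_path(unit))
    # -> /var/lib/containers/storage/overlay/c892.../merged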
Dec 5 04:43:25 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:43:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58796 DF PROTO=TCP SPT=55728 DPT=9101 SEQ=2882757220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC55FF850000000001030307) Dec 5 04:43:26 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Dec 5 04:43:26 localhost systemd[1]: var-lib-containers-storage-overlay-d3249738e4d9421a1f001f331e5d9a6df3d763b74a379f4e69853dd9965f5c52-merged.mount: Deactivated successfully. Dec 5 04:43:27 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:27 localhost nova_compute[229251]: 2025-12-05 09:43:27.033 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:27 localhost nova_compute[229251]: 2025-12-05 09:43:27.034 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:27 localhost nova_compute[229251]: 2025-12-05 09:43:27.034 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:43:27 localhost nova_compute[229251]: 2025-12-05 09:43:27.035 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:27 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:43:27 localhost nova_compute[229251]: 2025-12-05 09:43:27.071 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:27 localhost nova_compute[229251]: 2025-12-05 09:43:27.072 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:28 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:28 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:28 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:29 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:43:29 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Dec 5 04:43:29 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. Dec 5 04:43:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33223 DF PROTO=TCP SPT=40578 DPT=9100 SEQ=747346044 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC560C450000000001030307) Dec 5 04:43:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:43:29 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:29 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. Dec 5 04:43:29 localhost podman[246694]: 2025-12-05 09:43:29.938510333 +0000 UTC m=+0.076511121 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:43:29 localhost podman[246694]: 2025-12-05 09:43:29.945640439 +0000 UTC m=+0.083641247 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 04:43:29 localhost podman[246694]: unhealthy Dec 5 04:43:30 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:43:30 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. 
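Unlike the healthy checks, podman_exporter's healthcheck script exits non-zero, so podman prints "unhealthy" and the transient <id>.service unit that systemd started for this tick fails with status=1/FAILURE a few lines below. The check can be reproduced by hand with the same "podman healthcheck run" subcommand that systemd invokes above; a sketch, with the container ID taken from this log:

    import subprocess

    CID = "192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6"

    # "podman healthcheck run" executes the container's configured test
    # ("/openstack/healthcheck podman_exporter" per the config_data above)
    # and exits 0 for healthy, non-zero for unhealthy, which is the exit
    # code the transient systemd unit then reports as "status=1/FAILURE".
    result = subprocess.run(["podman", "healthcheck", "run", CID])
    print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")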
Dec 5 04:43:31 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. Dec 5 04:43:32 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:43:32 localhost nova_compute[229251]: 2025-12-05 09:43:32.072 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:32 localhost nova_compute[229251]: 2025-12-05 09:43:32.074 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:32 localhost nova_compute[229251]: 2025-12-05 09:43:32.075 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:43:32 localhost nova_compute[229251]: 2025-12-05 09:43:32.075 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:32 localhost nova_compute[229251]: 2025-12-05 09:43:32.091 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:32 localhost nova_compute[229251]: 2025-12-05 09:43:32.092 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:32 localhost systemd[1]: var-lib-containers-storage-overlay-d3e8030a61b73ae24115cd28491b7f862cc70b395fffe88ad09999abdf65e61e-merged.mount: Deactivated successfully. Dec 5 04:43:32 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Main process exited, code=exited, status=1/FAILURE Dec 5 04:43:32 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Failed with result 'exit-code'. Dec 5 04:43:33 localhost systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully. Dec 5 04:43:33 localhost systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully. Dec 5 04:43:33 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:43:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:43:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
Dec 5 04:43:33 localhost podman[246718]: 2025-12-05 09:43:33.290797884 +0000 UTC m=+0.110238443 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3) Dec 5 04:43:33 localhost podman[246717]: 2025-12-05 09:43:33.26491216 +0000 UTC m=+0.094886328 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 5 04:43:33 localhost podman[246718]: 2025-12-05 09:43:33.327792686 +0000 UTC m=+0.147233285 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 5 04:43:33 localhost podman[246717]: 2025-12-05 09:43:33.351768413 +0000 UTC m=+0.181742571 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 04:43:33 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:43:33 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:43:34 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Dec 5 04:43:34 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 5 04:43:34 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully. Dec 5 04:43:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22223 DF PROTO=TCP SPT=56266 DPT=9105 SEQ=2027512583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC561E050000000001030307) Dec 5 04:43:34 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully. Dec 5 04:43:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53991 DF PROTO=TCP SPT=43646 DPT=9882 SEQ=1408403909 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5622450000000001030307) Dec 5 04:43:36 localhost systemd[1]: var-lib-containers-storage-overlay-2e23e5f0e8d03624c8af001fbbfc14f4c044253041c1b764fabd86b74ae8deb7-merged.mount: Deactivated successfully. Dec 5 04:43:36 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. Dec 5 04:43:37 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. 
Dec 5 04:43:37 localhost nova_compute[229251]: 2025-12-05 09:43:37.093 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:37 localhost nova_compute[229251]: 2025-12-05 09:43:37.095 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:37 localhost nova_compute[229251]: 2025-12-05 09:43:37.095 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:43:37 localhost nova_compute[229251]: 2025-12-05 09:43:37.095 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:37 localhost nova_compute[229251]: 2025-12-05 09:43:37.122 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:37 localhost nova_compute[229251]: 2025-12-05 09:43:37.122 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:37 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Dec 5 04:43:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:43:37 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. 
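The ovsdbapp/python-ovs client above sits on tcp:127.0.0.1:6640 and, after roughly 5000 ms without traffic, sends an inactivity probe (entering IDLE), then returns to ACTIVE when the reply arrives (POLLIN). That yields one probe triplet about every five seconds, matching the 09:43:27, :32, :37, :42 cadence in this journal. A sketch that measures the cadence, assuming only the timestamp and message shapes shown here:

    import re
    import sys
    from datetime import datetime

    PROBE_RE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+ \d+ DEBUG "
        r"ovsdbapp\.backend\.ovs_idl\.vlog .*idle \d+ ms, sending inactivity probe"
    )

    def probe_intervals(lines):
        """Yield seconds elapsed between successive OVSDB inactivity probes."""
        prev = None
        for line in lines:
            m = PROBE_RE.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S")
            if prev is not None:
                yield (ts - prev).total_seconds()
            prev = ts

    if __name__ == "__main__":
        for dt in probe_intervals(sys.stdin):
            print(f"{dt:.0f} s between probes")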
Dec 5 04:43:37 localhost podman[246754]: 2025-12-05 09:43:37.732979319 +0000 UTC m=+0.063573029 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:43:37 localhost podman[246754]: 2025-12-05 09:43:37.763652859 +0000 UTC m=+0.094246639 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 04:43:38 localhost systemd[1]: tmp-crun.16IvQ1.mount: Deactivated successfully. Dec 5 04:43:38 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:43:38 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. Dec 5 04:43:38 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. 
Dec 5 04:43:38 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:43:39 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 5 04:43:39 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:43:39 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:43:40 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:40 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 5 04:43:41 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Dec 5 04:43:41 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Dec 5 04:43:41 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33450 DF PROTO=TCP SPT=37240 DPT=9102 SEQ=1130613020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5639360000000001030307) Dec 5 04:43:41 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:41 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
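The kernel "DROPPING:" lines are netfilter LOG-prefix records: repeated TCP SYNs from 192.168.122.10 to this host's monitoring ports (9100 node_exporter, 9105 openstack_network_exporter, 9882 podman_exporter, plus 9101/9102) are being dropped on br-ex, and the unchanged SEQ values across retries show the client retransmitting. A tally per destination, assuming only the KEY=VALUE layout of these lines:

    import re
    import sys
    from collections import Counter

    # \b keeps SRC/DST from also matching inside MACSRC=/MACDST=.
    FIELD_RE = re.compile(r"\b(SRC|DST|DPT)=(\S+)")

    def drop_counts(lines):
        """Count dropped packets per (source, destination, dest port)."""
        counts = Counter()
        for line in lines:
            if "DROPPING:" not in line:
                continue
            fields = dict(FIELD_RE.findall(line))
            counts[(fields.get("SRC"), fields.get("DST"), fields.get("DPT"))] += 1
        return counts

    if __name__ == "__main__":
        for (src, dst, dpt), n in sorted(drop_counts(sys.stdin).items()):
            print(f"{src} -> {dst} port {dpt}: {n} dropped")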
Dec 5 04:43:42 localhost nova_compute[229251]: 2025-12-05 09:43:42.123 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:42 localhost nova_compute[229251]: 2025-12-05 09:43:42.125 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:42 localhost nova_compute[229251]: 2025-12-05 09:43:42.125 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:43:42 localhost nova_compute[229251]: 2025-12-05 09:43:42.126 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:42 localhost nova_compute[229251]: 2025-12-05 09:43:42.170 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:42 localhost nova_compute[229251]: 2025-12-05 09:43:42.171 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:42 localhost nova_compute[229251]: 2025-12-05 09:43:42.173 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33451 DF PROTO=TCP SPT=37240 DPT=9102 SEQ=1130613020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC563D450000000001030307) Dec 5 04:43:42 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:43:42 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:43:42 localhost podman[239519]: time="2025-12-05T09:43:42Z" level=error msg="Unable to write json: \"write unix /run/podman/podman.sock->@: write: broken pipe\"" Dec 5 04:43:42 localhost podman[239519]: @ - - [05/Dec/2025:09:38:34 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 4096 "" "Go-http-client/1.1" Dec 5 04:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
Dec 5 04:43:42 localhost podman[246777]: 2025-12-05 09:43:42.930603346 +0000 UTC m=+0.066890179 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 04:43:42 localhost podman[246777]: 2025-12-05 09:43:42.935321169 +0000 UTC m=+0.071608002 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:43:43 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. Dec 5 04:43:43 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. 
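node_exporter above runs with most collectors disabled and its systemd collector limited by --collector.systemd.unit-include. A quick way to check which units that pattern covers; the pattern is copied from the command line above, and anchoring it with fullmatch mirrors the exporter's anchored include matching (an assumption here):

    import re

    # Include pattern from the node_exporter command line above.
    include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ("ovsdb-server.service", "virtqemud.service",
                 "rsyslog.service", "sshd.service"):
        print(unit, "included" if include.fullmatch(unit) else "excluded")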
Dec 5 04:43:43 localhost systemd[1]: var-lib-containers-storage-overlay-bfa8c64aed3dd43632e0f6a8595077465fd2285f2415c20f2fb7f9b03f641646-merged.mount: Deactivated successfully. Dec 5 04:43:44 localhost systemd[1]: var-lib-containers-storage-overlay-bfa8c64aed3dd43632e0f6a8595077465fd2285f2415c20f2fb7f9b03f641646-merged.mount: Deactivated successfully. Dec 5 04:43:44 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:43:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26003 DF PROTO=TCP SPT=59688 DPT=9100 SEQ=2984304658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5645170000000001030307) Dec 5 04:43:46 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:43:46 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:43:46 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:43:47 localhost nova_compute[229251]: 2025-12-05 09:43:47.174 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:47 localhost nova_compute[229251]: 2025-12-05 09:43:47.176 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:47 localhost nova_compute[229251]: 2025-12-05 09:43:47.177 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:43:47 localhost nova_compute[229251]: 2025-12-05 09:43:47.177 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:47 localhost nova_compute[229251]: 2025-12-05 09:43:47.218 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:47 localhost nova_compute[229251]: 2025-12-05 09:43:47.218 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:47 localhost nova_compute[229251]: 2025-12-05 09:43:47.221 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26005 DF PROTO=TCP SPT=59688 DPT=9100 SEQ=2984304658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5651050000000001030307) Dec 5 04:43:48 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 5 04:43:48 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:43:48 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 5 04:43:49 localhost sshd[246800]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:43:49 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:49 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:49 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 5 04:43:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58992 DF PROTO=TCP SPT=57022 DPT=9101 SEQ=229474879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC565D050000000001030307) Dec 5 04:43:50 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 5 04:43:50 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:50 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 5 04:43:51 localhost sshd[246802]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:43:52 localhost nova_compute[229251]: 2025-12-05 09:43:52.222 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:52 localhost nova_compute[229251]: 2025-12-05 09:43:52.225 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:52 localhost nova_compute[229251]: 2025-12-05 09:43:52.225 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:43:52 localhost nova_compute[229251]: 2025-12-05 09:43:52.225 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:52 localhost nova_compute[229251]: 2025-12-05 09:43:52.268 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:52 localhost nova_compute[229251]: 2025-12-05 09:43:52.269 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:43:53 localhost systemd[1]: tmp-crun.Pz7Dn7.mount: Deactivated successfully. 
Dec 5 04:43:53 localhost podman[246804]: 2025-12-05 09:43:53.102315263 +0000 UTC m=+0.065137065 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, architecture=x86_64, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.) Dec 5 04:43:53 localhost podman[246804]: 2025-12-05 09:43:53.117563766 +0000 UTC m=+0.080385558 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 5 04:43:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9519 DF PROTO=TCP SPT=54016 DPT=9101 SEQ=430175687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5668450000000001030307) Dec 5 04:43:53 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 5 04:43:53 localhost sshd[246824]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:43:53 localhost systemd[1]: var-lib-containers-storage-overlay-d3e8030a61b73ae24115cd28491b7f862cc70b395fffe88ad09999abdf65e61e-merged.mount: Deactivated successfully. Dec 5 04:43:53 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:43:54 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully. Dec 5 04:43:54 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully. Dec 5 04:43:54 localhost sshd[246826]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:43:55 localhost systemd-logind[760]: New session 56 of user zuul. Dec 5 04:43:55 localhost systemd[1]: Started Session 56 of User zuul. 
Dec 5 04:43:55 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully. Dec 5 04:43:55 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully. Dec 5 04:43:55 localhost systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully. Dec 5 04:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:43:55 localhost podman[246901]: 2025-12-05 09:43:55.362023272 +0000 UTC m=+0.081957995 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true) Dec 5 04:43:55 localhost podman[246901]: 2025-12-05 09:43:55.371565571 +0000 UTC m=+0.091500344 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd) Dec 5 04:43:55 localhost sshd[246943]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:43:55 localhost python3.9[246939]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:43:56 localhost python3.9[247054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:43:56 localhost systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully. Dec 5 04:43:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58994 DF PROTO=TCP SPT=57022 DPT=9101 SEQ=229474879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5674C50000000001030307) Dec 5 04:43:56 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. 
Dec 5 04:43:56 localhost python3.9[247142]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927835.7018223-3062-174148615008257/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:43:57 localhost nova_compute[229251]: 2025-12-05 09:43:57.270 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:57 localhost nova_compute[229251]: 2025-12-05 09:43:57.296 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:43:57 localhost nova_compute[229251]: 2025-12-05 09:43:57.296 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5026 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:43:57 localhost nova_compute[229251]: 2025-12-05 09:43:57.296 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:57 localhost nova_compute[229251]: 2025-12-05 09:43:57.300 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:43:57 localhost nova_compute[229251]: 2025-12-05 09:43:57.300 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:43:57 localhost sshd[247160]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:43:58 localhost python3.9[247254]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:43:58 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. Dec 5 04:43:58 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. Dec 5 04:43:58 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully. 
Dec 5 04:43:59 localhost python3.9[247364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:43:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26007 DF PROTO=TCP SPT=59688 DPT=9100 SEQ=2984304658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5680450000000001030307) Dec 5 04:43:59 localhost python3.9[247421]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:44:00 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:44:00 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. Dec 5 04:44:00 localhost systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully. Dec 5 04:44:00 localhost python3.9[247531]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:01 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. Dec 5 04:44:01 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:44:01 localhost python3.9[247588]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.kdntagr8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:44:01 localhost systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully. Dec 5 04:44:01 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Dec 5 04:44:01 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully. 
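The python3.9 "Invoked with" lines record each task the edpm_ansible firewall role runs: creating /var/lib/edpm-config/firewall/, copying telemetry.yaml, then laying down edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml and /etc/nftables/iptables.nft. A sketch that recovers module names and key arguments from those journal lines, assuming only the "python3.9[pid]: ansible-<module> Invoked with key=value ..." shape seen here:

    import re
    import sys

    INVOKE_RE = re.compile(
        r"python3\.9\[\d+\]: ansible-(?P<module>\S+) Invoked with (?P<args>.*)"
    )
    # Argument values in these lines are single whitespace-free tokens.
    ARG_RE = re.compile(r"(\w+)=(\S+)")

    def invocations(lines):
        """Yield (module, {arg: value}) for each logged ansible task."""
        for line in lines:
            m = INVOKE_RE.search(line)
            if m:
                yield m.group("module"), dict(ARG_RE.findall(m.group("args")))

    if __name__ == "__main__":
        for module, args in invocations(sys.stdin):
            target = args.get("path") or args.get("dest") or ""
            print(module, target)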
Dec 5 04:44:01 localhost python3.9[247698]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:44:01 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 5 04:44:02 localhost python3.9[247755]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:44:02 localhost nova_compute[229251]: 2025-12-05 09:44:02.301 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:44:02 localhost nova_compute[229251]: 2025-12-05 09:44:02.302 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:44:02 localhost nova_compute[229251]: 2025-12-05 09:44:02.302 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 5 04:44:02 localhost nova_compute[229251]: 2025-12-05 09:44:02.302 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:44:02 localhost nova_compute[229251]: 2025-12-05 09:44:02.335 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:44:02 localhost nova_compute[229251]: 2025-12-05 09:44:02.337 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:44:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 04:44:02 localhost podman[247773]: 2025-12-05 09:44:02.453160225 +0000 UTC m=+0.080403058 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 04:44:02 localhost podman[247773]: 2025-12-05 09:44:02.458413665 +0000 UTC m=+0.085656478 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 5 04:44:02 localhost podman[247773]: unhealthy
Dec 5 04:44:02 localhost python3.9[247888]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:44:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:44:03.892 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:44:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:44:03.893 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:44:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:44:03.894 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:44:03 localhost python3[247999]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 5 04:44:04 localhost systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 5 04:44:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:44:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 04:44:04 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Main process exited, code=exited, status=1/FAILURE
Dec 5 04:44:04 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Failed with result 'exit-code'.
Dec 5 04:44:04 localhost podman[239519]: @ - - [05/Dec/2025:09:38:39 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 142946 "" "Go-http-client/1.1"
Dec 5 04:44:04 localhost podman_exporter[239723]: ts=2025-12-05T09:44:04.133Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 5 04:44:04 localhost podman_exporter[239723]: ts=2025-12-05T09:44:04.134Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 5 04:44:04 localhost podman_exporter[239723]: ts=2025-12-05T09:44:04.134Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882
Dec 5 04:44:04 localhost podman[248017]: 2025-12-05 09:44:04.162435265 +0000 UTC m=+0.119252836 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 04:44:04 localhost podman[248017]: 2025-12-05 09:44:04.171975085 +0000 UTC m=+0.128792896 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 5 04:44:04 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 04:44:04 localhost podman[248018]: 2025-12-05 09:44:04.262412137 +0000 UTC m=+0.212584986 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 5 04:44:04 localhost podman[248018]: 2025-12-05 09:44:04.272903195 +0000 UTC m=+0.223076114 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 5 04:44:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30635 DF PROTO=TCP SPT=43696 DPT=9105 SEQ=833771567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5693460000000001030307)
Dec 5 04:44:04 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 04:44:04 localhost python3.9[248186]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:44:04 localhost python3.9[248261]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:44:05 localhost python3.9[248385]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:44:06 localhost python3.9[248442]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:44:07 localhost python3.9[248570]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:44:07 localhost nova_compute[229251]: 2025-12-05 09:44:07.337 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:44:07 localhost nova_compute[229251]: 2025-12-05 09:44:07.340 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:44:07 localhost nova_compute[229251]: 2025-12-05 09:44:07.340 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 5 04:44:07 localhost nova_compute[229251]: 2025-12-05 09:44:07.340 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:44:07 localhost nova_compute[229251]: 2025-12-05 09:44:07.378 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:44:07 localhost nova_compute[229251]: 2025-12-05 09:44:07.379 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:44:07 localhost python3.9[248627]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:44:08 localhost python3.9[248737]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:44:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 04:44:09 localhost systemd[1]: tmp-crun.m91ynk.mount: Deactivated successfully.
Dec 5 04:44:09 localhost podman[248794]: 2025-12-05 09:44:09.227219985 +0000 UTC m=+0.096700062 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 5 04:44:09 localhost podman[248794]: 2025-12-05 09:44:09.28772275 +0000 UTC m=+0.157202837 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 5 04:44:09 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 04:44:09 localhost python3.9[248795]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:44:09 localhost nova_compute[229251]: 2025-12-05 09:44:09.555 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:44:09 localhost nova_compute[229251]: 2025-12-05 09:44:09.556 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:44:09 localhost python3.9[248929]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:44:10 localhost nova_compute[229251]: 2025-12-05 09:44:10.270 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:44:10 localhost nova_compute[229251]: 2025-12-05 09:44:10.270 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 5 04:44:10 localhost nova_compute[229251]: 2025-12-05 09:44:10.270 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 5 04:44:10 localhost nova_compute[229251]: 2025-12-05 09:44:10.687 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 5 04:44:10 localhost nova_compute[229251]: 2025-12-05 09:44:10.687 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 5 04:44:10 localhost nova_compute[229251]: 2025-12-05 09:44:10.688 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 5 04:44:10 localhost nova_compute[229251]: 2025-12-05 09:44:10.688 229255 DEBUG nova.objects.instance [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 5 04:44:11 localhost nova_compute[229251]: 2025-12-05 09:44:11.014 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 5 04:44:11 localhost nova_compute[229251]: 2025-12-05 09:44:11.030 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 5 04:44:11 localhost nova_compute[229251]: 2025-12-05 09:44:11.031 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 5 04:44:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43868 DF PROTO=TCP SPT=42272 DPT=9102 SEQ=3167636357 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC56AE670000000001030307)
Dec 5 04:44:11 localhost nova_compute[229251]: 2025-12-05 09:44:11.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:44:11 localhost nova_compute[229251]: 2025-12-05 09:44:11.270 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:44:11 localhost nova_compute[229251]: 2025-12-05 09:44:11.270 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:44:11 localhost nova_compute[229251]: 2025-12-05 09:44:11.270 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:44:11 localhost nova_compute[229251]: 2025-12-05 09:44:11.271 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:44:11 localhost nova_compute[229251]: 2025-12-05 09:44:11.271 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 5 04:44:11 localhost python3.9[249019]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764927849.4609718-3437-104137039757786/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:44:12 localhost python3.9[249129]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:44:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43869 DF PROTO=TCP SPT=42272 DPT=9102 SEQ=3167636357 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC56B2860000000001030307)
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.289 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.290 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.290 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.290 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.291 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.380 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.425 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.425 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5046 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.426 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.428 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.428 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:44:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30636 DF PROTO=TCP SPT=43696 DPT=9105 SEQ=833771567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC56B4450000000001030307)
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.789 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.846 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.846 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.940 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.941 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.941 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.961 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 73908224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.962 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '306b85e9-619b-43c3-b352-a7c514bc198a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73908224, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:44:12.941747', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f4ce1e5a-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.116018042, 'message_signature': 'd0efa2de8f6e58f96067ee6cc144b0e98743ea62a88dc3a928f904196cf63be2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:44:12.941747', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f4ce299a-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.116018042, 'message_signature': '49bd9fce776882ad10b35404709913265637ec524ea95e7fb42b9b6ab567f83a'}]}, 'timestamp': '2025-12-05 09:44:12.962970', '_unique_id': 'd1aaeae5c25841c88a8150acbd03033c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.963 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.964 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.967 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 8783 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ca1b1556-19fb-4e23-87b0-8989c9362225', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8783, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:44:12.964730', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'f4ced908-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.1390234, 'message_signature': '5447a5b022934d89e80b0193c332aca2f8589873d2d4d9bf5135e59e169b7a8e'}]}, 'timestamp': '2025-12-05 09:44:12.967477', '_unique_id': '61b118ee3dec4d3fbd9e87e1096b5c7f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.968 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.969 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '0fe890f1-2fcb-429a-a505-75808593c1ed', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:44:12.968686', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f4cf11b6-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.116018042, 'message_signature': 'e4ede370f1b1d9065ccca872485281e752ad12bd0e27a9ac5c5d2dd0412fda90'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:44:12.968686', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f4cf1972-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.116018042, 'message_signature': 'e95f4e8364566f59d77e2731b333e4c1b96b5dac1e6e76cd0c412e02e3eb4972'}]}, 'timestamp': '2025-12-05 09:44:12.969096', '_unique_id': 'fdeabe36d186412b9f6fc77d47f8e91e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.969 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.969 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 
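Every traceback in this stretch bottoms out in the same place: the plain TCP connect in amqp/transport.py fails with [Errno 111], meaning nothing is accepting connections at the broker endpoint the agent is configured to use. A minimal probe along the following lines can confirm whether the broker port is reachable from the compute host; this is only a sketch, and the hostname and port are placeholders, since the agent's transport_url is not shown in this log.

    import socket

    # Placeholder endpoint: substitute the host/port from the agent's
    # configured transport_url; 5672 is only the conventional AMQP port.
    BROKER = ("rabbitmq.example.local", 5672)

    try:
        # The same operation the tracebacks show failing in
        # amqp/transport.py: a bare TCP connect to the broker.
        with socket.create_connection(BROKER, timeout=5):
            print("broker port reachable")
    except ConnectionRefusedError:
        # [Errno 111]: the host answered but nothing is listening on the
        # port, which matches the errors logged above.
        print("connection refused - broker not listening")
    except OSError as exc:
        print(f"other failure: {exc}")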
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '79c48412-c9c6-4190-aa54-8549c453ac63', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:44:12.970133', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'f4cf4b68-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.1390234, 'message_signature': '9c1abd55b036b94ba8611ed3b943d884b77704b3c79ccb797902f5a84d6e9dd2'}]}, 'timestamp': '2025-12-05 09:44:12.970442', '_unique_id': '72538661725f498ea81d6d86b4e75260'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.970 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.971 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.971 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9f9eb8ff-7ba1-44f3-865e-4a3dacf5511e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:44:12.971654', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'f4cf859c-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.1390234, 'message_signature': '6c53f1ae09ac9a80680e47228705618e5eaf6a02f34b8b045aa07f000958c377'}]}, 'timestamp': '2025-12-05 09:44:12.971876', '_unique_id': 'a30382bbbfda457ab24fdd769bd94fb3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.972 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.973 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.973 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ea945d4e-af2c-412d-ad05-0294d0b1eccb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:44:12.972881', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f4cfb562-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.116018042, 'message_signature': 'a29c9b30e6351004f9409abaa6a5b0215431ce343d74faaa4a985529a7f3f076'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:44:12.972881', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f4cfbcc4-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.116018042, 'message_signature': '3075d4ca1f25b8d5a375bec6edae69ee7f0fbc97fc0a08e6abbfc55653585f9e'}]}, 'timestamp': '2025-12-05 09:44:12.973305', '_unique_id': 'bdfe038453124021bed866f1969e63e8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.973 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.973 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 
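The Payload dumps above all share one structure: an envelope with message_id/publisher_id/event_type, and a payload.samples list in which each sample carries counter_name, counter_type, counter_unit, counter_volume and a resource_id suffixed with the device name. As a rough illustration of how such a dump can be summarized offline, the sketch below pulls the counter fields out of a dict shaped like the ones logged here; the dict literal is abbreviated, not a verbatim copy, and keeps only the fields the loop uses, with values taken from the disk.device.read.requests dump above.

    # Abbreviated stand-in for one of the logged Payload dicts.
    payload = {
        "event_type": "telemetry.polling",
        "payload": {
            "samples": [
                {"counter_name": "disk.device.read.requests",
                 "counter_type": "cumulative",
                 "counter_unit": "request",
                 "counter_volume": 1064,
                 "resource_id": "96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda"},
                {"counter_name": "disk.device.read.requests",
                 "counter_type": "cumulative",
                 "counter_unit": "request",
                 "counter_volume": 222,
                 "resource_id": "96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb"},
            ]
        },
    }

    for sample in payload["payload"]["samples"]:
        # e.g. "96a47a1c-...-vda disk.device.read.requests = 1064 request"
        print(sample["resource_id"], sample["counter_name"],
              "=", sample["counter_volume"], sample["counter_unit"])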
Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.974 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.987 229255 WARNING nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
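The nova_compute warning above concerns PCI NUMA affinity: the `socket` policy only makes sense when the host topology maps sockets to NUMA nodes one-to-one, and nova has decided this host does not. A toy illustration of that condition follows; the socket and NUMA-node counts are placeholders, since the log does not print the host topology itself, and on a real host they would come from the hypervisor's capabilities.

    # Hypothetical topology figures for illustration only.
    sockets = 2
    numa_nodes = 1

    if sockets > numa_nodes:
        # The condition nova is warning about: more than one socket per
        # NUMA node, so the `socket` PCI NUMA affinity policy is unusable.
        print("`socket` PCI NUMA affinity not supported on this topology")
    else:
        print("one socket per NUMA node - `socket` policy is applicable")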
The `socket` PCI NUMA affinity will not be supported. Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.988 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12293MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034 Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.989 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404 Dec 5 04:44:12 localhost nova_compute[229251]: 2025-12-05 09:44:12.989 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409 Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.991 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.992 12 DEBUG ceilometer.compute.pollsters [-]
96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '35fe880b-dd03-4226-8eb2-800090b49bb3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:44:12.974316', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f4d29d36-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.148630281, 'message_signature': 'f0e54163110176dc23d8b621131e7eaadf218c0fe4af272563ad9d5fcdf5142e'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:44:12.974316', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f4d2a93e-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.148630281, 'message_signature': '8c4ce63506ff64a44023b6bc93cff36414193c041a7d2311649597f502fa3347'}]}, 'timestamp': '2025-12-05 09:44:12.992490', '_unique_id': '83ad27b6fe674032815a81d37c45e585'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:44:12.993 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.993 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.994 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.994 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd65f7124-4c2d-49e2-840e-de45c1b7c8ad', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:44:12.994472', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'f4d3029e-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.1390234, 'message_signature': '996b4f1b26396d2409f54a7f284a61bde6864c162df515670434774197e33510'}]}, 'timestamp': '2025-12-05 09:44:12.994780', '_unique_id': '14da34dd26444e87861d2149c871a5d1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:44:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:44:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.995 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:12.996 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.018 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 52.30859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5aa9b66b-0419-450e-8559-b1aa5f0a1e50', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.30859375, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:44:12.996138', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'f4d6be52-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.192928454, 'message_signature': '802b76efb19b2c3d4bc317425a585ecc0d46285a7762304db475fd5a04fda761'}]}, 'timestamp': '2025-12-05 09:44:13.019319', '_unique_id': '972b9a6d453942c492aa868148a752bf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:44:13.020 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 
04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.020 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 161823320 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 27606506 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '29c3dace-3bdb-4500-8c40-25fc426c30d3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 161823320, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:44:13.020903', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f4d709ac-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.116018042, 'message_signature': '51aea0616b86c55e5798068144c0ac618bafc9b450097457ff4b991c82707503'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27606506, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:44:13.020903', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f4d7119a-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.116018042, 'message_signature': '2a6dfae5a9c84cc473a7885c2e7b569153eb7ee50075675139f980ba67a40267'}]}, 'timestamp': '2025-12-05 09:44:13.021350', '_unique_id': 'b71707f3a9a54b2caac13e3abddc9e3e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.021 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.022 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.022 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.022 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '976083ed-21b4-4109-b3b7-6e36794f6b0e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:44:13.022480', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f4d7470a-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.148630281, 'message_signature': 'f81e09a888a157c4f775cb6c868d4ae40931b69d20e1728b0a5384431eae30e1'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:44:13.022480', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f4d74ea8-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.148630281, 'message_signature': '171a17bb5479038856286df4d88957ff52d5781205ef9972e9b83cea31b30cd9'}]}, 'timestamp': '2025-12-05 09:44:13.022882', '_unique_id': 'd7e336a9d83b4dd591157153abdabc63'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.027 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.029 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
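The chained traceback above bottoms out in a plain TCP connect: amqp's transport calls self.sock.connect(sa), the kernel answers with ECONNREFUSED, and kombu's _reraise_as_library_errors context manager re-raises it as kombu.exceptions.OperationalError, which is what oslo.messaging ultimately logs. A bare socket probe reproduces the same errno without any AMQP code involved; this is a minimal sketch for confirming nothing is listening on the broker port, where the host and port are placeholders rather than values taken from this log:

    import socket

    BROKER_HOST = "controller.example.com"  # placeholder; use the host from transport_url
    BROKER_PORT = 5672                      # placeholder; default AMQP listener port

    try:
        # The same low-level step amqp/transport.py performs in _connect().
        with socket.create_connection((BROKER_HOST, BROKER_PORT), timeout=5):
            print("TCP connect succeeded; a broker is accepting connections")
    except ConnectionRefusedError as exc:
        # errno 111: the host is reachable but no process is bound to the port,
        # matching the [Errno 111] in every traceback in this log.
        print(f"connection refused: {exc}")
    except OSError as exc:
        print(f"other network failure: {exc}")

If the probe is refused on the controller itself, the RabbitMQ service (or its container) is down, rather than being filtered somewhere on the network path.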
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.030 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e962d435-ad81-46f6-9672-178249fc9694', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:44:13.029662', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'f4d8607c-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.1390234, 'message_signature': '29010c35e1f474aaa923f0e84fd7bceed48958bc0f78ac33d3078407f1d7f33d'}]}, 'timestamp': '2025-12-05 09:44:13.029952', '_unique_id': 'fc707b76c50e417385503a17f2f7d747'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.031 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.031 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
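Every failed publish in this excerpt follows the identical path shown in the first traceback: impl_rabbit's Connection.__init__ calls ensure_connection(), which delegates to kombu, retries via retry_over_time(), and finally wraps the socket error, so the repeated tracebacks add nothing new. The wrapping can be reproduced with kombu alone. A minimal sketch, assuming kombu is installed; the broker URL is a placeholder for the transport_url that ceilometer actually reads from its configuration:

    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Placeholder URL; a real deployment takes this from transport_url.
    conn = Connection("amqp://guest:guest@127.0.0.1:5672//", connect_timeout=5)

    try:
        # ensure_connection() retries, then re-raises socket errors as
        # kombu's library error type, the same shape as the traceback above.
        conn.ensure_connection(max_retries=1)
        print("connected")
    except OperationalError as exc:
        print(f"kombu wrapped the socket error: {exc}")
    finally:
        conn.release()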
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.031 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7a73f1a6-79ca-4ff0-b6c1-98de4f913422', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:44:13.031174', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'f4d89d76-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.1390234, 'message_signature': '406d19aa2cc3b009090332eed38c4fdef6de43f280c8612a250b08b4cd9e26dd'}]}, 'timestamp': '2025-12-05 09:44:13.031471', '_unique_id': '48f8bd508b844f57beae51eebadc34e3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.032 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.032 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1216962709 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.032 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 209749905 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
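The payloads being dropped are ordinary oslo.messaging notifications: one telemetry.polling event per batch of samples, published with priority SAMPLE under publisher_id ceilometer.polling to the notifications topic, which is why each failure logs "Could not send notification to notifications." The publishing path can be sketched with the public oslo.messaging API; the transport URL below is a placeholder, and the empty sample list stands in for the real batches shown in these payloads:

    from oslo_config import cfg
    import oslo_messaging

    # Placeholder transport URL; ceilometer reads the real one from its config file.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url="rabbit://guest:guest@127.0.0.1:5672/")

    notifier = oslo_messaging.Notifier(
        transport,
        publisher_id="ceilometer.polling",  # matches publisher_id in the payloads
        driver="messagingv2",
        topics=["notifications"])           # the topic named in the error message

    # Priority SAMPLE corresponds to the sample() helper; with the broker down,
    # this raises the same wrapped connection error the agent is logging.
    notifier.sample({}, "telemetry.polling", {"samples": []})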
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.033 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e39bc75a-2220-4dc9-a9a1-64f5df5ba614', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1216962709, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:44:13.032573', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f4d8d138-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.116018042, 'message_signature': '61f7f528131d98eaaf1d574a1c7d6c248ac557292c9320d706e0918172504dc6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 209749905, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:44:13.032573', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f4d8d8f4-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.116018042, 'message_signature': 'd9dc11609e1c021de5e88c1419ce68ad835d1d7b623a9b5e09764c58ac3ace09'}]}, 'timestamp': '2025-12-05 09:44:13.032975', '_unique_id': '50f0cace28964fdc9773ea99cf179e37'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.033 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.034 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 59600000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
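Although delivery fails, the data is not lost to analysis: each undelivered batch is logged in full as a Python dict literal, so the samples can be recovered from the journal after the fact. A minimal triage sketch; the string below is a shortened stand-in for one extracted Payload={...} fragment, not a verbatim record from this log:

    import ast

    # Shortened stand-in for a "Payload={...}" fragment cut from the journal.
    logged = ("{'event_type': 'telemetry.polling', 'payload': {'samples': ["
              "{'counter_name': 'cpu', 'counter_type': 'cumulative', "
              "'counter_unit': 'ns', 'counter_volume': 59600000000}]}}")

    payload = ast.literal_eval(logged)  # parses literals only, unlike eval()
    for sample in payload["payload"]["samples"]:
        print(sample["counter_name"], sample["counter_volume"], sample["counter_unit"])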
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.042 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b7727e80-f119-4162-a68d-25ab4469c56d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 59600000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:44:13.034035', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'f4d90c3e-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.192928454, 'message_signature': '7b20815f8ac3cfa4cd38e797d3cde30c4d1ed790aa7b163e1bb3428098bc43a8'}]}, 'timestamp': '2025-12-05 09:44:13.041521', '_unique_id': 'bf1efdaf38f244ab9809cfdfaee87236'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.043 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.043 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.043 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Payload={'message_id': 'de98f7bc-2c59-47b3-a163-28df0c3fa762', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:44:13.043104', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f4da6d5e-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.148630281, 'message_signature': 'b4df819b4569f35cdf9546f01830fd3dc8fb9eefad1c7ad5b0b2760993b79211'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:44:13.043104', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f4da77ae-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.148630281, 'message_signature': '9e582e6367b72ac8cee2a20773ed5c0a39543b173865b4df33681de042365b45'}]}, 'timestamp': '2025-12-05 09:44:13.043598', '_unique_id': 'c2636c61417846ba9438aeaca1ad158d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.044 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.044 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.045 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'c813083f-c67b-4805-953f-c1266b829500', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:44:13.044651', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'f4daa918-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.1390234, 'message_signature': '35b8d1ac47e59651a2fbe542da546c52a5125967673aeebc5e3e3877e2349ff0'}]}, 'timestamp': '2025-12-05 09:44:13.044872', '_unique_id': '13d06ed4e66845029c56a2627cb6727a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.045 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.045 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.046 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '236701ba-882b-4727-9e24-2737d112b1b8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:44:13.045899', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'f4dad992-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.1390234, 'message_signature': 'c2e18a23ccc8e6d8ee5a4b228beacd671528e89e0b012a425b2d8fed261bf5e6'}]}, 'timestamp': '2025-12-05 09:44:13.046139', '_unique_id': '003a2d4da800413597760e4646f28e91'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.047 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.047 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 82 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:13 localhost nova_compute[229251]: 2025-12-05 09:44:13.051 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 5 04:44:13 localhost nova_compute[229251]: 2025-12-05 09:44:13.051 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 5 04:44:13 localhost nova_compute[229251]: 2025-12-05 09:44:13.051 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.048 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '983ca269-6959-43e2-bbdb-c8700beca47c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 82, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:44:13.047493', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'f4db181c-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.1390234, 'message_signature': 'd09ae3498a16e3cfcd7c7652af25c9757606b3f4482e175e97ae435875454bb4'}]}, 'timestamp': '2025-12-05 09:44:13.047718', '_unique_id': '73718e19225148f587792a16d1a63d91'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.052 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.052 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 525 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.052 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '68427ce5-4b40-40d4-aa3f-ba3f0adb3ab6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 525, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:44:13.052470', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'f4dbde78-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.116018042, 'message_signature': '6ca172765e37bbca5f31c49699e7f8485858a01f4b7d9a9486591d4887966fea'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:44:13.052470', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'f4dbebd4-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.116018042, 'message_signature': '0ba4b7e7073cdf5a6a781247b9ec1e8b8903bac8b8d43e503019d52c8b72eff0'}]}, 'timestamp': '2025-12-05 09:44:13.053181', '_unique_id': 'fbd2dd5615704f7b8e3b5bab32c41d1c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py",
line 826, in __init__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.054 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.055 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.055 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3122ae9a-252a-46c3-aa58-00b877bcbb86', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:44:13.055224', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'f4dc4958-d1be-11f0-8ba6-fa163e982365', 'monotonic_time': 10968.1390234, 'message_signature': '7d0bd344cb0b199b13c19df66be7f79243cd38879f88ea6117ec002e09034199'}]}, 'timestamp': '2025-12-05 09:44:13.055595', '_unique_id': 'e76f33e5dab842c38997fcc5285c5f00'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:44:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:44:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.056 12 ERROR oslo_messaging.notify.messaging Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.057 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:44:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:44:13.057 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:44:13 localhost nova_compute[229251]: 2025-12-05 09:44:13.084 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:44:13 localhost python3.9[249261]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:44:13 localhost nova_compute[229251]: 2025-12-05 09:44:13.495 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:44:13 localhost nova_compute[229251]: 2025-12-05 09:44:13.500 229255 DEBUG nova.compute.provider_tree [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:44:13 localhost nova_compute[229251]: 2025-12-05 09:44:13.517 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 
'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:44:13 localhost nova_compute[229251]: 2025-12-05 09:44:13.519 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:44:13 localhost nova_compute[229251]: 2025-12-05 09:44:13.520 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:44:13 localhost python3.9[249396]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:44:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
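The ceilometer_agent_compute tracebacks above all bottom out in the same frame: amqp's transport._connect() hits ECONNREFUSED ([Errno 111]) and kombu re-raises it through _reraise_as_library_errors as kombu.exceptions.OperationalError, so every polled sample fails to publish until the broker is reachable again. A minimal sketch, assuming a RabbitMQ-style broker URL (host and credentials here are hypothetical), that reproduces and detects this failure mode:

    # Minimal sketch: kombu wraps the socket-level ConnectionRefusedError in
    # OperationalError, exactly as in the tracebacks above. URL is an assumption.
    from kombu import Connection
    from kombu.exceptions import OperationalError

    BROKER_URL = "amqp://guest:guest@127.0.0.1:5672//"  # hypothetical broker

    def broker_reachable(url: str = BROKER_URL) -> bool:
        try:
            with Connection(url, connect_timeout=2) as conn:
                conn.ensure_connection(max_retries=1)
            return True
        except OperationalError as exc:
            print(f"broker unreachable: {exc}")  # "[Errno 111] Connection refused"
            return False

    if __name__ == "__main__":
        broker_reachable()

oslo.messaging takes the same path internally (ensure_connection -> retry_over_time -> _establish_connection), which is why each sample in the polling cycle emits an identical traceback rather than one aggregated error.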
Dec 5 04:44:14 localhost podman[249414]: 2025-12-05 09:44:14.192353654 +0000 UTC m=+0.075995495 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:44:14 localhost podman[249414]: 2025-12-05 09:44:14.203324457 +0000 UTC m=+0.086966298 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:44:14 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
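The node_exporter container above publishes host port 9100 ('ports': ['9100:9100']), while the kernel DROPPING records in this section show inbound SYNs to ports 9100 and 9102 being discarded on br-ex, so the scraper at 192.168.122.10 never completes a handshake. A small probe, assuming node_exporter's standard /metrics endpoint, that distinguishes a healthy exporter from a refused or filtered port:

    # Probe sketch (host/port from the firewall records in this section; the
    # /metrics path is node_exporter's standard endpoint).
    import socket
    import urllib.request

    HOST, PORT = "192.168.122.106", 9100

    def probe(host: str = HOST, port: int = PORT, timeout: float = 3.0) -> str:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass
        except ConnectionRefusedError:
            return "closed: RST received, nothing listening"
        except socket.timeout:
            return "filtered: no reply, consistent with a firewall DROP"
        # TCP connect worked; fetch metrics the way a Prometheus scrape would.
        with urllib.request.urlopen(f"http://{host}:{port}/metrics", timeout=timeout) as r:
            return f"healthy: HTTP {r.status}, {len(r.read())} bytes of metrics"

    if __name__ == "__main__":
        print(probe())

A DROP rule makes the probe time out, whereas a REJECT or a stopped exporter refuses it immediately; that difference is what the sketch surfaces.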
Dec 5 04:44:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52020 DF PROTO=TCP SPT=48322 DPT=9100 SEQ=716531243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC56BA470000000001030307) Dec 5 04:44:14 localhost python3.9[249529]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:44:15 localhost python3.9[249640]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:44:16 localhost python3.9[249752]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:44:16 localhost python3.9[249865]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:44:17 localhost systemd[1]: session-56.scope: Deactivated successfully. Dec 5 04:44:17 localhost systemd[1]: session-56.scope: Consumed 12.714s CPU time. Dec 5 04:44:17 localhost systemd-logind[760]: Session 56 logged out. Waiting for processes to exit. Dec 5 04:44:17 localhost systemd-logind[760]: Removed session 56. 
Dec 5 04:44:17 localhost nova_compute[229251]: 2025-12-05 09:44:17.429 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:44:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43871 DF PROTO=TCP SPT=42272 DPT=9102 SEQ=3167636357 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC56CA460000000001030307) Dec 5 04:44:19 localhost podman[239519]: time="2025-12-05T09:44:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:44:19 localhost podman[239519]: @ - - [05/Dec/2025:09:44:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144564 "" "Go-http-client/1.1" Dec 5 04:44:19 localhost podman[239519]: @ - - [05/Dec/2025:09:44:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16342 "" "Go-http-client/1.1" Dec 5 04:44:22 localhost nova_compute[229251]: 2025-12-05 09:44:22.432 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:44:22 localhost nova_compute[229251]: 2025-12-05 09:44:22.434 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:44:22 localhost nova_compute[229251]: 2025-12-05 09:44:22.434 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:44:22 localhost nova_compute[229251]: 2025-12-05 09:44:22.435 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:44:22 localhost nova_compute[229251]: 2025-12-05 09:44:22.490 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:44:22 localhost nova_compute[229251]: 2025-12-05 09:44:22.490 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:44:23 localhost sshd[249884]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:44:23 localhost systemd-logind[760]: New session 57 of user zuul. Dec 5 04:44:23 localhost systemd[1]: Started Session 57 of User zuul. Dec 5 04:44:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:44:24 localhost podman[249998]: 2025-12-05 09:44:24.096471699 +0000 UTC m=+0.088090501 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7) Dec 5 04:44:24 localhost podman[249998]: 2025-12-05 09:44:24.112578568 +0000 UTC m=+0.104197330 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal) Dec 5 04:44:24 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
Dec 5 04:44:24 localhost python3.9[249997]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:44:24 localhost python3.9[250126]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:44:25 localhost python3.9[250236]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:44:26 localhost python3.9[250344]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43872 DF PROTO=TCP SPT=42272 DPT=9102 SEQ=3167636357 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC56EA460000000001030307) Dec 5 04:44:26 localhost python3.9[250430]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927865.6142921-104-54390406666459/.source.yaml follow=False _original_basename=neutron_sriov_agent.yaml.j2 checksum=d3942d8476d006ea81540d2a1d96dd9d67f33f5f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:44:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
Dec 5 04:44:27 localhost openstack_network_exporter[241668]: ERROR 09:44:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:44:27 localhost openstack_network_exporter[241668]: ERROR 09:44:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:44:27 localhost openstack_network_exporter[241668]: ERROR 09:44:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:44:27 localhost openstack_network_exporter[241668]: Dec 5 04:44:27 localhost openstack_network_exporter[241668]: ERROR 09:44:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:44:27 localhost openstack_network_exporter[241668]: ERROR 09:44:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:44:27 localhost openstack_network_exporter[241668]: Dec 5 04:44:27 localhost systemd[1]: tmp-crun.bKufCQ.mount: Deactivated successfully. Dec 5 04:44:27 localhost podman[250500]: 2025-12-05 09:44:27.211357684 +0000 UTC m=+0.097998362 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible) Dec 5 04:44:27 localhost podman[250500]: 2025-12-05 09:44:27.254772801 +0000 UTC m=+0.141413429 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': 
{'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 04:44:27 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:44:27 localhost python3.9[250560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:27 localhost nova_compute[229251]: 2025-12-05 09:44:27.491 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:44:27 localhost python3.9[250646]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927867.0021718-149-58685289881890/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:44:28 localhost python3.9[250754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:28 localhost python3.9[250840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927868.0608637-149-215540303956441/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:44:30 localhost python3.9[250948]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:30 localhost python3.9[251034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927869.0534384-149-207787366120043/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=03af6930db24f0dd45e214ca1c18fda281555fc1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:44:32 localhost python3.9[251142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:32 localhost nova_compute[229251]: 2025-12-05 09:44:32.493 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:44:32 localhost nova_compute[229251]: 2025-12-05 09:44:32.495 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:44:32 localhost nova_compute[229251]: 2025-12-05 09:44:32.495 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:44:32 localhost nova_compute[229251]: 2025-12-05 09:44:32.496 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:44:32 localhost nova_compute[229251]: 2025-12-05 09:44:32.517 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:44:32 localhost nova_compute[229251]: 2025-12-05 09:44:32.518 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:44:32 localhost python3.9[251228]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927872.032425-323-70425426023815/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=75518b9ca1c9a507fba8f4d8f8342e6edd3bf5ee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:44:33 localhost python3.9[251336]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. 
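Each ansible.legacy.copy record above carries a checksum=... field: the copy module hashes the rendered source with SHA-1 (checksum_algorithm=sha1, per the matching stat records) and rewrites the destination only when the digests differ, which is what makes these configuration tasks idempotent. A small equivalent of that comparison; the path and digest below are the ones reported for 10-neutron-sriov.conf:

    # Idempotence-check sketch matching ansible's checksum_algorithm=sha1.
    import hashlib

    def sha1sum(path, chunk=65536):
        digest = hashlib.sha1()
        with open(path, "rb") as handle:
            while block := handle.read(chunk):
                digest.update(block)
        return digest.hexdigest()

    def needs_copy(dest, expected):
        """True when dest is missing or its SHA-1 differs from the rendered source."""
        try:
            return sha1sum(dest) != expected
        except FileNotFoundError:
            return True

    print(needs_copy(
        "/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf",
        "75518b9ca1c9a507fba8f4d8f8342e6edd3bf5ee",
    ))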
Dec 5 04:44:34 localhost podman[251449]: 2025-12-05 09:44:34.247849472 +0000 UTC m=+0.071910681 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 04:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:44:34 localhost podman[251449]: 2025-12-05 09:44:34.294716213 +0000 UTC m=+0.118777472 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:44:34 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 04:44:34 localhost python3.9[251448]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:44:34 localhost systemd[1]: tmp-crun.v7X9sI.mount: Deactivated successfully. 
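The "Started /usr/bin/podman healthcheck run <id>" units in this log execute each container's configured healthcheck test and stamp the result onto the container, which is how the podman_exporter record above comes to carry health_status=unhealthy while its neighbours report healthy. A sketch that triggers the same check by hand and reads the stored status back; the container ID is taken from the log, and the two inspect format strings are assumptions, since podman has exposed the field under different names across releases:

    # Healthcheck sketch around 'podman healthcheck run', as invoked by systemd above.
    import subprocess

    CID = "192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6"

    def run_healthcheck(cid):
        """Exit status 0 means the in-container healthcheck command succeeded."""
        return subprocess.run(["podman", "healthcheck", "run", cid]).returncode == 0

    def stored_status(cid):
        # Field name varies by podman version; try both known locations.
        for fmt in ("{{.State.Health.Status}}", "{{.State.Healthcheck.Status}}"):
            result = subprocess.run(["podman", "inspect", "-f", fmt, cid],
                                    capture_output=True, text=True)
            if result.returncode == 0 and result.stdout.strip():
                return result.stdout.strip()
        return "unknown"

    if __name__ == "__main__":
        print(run_healthcheck(CID), stored_status(CID))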
Dec 5 04:44:34 localhost podman[251470]: 2025-12-05 09:44:34.389523967 +0000 UTC m=+0.114134611 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:44:34 localhost podman[251470]: 2025-12-05 09:44:34.418027252 +0000 UTC m=+0.142637926 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 5 04:44:34 localhost systemd[1]: tmp-crun.MmmkIq.mount: Deactivated successfully. Dec 5 04:44:34 localhost podman[251485]: 2025-12-05 09:44:34.433068357 +0000 UTC m=+0.093446473 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 5 04:44:34 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
Dec 5 04:44:34 localhost podman[251485]: 2025-12-05 09:44:34.44371951 +0000 UTC m=+0.104097666 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 5 04:44:34 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
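Each "Started /usr/bin/podman healthcheck run <ID>" record marks a transient systemd service invoking the probe, and the matching "<ID>.service: Deactivated successfully." line means the probe exited 0. When running under systemd, podman schedules these probes with per-container transient timer/service pairs named after the full container ID, so they can be inspected like any other unit (a sketch, using the ovn_metadata_agent container ID from the log; the exact unit naming is an assumption based on podman's systemd healthcheck scheduling):

    # List the transient healthcheck timer for this container
    systemctl list-timers --all | grep 1a95b7f37074
    # Show the last probe run recorded by the transient service
    systemctl status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service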
Dec 5 04:44:34 localhost python3.9[251621]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:35 localhost python3.9[251678]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:44:35 localhost python3.9[251788]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:36 localhost python3.9[251845]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:44:37 localhost python3.9[251955]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:44:37 localhost nova_compute[229251]: 2025-12-05 09:44:37.518 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:44:37 localhost auditd[725]: Audit daemon rotating log files Dec 5 04:44:37 localhost python3.9[252065]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:38 localhost python3.9[252122]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:44:38 localhost python3.9[252232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:39 localhost python3.9[252289]: ansible-ansible.legacy.file Invoked with group=root 
mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:44:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:44:40 localhost podman[252377]: 2025-12-05 09:44:40.200398987 +0000 UTC m=+0.083365769 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Dec 5 04:44:40 localhost podman[252377]: 2025-12-05 09:44:40.263578102 +0000 UTC m=+0.146544864 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 
04:44:40 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:44:40 localhost python3.9[252413]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:44:40 localhost systemd[1]: Reloading. Dec 5 04:44:40 localhost systemd-rc-local-generator[252449]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:44:40 localhost systemd-sysv-generator[252454]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:44:40 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:40 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:40 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:40 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:44:40 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:40 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:40 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:40 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61689 DF PROTO=TCP SPT=36256 DPT=9102 SEQ=930635601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5723970000000001030307) Dec 5 04:44:42 localhost python3.9[252572]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61690 DF PROTO=TCP SPT=36256 DPT=9102 SEQ=930635601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5727850000000001030307) Dec 5 04:44:42 localhost python3.9[252629]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None 
serole=None selevel=None setype=None attributes=None Dec 5 04:44:42 localhost nova_compute[229251]: 2025-12-05 09:44:42.520 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:44:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43873 DF PROTO=TCP SPT=42272 DPT=9102 SEQ=3167636357 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC572A460000000001030307) Dec 5 04:44:43 localhost python3.9[252739]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61691 DF PROTO=TCP SPT=36256 DPT=9102 SEQ=930635601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC572F850000000001030307) Dec 5 04:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:44:44 localhost python3.9[252796]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:44:44 localhost systemd[1]: tmp-crun.eyJU1r.mount: Deactivated successfully. 
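The kernel "DROPPING:" records above show SYN packets from 192.168.122.10 to TCP port 9102 on br-ex being logged and discarded; the repeating SEQ=930635601 with the same source port 36256 and incrementing IP IDs (61689, 61690, 61691, ...) is one blocked connection attempt being retransmitted. The MACSRC=/MACDST=/MACPROTO= field layout is characteristic of the nftables log action. A minimal sketch of a rule that would produce this prefix (the host's actual ruleset is not shown in the log, so table and chain names here are assumptions):

    # Hypothetical nftables rule: log with the observed prefix, then drop
    nft add rule inet filter input iifname "br-ex" tcp dport 9102 log prefix "DROPPING: " drop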
Dec 5 04:44:44 localhost podman[252797]: 2025-12-05 09:44:44.405594832 +0000 UTC m=+0.090886978 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:44:44 localhost podman[252797]: 2025-12-05 09:44:44.440650184 +0000 UTC m=+0.125942300 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 04:44:44 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:44:45 localhost python3.9[252929]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:44:45 localhost systemd[1]: Reloading. 
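The node_exporter healthcheck above shows the exporter running with --collector.systemd and a unit-include filter limited to edpm_*, ovs*, openvswitch, virt* and rsyslog units, published on host port 9100. What the filter admits can be checked by scraping the endpoint directly (a sketch; node_systemd_unit_state is the standard series emitted by node_exporter's systemd collector):

    # Scrape the exporter and show systemd unit-state series for the whitelisted units
    curl -s http://localhost:9100/metrics | grep '^node_systemd_unit_state{' | head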
Dec 5 04:44:45 localhost systemd-rc-local-generator[252953]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:44:45 localhost systemd-sysv-generator[252958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:44:45 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:45 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:44:45 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:45 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:45 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:45 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:44:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33456 DF PROTO=TCP SPT=37240 DPT=9102 SEQ=1130613020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5734460000000001030307) Dec 5 04:44:45 localhost systemd[1]: Starting Create netns directory... Dec 5 04:44:45 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 5 04:44:45 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 5 04:44:45 localhost systemd[1]: Finished Create netns directory. 
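The 91-netns-placeholder.preset file installed above declares whether the unit defaults to enabled under systemctl preset / preset-all; the ansible systemd task that follows then enables and starts the unit directly. The likely one-line content and how the policy is applied (the file body is an assumption inferred from the naming convention; only the file path appears in the log):

    # Assumed content of /etc/systemd/system-preset/91-netns-placeholder.preset
    enable netns-placeholder.service
    # Apply the preset policy to the unit
    systemctl preset netns-placeholder.service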
Dec 5 04:44:46 localhost python3.9[253081]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:44:47 localhost python3.9[253191]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:44:47 localhost nova_compute[229251]: 2025-12-05 09:44:47.523 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:44:48 localhost python3.9[253279]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927887.0274782-734-232621456756676/.source.json _original_basename=.rubbejnl follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:44:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61692 DF PROTO=TCP SPT=36256 DPT=9102 SEQ=930635601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC573F450000000001030307) Dec 5 04:44:48 localhost python3.9[253389]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:44:49 localhost podman[239519]: time="2025-12-05T09:44:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:44:49 localhost podman[239519]: @ - - [05/Dec/2025:09:44:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144562 "" "Go-http-client/1.1" Dec 5 04:44:49 localhost podman[239519]: @ - - [05/Dec/2025:09:44:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16343 "" "Go-http-client/1.1" Dec 5 04:44:50 localhost python3.9[253697]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False Dec 5 04:44:52 localhost python3.9[253807]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 5 04:44:52 localhost nova_compute[229251]: 2025-12-05 09:44:52.526 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:44:52 localhost nova_compute[229251]: 2025-12-05 09:44:52.527 229255 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:44:52 localhost nova_compute[229251]: 2025-12-05 09:44:52.527 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:44:52 localhost nova_compute[229251]: 2025-12-05 09:44:52.527 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:44:52 localhost nova_compute[229251]: 2025-12-05 09:44:52.528 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:44:53 localhost python3.9[253917]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 5 04:44:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:44:55 localhost podman[253962]: 2025-12-05 09:44:55.203150497 +0000 UTC m=+0.077215652 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, architecture=x86_64, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 5 04:44:55 localhost podman[253962]: 2025-12-05 09:44:55.223613487 +0000 UTC m=+0.097678612 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, build-date=2025-08-20T13:12:41) Dec 5 04:44:55 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
Dec 5 04:44:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61693 DF PROTO=TCP SPT=36256 DPT=9102 SEQ=930635601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5760460000000001030307) Dec 5 04:44:57 localhost openstack_network_exporter[241668]: ERROR 09:44:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:44:57 localhost openstack_network_exporter[241668]: ERROR 09:44:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:44:57 localhost openstack_network_exporter[241668]: ERROR 09:44:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:44:57 localhost openstack_network_exporter[241668]: ERROR 09:44:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:44:57 localhost openstack_network_exporter[241668]: Dec 5 04:44:57 localhost openstack_network_exporter[241668]: ERROR 09:44:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:44:57 localhost openstack_network_exporter[241668]: Dec 5 04:44:57 localhost nova_compute[229251]: 2025-12-05 09:44:57.530 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:44:57 localhost systemd[1]: tmp-crun.eI3pWy.mount: Deactivated successfully. Dec 5 04:44:57 localhost podman[254075]: 2025-12-05 09:44:57.957797854 +0000 UTC m=+0.087943057 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, 
container_name=multipathd) Dec 5 04:44:57 localhost podman[254075]: 2025-12-05 09:44:57.97149758 +0000 UTC m=+0.101642793 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3) Dec 5 04:44:57 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. 
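The openstack_network_exporter ERROR lines earlier ("no control socket files found for ovn-northd", "please specify an existing datapath") are expected on this host: it runs nova_compute and ovn_controller, ovn-northd lives on the control plane, and the OVS datapath is kernel-based, so the dpif-netdev (userspace PMD) queries have nothing to report. The exporter's failing calls correspond to these standard OVS/OVN admin commands (control-socket locations assumed default):

    ovn-appctl -t ovn-northd status         # fails here: no local ovn-northd control socket
    ovs-appctl dpif-netdev/pmd-perf-show    # fails: no userspace (netdev/PMD) datapath configured
    ovs-appctl dpif-netdev/pmd-rxq-show     # same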
Dec 5 04:44:58 localhost python3[254076]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 5 04:44:58 localhost podman[254131]: Dec 5 04:44:58 localhost podman[254131]: 2025-12-05 09:44:58.410079709 +0000 UTC m=+0.076033906 container create 2687fcba8ec158de19ad364e04d4f1fddf978c9598d570c93f43e7c52b99fde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3a96255b0443add41c2e945fe0ed25c525767e2fb33639e30bd67d01fa72bd51'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=neutron_sriov_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_sriov_agent, managed_by=edpm_ansible) Dec 5 04:44:58 localhost podman[254131]: 2025-12-05 09:44:58.366016153 +0000 UTC m=+0.031970340 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Dec 5 04:44:58 localhost python3[254076]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=3a96255b0443add41c2e945fe0ed25c525767e2fb33639e30bd67d01fa72bd51 --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3a96255b0443add41c2e945fe0ed25c525767e2fb33639e30bd67d01fa72bd51'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume 
/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Dec 5 04:44:59 localhost python3.9[254279]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:44:59 localhost python3.9[254391]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:45:00 localhost python3.9[254446]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:45:00 localhost python3.9[254555]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927900.3751767-998-89819953012153/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:45:02 localhost python3.9[254610]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:45:02 localhost systemd[1]: Reloading. Dec 5 04:45:02 localhost systemd-rc-local-generator[254638]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:45:02 localhost systemd-sysv-generator[254641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:45:02 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:02 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:02 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:02 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
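Taken together, the podman create above (note --conmon-pidfile /run/neutron_sriov_agent.pid) and the edpm_neutron_sriov_agent.service file copied into /etc/systemd/system suggest a forking-style wrapper unit around the pre-created container. A hypothetical reconstruction of its shape (the real unit body is not in the log; every line below is an assumption, kept consistent with the flags shown):

    # /etc/systemd/system/edpm_neutron_sriov_agent.service (hypothetical sketch)
    [Unit]
    Description=neutron_sriov_agent container
    After=network-online.target

    [Service]
    Type=forking
    Restart=always
    PIDFile=/run/neutron_sriov_agent.pid          # matches --conmon-pidfile from podman create
    ExecStart=/usr/bin/podman start neutron_sriov_agent
    ExecStop=/usr/bin/podman stop -t 10 neutron_sriov_agent

    [Install]
    WantedBy=multi-user.target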
Dec 5 04:45:02 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:02 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:02 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:02 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:02 localhost nova_compute[229251]: 2025-12-05 09:45:02.531 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:45:03 localhost python3.9[254701]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:45:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:45:03.892 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:45:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:45:03.893 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:45:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:45:03.895 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:45:03 localhost systemd[1]: Reloading. Dec 5 04:45:04 localhost systemd-rc-local-generator[254733]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:45:04 localhost systemd-sysv-generator[254736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:45:04 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:04 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:04 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:04 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 5 04:45:04 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:04 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:04 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:04 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:04 localhost systemd[1]: Starting neutron_sriov_agent container... Dec 5 04:45:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:45:04 localhost systemd[1]: tmp-crun.MKuTGD.mount: Deactivated successfully. Dec 5 04:45:04 localhost systemd[1]: Started libcrun container. Dec 5 04:45:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0537ae9fbea71ce05bd411ade2cc2240beec097f41af1ff378c3e940d99654/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 5 04:45:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0537ae9fbea71ce05bd411ade2cc2240beec097f41af1ff378c3e940d99654/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 04:45:04 localhost podman[254743]: 2025-12-05 09:45:04.416077085 +0000 UTC m=+0.145186184 container init 2687fcba8ec158de19ad364e04d4f1fddf978c9598d570c93f43e7c52b99fde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3a96255b0443add41c2e945fe0ed25c525767e2fb33639e30bd67d01fa72bd51'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3) Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: + sudo -E kolla_set_configs Dec 5 04:45:04 localhost podman[254757]: 2025-12-05 09:45:04.446876389 +0000 UTC m=+0.053733660 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': 
True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 04:45:04 localhost podman[254757]: 2025-12-05 09:45:04.453910822 +0000 UTC m=+0.060768093 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:45:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:45:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:45:04 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 04:45:04 localhost podman[254743]: 2025-12-05 09:45:04.486691916 +0000 UTC m=+0.215801005 container start 2687fcba8ec158de19ad364e04d4f1fddf978c9598d570c93f43e7c52b99fde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3a96255b0443add41c2e945fe0ed25c525767e2fb33639e30bd67d01fa72bd51'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:45:04 localhost podman[254743]: neutron_sriov_agent Dec 5 04:45:04 localhost systemd[1]: Started neutron_sriov_agent container. 
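The kolla_set_configs INFO lines that follow are driven by /var/lib/kolla/config_files/config.json, bind-mounted from the neutron_sriov_agent.json written earlier. Its shape, reconstructed from those messages (the command and the 01-rootwrap.conf copy appear in the log; owner/perm values and the permissions entry are assumptions following the usual kolla schema):

    # Hypothetical reconstruction; values not present in the log are assumed
    cat /var/lib/kolla/config_files/neutron_sriov_agent.json
    {
      "command": "/usr/bin/neutron-sriov-nic-agent",
      "config_files": [
        {
          "source": "/etc/neutron.conf.d/01-rootwrap.conf",
          "dest": "/etc/neutron/rootwrap.conf",
          "owner": "neutron",
          "perm": "0600"
        }
      ],
      "permissions": [
        {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": true}
      ]
    }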
Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Validating config file Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Copying service configuration files Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Writing out command to execute Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.pid.haproxy Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.conf Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: ++ cat /run_command Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: + CMD=/usr/bin/neutron-sriov-nic-agent Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: + ARGS= Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: + sudo kolla_copy_cacerts Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: Running command: '/usr/bin/neutron-sriov-nic-agent' Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: + [[ ! -n '' ]] Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: + . 
kolla_extend_start Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: + umask 0022 Dec 5 04:45:04 localhost neutron_sriov_agent[254759]: + exec /usr/bin/neutron-sriov-nic-agent Dec 5 04:45:04 localhost podman[254789]: 2025-12-05 09:45:04.531515245 +0000 UTC m=+0.063651921 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 5 04:45:04 localhost podman[254789]: 2025-12-05 09:45:04.567760494 +0000 UTC m=+0.099897170 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 04:45:04 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:45:04 localhost podman[254790]: 2025-12-05 09:45:04.659515717 +0000 UTC m=+0.187538748 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true) Dec 5 04:45:04 localhost podman[254790]: 2025-12-05 09:45:04.67051293 +0000 UTC m=+0.198535941 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm) Dec 5 04:45:04 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:45:05 localhost python3.9[254937]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:45:05 localhost systemd[1]: Stopping neutron_sriov_agent container... Dec 5 04:45:05 localhost systemd[1]: libpod-2687fcba8ec158de19ad364e04d4f1fddf978c9598d570c93f43e7c52b99fde8.scope: Deactivated successfully. Dec 5 04:45:05 localhost podman[254942]: 2025-12-05 09:45:05.394982047 +0000 UTC m=+0.072048065 container died 2687fcba8ec158de19ad364e04d4f1fddf978c9598d570c93f43e7c52b99fde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3a96255b0443add41c2e945fe0ed25c525767e2fb33639e30bd67d01fa72bd51'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3) Dec 5 04:45:05 localhost systemd[1]: tmp-crun.YyKU6r.mount: Deactivated successfully. Dec 5 04:45:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2687fcba8ec158de19ad364e04d4f1fddf978c9598d570c93f43e7c52b99fde8-userdata-shm.mount: Deactivated successfully. 
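The `ansible-ansible.builtin.systemd` task logged above restarts `edpm_neutron_sriov_agent.service`, which systemd expands into the stop / cleanup / start sequence in the surrounding records. A rough manual equivalent of that task, assuming the same unit name, is simply:

```python
import subprocess

UNIT = "edpm_neutron_sriov_agent.service"  # unit name taken from the log above

# Mirror the logged Ansible task: restart the unit, then confirm it came back.
subprocess.run(["systemctl", "restart", UNIT], check=True)
state = subprocess.run(["systemctl", "is-active", UNIT],
                       capture_output=True, text=True).stdout.strip()
print(f"{UNIT}: {state}")
```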
Dec 5 04:45:05 localhost podman[254942]: 2025-12-05 09:45:05.445802298 +0000 UTC m=+0.122868286 container cleanup 2687fcba8ec158de19ad364e04d4f1fddf978c9598d570c93f43e7c52b99fde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3a96255b0443add41c2e945fe0ed25c525767e2fb33639e30bd67d01fa72bd51'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Dec 5 04:45:05 localhost podman[254942]: neutron_sriov_agent Dec 5 04:45:05 localhost podman[254956]: 2025-12-05 09:45:05.447888532 +0000 UTC m=+0.049402049 container cleanup 2687fcba8ec158de19ad364e04d4f1fddf978c9598d570c93f43e7c52b99fde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3a96255b0443add41c2e945fe0ed25c525767e2fb33639e30bd67d01fa72bd51'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 5 04:45:05 localhost podman[254971]: 2025-12-05 09:45:05.516164853 +0000 UTC m=+0.039755538 container cleanup 2687fcba8ec158de19ad364e04d4f1fddf978c9598d570c93f43e7c52b99fde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=neutron_sriov_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3a96255b0443add41c2e945fe0ed25c525767e2fb33639e30bd67d01fa72bd51'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent) Dec 5 04:45:05 localhost podman[254971]: neutron_sriov_agent Dec 5 04:45:05 localhost systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully. Dec 5 04:45:05 localhost systemd[1]: Stopped neutron_sriov_agent container. Dec 5 04:45:05 localhost systemd[1]: Starting neutron_sriov_agent container... Dec 5 04:45:05 localhost systemd[1]: Started libcrun container. Dec 5 04:45:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0537ae9fbea71ce05bd411ade2cc2240beec097f41af1ff378c3e940d99654/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 5 04:45:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0537ae9fbea71ce05bd411ade2cc2240beec097f41af1ff378c3e940d99654/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 04:45:05 localhost podman[254983]: 2025-12-05 09:45:05.623075314 +0000 UTC m=+0.077798550 container init 2687fcba8ec158de19ad364e04d4f1fddf978c9598d570c93f43e7c52b99fde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=neutron_sriov_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3a96255b0443add41c2e945fe0ed25c525767e2fb33639e30bd67d01fa72bd51'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent) Dec 5 04:45:05 localhost podman[254983]: 2025-12-05 09:45:05.628653443 +0000 UTC m=+0.083376679 container start 2687fcba8ec158de19ad364e04d4f1fddf978c9598d570c93f43e7c52b99fde8 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3a96255b0443add41c2e945fe0ed25c525767e2fb33639e30bd67d01fa72bd51'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:45:05 localhost podman[254983]: neutron_sriov_agent Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: + sudo -E kolla_set_configs Dec 5 04:45:05 localhost systemd[1]: Started neutron_sriov_agent container. Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Validating config file Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Copying service configuration files Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Writing out command to execute Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /var/lib/neutron Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: 
INFO:__main__:Setting permission for /var/lib/neutron/external/pids Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.pid.haproxy Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.conf Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: ++ cat /run_command Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: + CMD=/usr/bin/neutron-sriov-nic-agent Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: + ARGS= Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: + sudo kolla_copy_cacerts Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: Running command: '/usr/bin/neutron-sriov-nic-agent' Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: + [[ ! -n '' ]] Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: + . kolla_extend_start Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: + umask 0022 Dec 5 04:45:05 localhost neutron_sriov_agent[254996]: + exec /usr/bin/neutron-sriov-nic-agent Dec 5 04:45:06 localhost systemd-logind[760]: Session 57 logged out. Waiting for processes to exit. Dec 5 04:45:06 localhost systemd[1]: session-57.scope: Deactivated successfully. Dec 5 04:45:06 localhost systemd[1]: session-57.scope: Consumed 23.055s CPU time. Dec 5 04:45:06 localhost systemd-logind[760]: Removed session 57. Dec 5 04:45:07 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:07.250 2 INFO neutron.common.config [-] Logging enabled!#033[00m Dec 5 04:45:07 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:07.250 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43#033[00m Dec 5 04:45:07 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:07.250 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Dec 5 04:45:07 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:07.251 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Dec 5 04:45:07 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:07.251 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Dec 5 04:45:07 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:07.251 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Dec 5 04:45:07 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:07.251 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005546419.localdomain'}#033[00m Dec 5 04:45:07 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:07.251 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-2bbaf027-6cd4-4106-afe9-1c974d1ee8d6 - - - - - -] RPC agent_id: nic-switch-agent.np0005546419.localdomain#033[00m Dec 5 04:45:07 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:07.256 2 INFO neutron.agent.agent_extensions_manager [None req-2bbaf027-6cd4-4106-afe9-1c974d1ee8d6 - - - - - -] Loaded agent extensions: ['qos']#033[00m Dec 5 04:45:07 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:07.256 2 
INFO neutron.agent.agent_extensions_manager [None req-2bbaf027-6cd4-4106-afe9-1c974d1ee8d6 - - - - - -] Initializing agent extension 'qos'#033[00m Dec 5 04:45:07 localhost nova_compute[229251]: 2025-12-05 09:45:07.534 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:08 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:08.102 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-2bbaf027-6cd4-4106-afe9-1c974d1ee8d6 - - - - - -] Agent initialized successfully, now running... #033[00m Dec 5 04:45:08 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:08.105 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-2bbaf027-6cd4-4106-afe9-1c974d1ee8d6 - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Dec 5 04:45:08 localhost neutron_sriov_agent[254996]: 2025-12-05 09:45:08.105 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-2bbaf027-6cd4-4106-afe9-1c974d1ee8d6 - - - - - -] Agent out of sync with plugin!#033[00m Dec 5 04:45:09 localhost nova_compute[229251]: 2025-12-05 09:45:09.521 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:45:09 localhost nova_compute[229251]: 2025-12-05 09:45:09.549 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:45:10 localhost nova_compute[229251]: 2025-12-05 09:45:10.270 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:45:10 localhost nova_compute[229251]: 2025-12-05 09:45:10.271 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 04:45:10 localhost nova_compute[229251]: 2025-12-05 09:45:10.271 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 04:45:10 localhost nova_compute[229251]: 2025-12-05 09:45:10.761 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:45:10 localhost nova_compute[229251]: 2025-12-05 09:45:10.761 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:45:10 localhost nova_compute[229251]: 2025-12-05 09:45:10.762 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully 
refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:45:10 localhost nova_compute[229251]: 2025-12-05 09:45:10.762 229255 DEBUG nova.objects.instance [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:45:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:45:11 localhost podman[255114]: 2025-12-05 09:45:11.17765541 +0000 UTC m=+0.065594989 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:45:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45172 DF PROTO=TCP SPT=43802 DPT=9102 SEQ=1536236263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5798C60000000001030307) Dec 5 04:45:11 localhost podman[255114]: 2025-12-05 09:45:11.243770506 +0000 UTC m=+0.131710065 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2) Dec 5 04:45:11 localhost nova_compute[229251]: 2025-12-05 09:45:11.252 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:45:11 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:45:11 localhost nova_compute[229251]: 2025-12-05 09:45:11.273 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:45:11 localhost nova_compute[229251]: 2025-12-05 09:45:11.274 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:45:11 localhost nova_compute[229251]: 2025-12-05 09:45:11.275 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:45:11 localhost nova_compute[229251]: 2025-12-05 09:45:11.275 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:45:11 localhost sshd[255138]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:45:11 localhost systemd-logind[760]: New session 58 of user zuul. Dec 5 04:45:11 localhost systemd[1]: Started Session 58 of User zuul. 
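The `_heal_instance_info_cache` entry above dumps the instance's `network_info` as JSON; the interesting bits are the fixed address on the private subnet and the floating IP attached to it. A small sketch of pulling both out of that structure (field names and values copied from the logged payload, trimmed for brevity):

```python
import json

# Trimmed copy of the network_info payload nova logged above.
network_info = json.loads("""
[{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb",
  "address": "fa:16:3e:04:e6:3a",
  "network": {"subnets": [{"cidr": "192.168.0.0/24",
    "ips": [{"address": "192.168.0.214", "type": "fixed",
             "floating_ips": [{"address": "192.168.122.20",
                               "type": "floating"}]}]}]}}]
""")

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["address"], ip["address"], "->", floats)
```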
Dec 5 04:45:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45173 DF PROTO=TCP SPT=43802 DPT=9102 SEQ=1536236263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC579CC50000000001030307) Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.270 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.270 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.270 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.295 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.297 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.297 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.298 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.298 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.537 
229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.773 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.857 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:45:12 localhost nova_compute[229251]: 2025-12-05 09:45:12.858 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.062 229255 WARNING nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.064 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12169MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.064 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.064 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:45:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61694 DF PROTO=TCP SPT=36256 DPT=9102 SEQ=930635601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC57A0450000000001030307) Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.140 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.141 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.141 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.183 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:45:13 localhost python3.9[255271]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.676 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.684 229255 DEBUG nova.compute.provider_tree 
[None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.706 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.709 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:45:13 localhost nova_compute[229251]: 2025-12-05 09:45:13.709 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:45:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45174 DF PROTO=TCP SPT=43802 DPT=9102 SEQ=1536236263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC57A4C50000000001030307) Dec 5 04:45:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:45:14 localhost nova_compute[229251]: 2025-12-05 09:45:14.710 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:45:14 localhost nova_compute[229251]: 2025-12-05 09:45:14.710 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:45:14 localhost systemd[1]: tmp-crun.EqM7U2.mount: Deactivated successfully. 
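The kernel `DROPPING:` lines scattered above are netfilter-style LOG output: the same SYN from 192.168.122.10 to port 9102 on br-ex keeps being dropped and retransmitted (note the unchanged SEQ across retries). A quick sketch for parsing the KEY=VALUE fields out of such a line; the `DROPPING:` prefix is whatever the local LOG rule set, so treat the exact rule source as unknown here:

```python
import re

line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
        "SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TTL=62 "
        "PROTO=TCP SPT=43802 DPT=9102 SYN")

# Collect KEY=VALUE tokens; bare flags like SYN carry no '=' and are skipped.
fields = dict(re.findall(r"(\w+)=(\S*)", line))
print(f"{fields['SRC']}:{fields['SPT']} -> "
      f"{fields['DST']}:{fields['DPT']} ({fields['PROTO']})")
```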
Dec 5 04:45:14 localhost podman[255408]: 2025-12-05 09:45:14.77794921 +0000 UTC m=+0.104643525 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:45:14 localhost podman[255408]: 2025-12-05 09:45:14.810125105 +0000 UTC m=+0.136819470 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:45:14 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
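The resource-tracker audit above shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (taking roughly half a second per call in the log) to size the RBD-backed disk pool. A sketch of running the same command and reading the cluster totals; the `stats` keys follow the usual `ceph df --format=json` layout, but treat them as an assumption if your Ceph release differs:

```python
import json
import subprocess

# Same command nova logs above; needs a reachable cluster and the
# "openstack" client keyring referenced by /etc/ceph/ceph.conf.
out = subprocess.run(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout

stats = json.loads(out)["stats"]  # assumed key layout for ceph df JSON
gib = 1024 ** 3
print(f"total={stats['total_bytes'] / gib:.1f} GiB "
      f"avail={stats['total_avail_bytes'] / gib:.1f} GiB")
```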
Dec 5 04:45:14 localhost python3.9[255407]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 5 04:45:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43874 DF PROTO=TCP SPT=42272 DPT=9102 SEQ=3167636357 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC57A8460000000001030307) Dec 5 04:45:15 localhost python3.9[255493]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:45:17 localhost nova_compute[229251]: 2025-12-05 09:45:17.539 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:45:17 localhost nova_compute[229251]: 2025-12-05 09:45:17.541 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:45:17 localhost nova_compute[229251]: 2025-12-05 09:45:17.541 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:45:17 localhost nova_compute[229251]: 2025-12-05 09:45:17.542 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:45:17 localhost nova_compute[229251]: 2025-12-05 09:45:17.543 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:17 localhost nova_compute[229251]: 2025-12-05 09:45:17.544 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:45:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45175 DF PROTO=TCP SPT=43802 DPT=9102 SEQ=1536236263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC57B4860000000001030307) Dec 5 04:45:19 localhost podman[239519]: time="2025-12-05T09:45:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:45:19 localhost podman[239519]: @ - - [05/Dec/2025:09:45:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146520 "" "Go-http-client/1.1" Dec 5 04:45:19 localhost podman[239519]: @ - - [05/Dec/2025:09:45:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16783 "" "Go-http-client/1.1" Dec 5 04:45:20 localhost python3.9[255605]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False 
scope=system no_block=False force=None Dec 5 04:45:21 localhost python3.9[255718]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:22 localhost python3.9[255828]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:22 localhost nova_compute[229251]: 2025-12-05 09:45:22.542 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:23 localhost python3.9[255938]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:23 localhost python3.9[256048]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:24 localhost python3.9[256158]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:25 localhost python3.9[256268]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
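The `podman[239519]` records just above show a client polling the libpod REST API over the daemon socket (`GET /v4.9.3/libpod/containers/json?all=true...`), the same `/run/podman/podman.sock` the podman_exporter container mounts as CONTAINER_HOST. A minimal stdlib sketch of issuing that request over the unix socket (root socket path assumed from the exporter's config; the `Names`/`State` fields are the usual libpod list-containers keys, taken here as an assumption):

```python
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix-domain socket instead of TCP."""

    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)


conn = UnixHTTPConnection("/run/podman/podman.sock")  # CONTAINER_HOST above
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
for ctr in json.loads(conn.getresponse().read()):
    print(ctr["Names"], ctr["State"])  # assumed libpod response fields
```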
Dec 5 04:45:25 localhost podman[256379]: 2025-12-05 09:45:25.615986551 +0000 UTC m=+0.078689807 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.buildah.version=1.33.7, distribution-scope=public, version=9.6) Dec 5 04:45:25 localhost podman[256379]: 2025-12-05 09:45:25.628986166 +0000 UTC m=+0.091689432 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41) Dec 5 04:45:25 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
Dec 5 04:45:25 localhost python3.9[256378]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45176 DF PROTO=TCP SPT=43802 DPT=9102 SEQ=1536236263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC57D4460000000001030307) Dec 5 04:45:26 localhost python3.9[256507]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:27 localhost openstack_network_exporter[241668]: ERROR 09:45:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:45:27 localhost openstack_network_exporter[241668]: ERROR 09:45:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:45:27 localhost openstack_network_exporter[241668]: ERROR 09:45:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:45:27 localhost openstack_network_exporter[241668]: ERROR 09:45:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:45:27 localhost openstack_network_exporter[241668]: Dec 5 04:45:27 localhost openstack_network_exporter[241668]: ERROR 09:45:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:45:27 localhost openstack_network_exporter[241668]: Dec 5 04:45:27 localhost nova_compute[229251]: 2025-12-05 09:45:27.545 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:27 localhost python3.9[256595]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927926.1887016-278-205242226986145/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
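Each "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pair above is a transient systemd unit firing a container's periodic healthcheck; the health_status=healthy labels in the podman records are its result. A minimal sketch of running the same check by hand follows; the container ID is copied from the log, and the exit-code reading (0 healthy, nonzero unhealthy or error) is standard podman behavior.

    import subprocess

    # Run one health check by hand -- the same command the transient
    # systemd unit above executes. Exit code 0 means healthy;
    # nonzero means unhealthy or a podman error.
    CID = '8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173'

    result = subprocess.run(
        ['podman', 'healthcheck', 'run', CID],
        capture_output=True, text=True,
    )
    print('healthy' if result.returncode == 0 else 'unhealthy', result.stdout)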
Dec 5 04:45:28 localhost python3.9[256703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:28 localhost podman[256704]: 2025-12-05 09:45:28.165065705 +0000 UTC m=+0.059665170 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 5 04:45:28 localhost podman[256704]: 2025-12-05 09:45:28.171849761 +0000 UTC m=+0.066449206 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 04:45:28 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:45:28 localhost python3.9[256807]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927927.7418723-323-174412696611132/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:29 localhost python3.9[256915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:29 localhost python3.9[257001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927928.7398753-323-231818309559923/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:30 localhost python3.9[257109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:30 localhost python3.9[257195]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927929.8631277-323-196137212208871/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=695dce0aab271e8dd509739e72e1a4051bed5b31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:32 localhost python3.9[257303]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:32 localhost nova_compute[229251]: 2025-12-05 09:45:32.547 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:45:32 localhost nova_compute[229251]: 
2025-12-05 09:45:32.548 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:32 localhost nova_compute[229251]: 2025-12-05 09:45:32.548 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:45:32 localhost nova_compute[229251]: 2025-12-05 09:45:32.548 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:45:32 localhost nova_compute[229251]: 2025-12-05 09:45:32.549 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:45:32 localhost nova_compute[229251]: 2025-12-05 09:45:32.551 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:32 localhost python3.9[257389]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927931.8778522-497-41921181185139/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=75518b9ca1c9a507fba8f4d8f8342e6edd3bf5ee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:33 localhost python3.9[257497]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:34 localhost python3.9[257583]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927933.2360287-542-182760505327562/.source follow=False _original_basename=haproxy.j2 checksum=e4288860049c1baef23f6e1bb6c6f91acb5432e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:45:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:45:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:45:35 localhost systemd[1]: tmp-crun.kFONQR.mount: Deactivated successfully. 
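The ansible.legacy.stat / ansible.legacy.copy pairs above are Ansible's idempotent copy at work: stat computes a SHA-1 of the destination (checksum_algorithm=sha1 in the invocation), and copy only rewrites the file when that differs from the rendered template's checksum, e.g. e4288860049c1baef23f6e1bb6c6f91acb5432e7 for dhcp_agent_haproxy_wrapper above. The logged checksum is a plain SHA-1 over the file's bytes, reproducible with the stdlib:

    import hashlib

    # Reproduce the checksum= values logged by ansible.legacy.stat
    # and ansible.legacy.copy: a plain SHA-1 over the file contents
    # (checksum_algorithm=sha1 in the invocations above).
    def ansible_sha1(path: str) -> str:
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        return h.hexdigest()

    # Should print e4288860049c1baef23f6e1bb6c6f91acb5432e7 for the
    # wrapper deployed above, assuming the file is unchanged on disk.
    print(ansible_sha1('/var/lib/neutron/dhcp_agent_haproxy_wrapper'))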
Dec 5 04:45:35 localhost python3.9[257691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:35 localhost podman[257693]: 2025-12-05 09:45:35.193705298 +0000 UTC m=+0.071385465 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 04:45:35 localhost podman[257693]: 2025-12-05 09:45:35.202719803 +0000 UTC m=+0.080400000 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 04:45:35 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:45:35 localhost podman[257692]: 2025-12-05 09:45:35.252857052 +0000 UTC m=+0.138688456 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:45:35 localhost podman[257692]: 2025-12-05 09:45:35.286179932 +0000 UTC m=+0.172011286 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 04:45:35 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
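The podman[239519] records seen earlier are an API service answering libpod REST calls over the unix socket that the podman_exporter container above mounts (CONTAINER_HOST=unix:///run/podman/podman.sock); the periodic GET /v4.9.3/libpod/containers/json requests are its polling. The same endpoint can be queried with the stdlib alone, as a sketch; the socket path is taken from the log, and root access is assumed as in this deployment.

    import http.client
    import json
    import socket

    # Query the libpod REST endpoint seen in the log over the same
    # unix socket the podman_exporter mounts. Requires access to
    # /run/podman/podman.sock (root in this deployment).
    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path: str):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')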
Dec 5 04:45:35 localhost podman[257694]: 2025-12-05 09:45:35.308086857 +0000 UTC m=+0.185959339 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Dec 5 04:45:35 localhost podman[257694]: 2025-12-05 09:45:35.321777482 +0000 UTC m=+0.199649974 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 5 04:45:35 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:45:35 localhost python3.9[257837]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927934.292192-542-249506421819455/.source follow=False _original_basename=dnsmasq.j2 checksum=efc19f376a79c40570368e9c2b979cde746f1ea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:36 localhost python3.9[257945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:36 localhost python3.9[258000]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:37 localhost python3.9[258108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:37 localhost nova_compute[229251]: 2025-12-05 09:45:37.550 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:37 localhost python3.9[258194]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764927936.8681538-629-237594474049630/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:38 localhost python3.9[258302]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:45:39 localhost python3.9[258414]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:39 localhost python3.9[258524]: ansible-ansible.legacy.stat Invoked with 
path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:40 localhost python3.9[258581]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:40 localhost python3.9[258691]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:41 localhost python3.9[258748]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56742 DF PROTO=TCP SPT=52606 DPT=9102 SEQ=2713468472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC580DF60000000001030307) Dec 5 04:45:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:45:41 localhost systemd[1]: tmp-crun.KZEuKv.mount: Deactivated successfully. 
Dec 5 04:45:41 localhost podman[258859]: 2025-12-05 09:45:41.828432509 +0000 UTC m=+0.101688395 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:45:41 localhost podman[258859]: 2025-12-05 09:45:41.890582673 +0000 UTC m=+0.163838609 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:45:41 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
Dec 5 04:45:41 localhost python3.9[258858]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:45:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56743 DF PROTO=TCP SPT=52606 DPT=9102 SEQ=2713468472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5812050000000001030307) Dec 5 04:45:42 localhost nova_compute[229251]: 2025-12-05 09:45:42.552 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:42 localhost nova_compute[229251]: 2025-12-05 09:45:42.555 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:42 localhost python3.9[258992]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45177 DF PROTO=TCP SPT=43802 DPT=9102 SEQ=1536236263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5814450000000001030307) Dec 5 04:45:43 localhost python3.9[259049]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:45:43 localhost python3.9[259159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:44 localhost python3.9[259216]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:45:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56744 DF PROTO=TCP SPT=52606 DPT=9102 SEQ=2713468472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC581A050000000001030307) 
Dec 5 04:45:44 localhost python3.9[259326]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:45:45 localhost systemd[1]: Reloading. Dec 5 04:45:45 localhost systemd-rc-local-generator[259369]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:45:45 localhost systemd-sysv-generator[259373]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:45:45 localhost podman[259328]: 2025-12-05 09:45:45.108721824 +0000 UTC m=+0.084894645 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:45 localhost podman[259328]: 2025-12-05 09:45:45.142842079 +0000 UTC m=+0.119014910 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:45 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
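The node_exporter above runs with --collector.systemd plus an allowlist, --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service, so only matching units are exported; on a host spawning transient healthcheck units every few seconds this keeps the scrape small. A quick check of which unit names pass the filter; the regex is verbatim from the log, the unit names are a mix of ones seen in this log and hypothetical misses, and node_exporter anchors its include patterns, so fullmatch is the comparable semantic.

    import re

    # The allowlist regex passed to node_exporter above, verbatim.
    UNIT_INCLUDE = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')

    units = [
        'openvswitch.service',   # appears in this log: matches
        'virtqemud.service',     # appears in this log: matches
        'rsyslog.service',       # matches
        'sshd.service',          # hypothetical miss, for illustration
    ]
    for unit in units:
        print(unit, '->', bool(UNIT_INCLUDE.fullmatch(unit)))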
Dec 5 04:45:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61695 DF PROTO=TCP SPT=36256 DPT=9102 SEQ=930635601 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC581E450000000001030307) Dec 5 04:45:46 localhost python3.9[259500]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:46 localhost python3.9[259557]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:45:47 localhost python3.9[259667]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:47 localhost nova_compute[229251]: 2025-12-05 09:45:47.557 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:47 localhost python3.9[259724]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:45:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56745 DF PROTO=TCP SPT=52606 DPT=9102 SEQ=2713468472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5829C50000000001030307) Dec 5 04:45:48 localhost python3.9[259834]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:45:48 localhost systemd[1]: Reloading. Dec 5 04:45:48 localhost systemd-rc-local-generator[259863]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:45:48 localhost systemd-sysv-generator[259866]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
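The ansible.builtin.systemd invocations with daemon_reload=True are what trigger the "Reloading." lines and the burst of generator and unit-file warnings that follow each one; enabled=True together with the 91-*.preset files keeps the edpm-container-shutdown and netns-placeholder units enabled under systemd preset policy. One detail worth noting: mode=420 in the preset-directory task above is simply decimal for 0o644, the usual sign of an unquoted octal mode in the playbook YAML. A sketch of the equivalent manual sequence, with the unit name taken from the log:

    import subprocess

    # Equivalent of the ansible.builtin.systemd call logged below
    # (daemon_reload=True, enabled=True, state=started,
    #  name=netns-placeholder).
    for cmd in (
        ['systemctl', 'daemon-reload'],
        ['systemctl', 'enable', 'netns-placeholder.service'],
        ['systemctl', 'start', 'netns-placeholder.service'],
    ):
        subprocess.run(cmd, check=True)

    # mode=420 in the preset-directory task is decimal 0o644:
    assert 420 == 0o644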
Dec 5 04:45:48 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:48 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:45:49 localhost systemd[1]: Starting Create netns directory... Dec 5 04:45:49 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 5 04:45:49 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 5 04:45:49 localhost systemd[1]: Finished Create netns directory. Dec 5 04:45:49 localhost podman[239519]: time="2025-12-05T09:45:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:45:49 localhost podman[239519]: @ - - [05/Dec/2025:09:45:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146520 "" "Go-http-client/1.1" Dec 5 04:45:49 localhost podman[239519]: @ - - [05/Dec/2025:09:45:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16785 "" "Go-http-client/1.1" Dec 5 04:45:50 localhost python3.9[259987]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:45:50 localhost python3.9[260097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:45:51 localhost python3.9[260185]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764927950.457009-1073-131517994065406/.source.json _original_basename=.97kqxn5s follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:45:52 localhost python3.9[260295]: 
ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:45:52 localhost nova_compute[229251]: 2025-12-05 09:45:52.593 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:45:52 localhost nova_compute[229251]: 2025-12-05 09:45:52.596 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:45:52 localhost nova_compute[229251]: 2025-12-05 09:45:52.596 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5037 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:45:52 localhost nova_compute[229251]: 2025-12-05 09:45:52.597 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:45:52 localhost nova_compute[229251]: 2025-12-05 09:45:52.597 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:52 localhost nova_compute[229251]: 2025-12-05 09:45:52.598 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:45:54 localhost python3.9[260603]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False Dec 5 04:45:55 localhost python3.9[260713]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 5 04:45:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:45:56 localhost systemd[1]: tmp-crun.tb8bCD.mount: Deactivated successfully. Dec 5 04:45:56 localhost podman[260823]: 2025-12-05 09:45:56.140988776 +0000 UTC m=+0.097082875 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public) Dec 5 04:45:56 localhost podman[260823]: 2025-12-05 09:45:56.153949168 +0000 UTC m=+0.110043237 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc.) Dec 5 04:45:56 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:45:56 localhost python3.9[260824]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 5 04:45:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56746 DF PROTO=TCP SPT=52606 DPT=9102 SEQ=2713468472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC584A450000000001030307) Dec 5 04:45:57 localhost openstack_network_exporter[241668]: ERROR 09:45:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:45:57 localhost openstack_network_exporter[241668]: ERROR 09:45:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:45:57 localhost openstack_network_exporter[241668]: ERROR 09:45:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:45:57 localhost openstack_network_exporter[241668]: ERROR 09:45:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:45:57 localhost openstack_network_exporter[241668]: Dec 5 04:45:57 localhost openstack_network_exporter[241668]: ERROR 09:45:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:45:57 localhost openstack_network_exporter[241668]: Dec 5 04:45:57 localhost nova_compute[229251]: 2025-12-05 09:45:57.598 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:57 localhost nova_compute[229251]: 2025-12-05 09:45:57.600 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
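[Editor's note] The kernel "DROPPING:" entries in this section come from a netfilter LOG rule on br-ex; each one records a dropped TCP SYN to port 9102 as space-separated key=value pairs (bare tokens like DF and SYN are flags). A minimal sketch of parsing such a line into a dict, assuming only the field layout visible in these entries:

    def parse_nflog(line: str) -> dict:
        """Parse a netfilter LOG line like the kernel 'DROPPING:' entries above."""
        fields = {}
        payload = line.split("DROPPING:", 1)[1]
        for token in payload.split():
            if "=" in token:
                key, _, value = token.partition("=")
                fields[key] = value       # e.g. SRC=192.168.122.10
            else:
                fields[token] = True      # bare flags: DF, SYN, ...
        return fields

    sample = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b SRC=192.168.122.10 "
              "DST=192.168.122.106 PROTO=TCP SPT=52606 DPT=9102 SYN")
    print(parse_nflog(sample)["DPT"])  # -> '9102'

Grouping by (SRC, DPT) over many such lines is a quick way to confirm these are the repeated scrape attempts to port 9102 seen again later in this log.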
Dec 5 04:45:59 localhost podman[260887]: 2025-12-05 09:45:59.183359746 +0000 UTC m=+0.070619322 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3) Dec 5 04:45:59 localhost podman[260887]: 2025-12-05 09:45:59.194127083 +0000 UTC m=+0.081386599 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125) Dec 5 04:45:59 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:46:00 localhost python3[260997]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 5 04:46:01 localhost podman[261034]: Dec 5 04:46:01 localhost podman[261034]: 2025-12-05 09:46:01.154721473 +0000 UTC m=+0.083629377 container create 5e145dcfaa7c5679e3fdc7736f105bc99ab83ec45859f131b5904d91f19217f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '30e3bd3e0fe80f1e9b72caaa3f2bf136636bfd9042387c88fbde2e6489f6d0bf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=neutron_dhcp, container_name=neutron_dhcp_agent) Dec 5 04:46:01 localhost podman[261034]: 2025-12-05 09:46:01.106806939 +0000 UTC m=+0.035714893 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 04:46:01 localhost python3[260997]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=30e3bd3e0fe80f1e9b72caaa3f2bf136636bfd9042387c88fbde2e6489f6d0bf --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '30e3bd3e0fe80f1e9b72caaa3f2bf136636bfd9042387c88fbde2e6489f6d0bf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 04:46:01 localhost python3.9[261182]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:46:02 localhost nova_compute[229251]: 2025-12-05 09:46:02.601 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:46:02 localhost nova_compute[229251]: 2025-12-05 09:46:02.603 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:46:02 localhost nova_compute[229251]: 2025-12-05 09:46:02.604 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:46:02 localhost nova_compute[229251]: 2025-12-05 09:46:02.604 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:46:02 localhost nova_compute[229251]: 2025-12-05 09:46:02.640 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:02 localhost nova_compute[229251]: 2025-12-05 09:46:02.641 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:46:02 localhost python3.9[261294]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:46:03 localhost python3.9[261349]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True 
get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:46:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:46:03.893 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:46:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:46:03.894 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:46:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:46:03.895 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:46:03 localhost python3.9[261458]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764927963.3324873-1337-34302152672385/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:46:04 localhost python3.9[261513]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:46:04 localhost systemd[1]: Reloading. Dec 5 04:46:04 localhost systemd-rc-local-generator[261539]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:46:04 localhost systemd-sysv-generator[261542]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:46:04 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:04 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:04 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:04 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
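[Editor's note] The PODMAN-CONTAINER-DEBUG entry above shows edpm_container_manage expanding the config_data dict into a `podman create` command line. A simplified illustration of that mapping — not the actual module code, and covering only the keys present in the neutron_dhcp_agent entry (labels and log options omitted):

    def podman_create_argv(name: str, cfg: dict) -> list:
        """Rebuild a `podman create` argv from a config_data dict (illustrative)."""
        argv = ["podman", "create", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        if "net" in cfg:
            argv += ["--network", cfg["net"]]
        if "pid" in cfg:
            argv += ["--pid", cfg["pid"]]
        if cfg.get("privileged"):
            argv.append("--privileged=True")   # matches the logged flag form
        if "user" in cfg:
            argv += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(cfg["image"])
        return argv

Feeding it the config_data logged above reproduces the same --env/--volume ordering as the generated command.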
Dec 5 04:46:04 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:04 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:04 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:04 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:05 localhost python3.9[261604]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:46:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:46:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:46:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:46:05 localhost systemd[1]: Reloading. Dec 5 04:46:05 localhost podman[261608]: 2025-12-05 09:46:05.554440291 +0000 UTC m=+0.075799589 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 5 04:46:05 localhost podman[261606]: 2025-12-05 09:46:05.625137345 +0000 UTC m=+0.148086842 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid 
Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 04:46:05 localhost systemd-sysv-generator[261682]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:46:05 localhost systemd-rc-local-generator[261675]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:46:05 localhost podman[261608]: 2025-12-05 09:46:05.639468619 +0000 UTC m=+0.160827907 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute) Dec 5 04:46:05 localhost podman[261606]: 2025-12-05 09:46:05.663602771 +0000 UTC m=+0.186552228 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:46:05 localhost podman[261607]: 2025-12-05 09:46:05.709527323 +0000 UTC m=+0.232948054 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true) Dec 5 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:05 localhost podman[261607]: 2025-12-05 
09:46:05.714332569 +0000 UTC m=+0.237753330 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:46:05 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 04:46:05 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:46:05 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:46:05 localhost systemd[1]: Starting neutron_dhcp_agent container... Dec 5 04:46:05 localhost systemd[1]: Started libcrun container. 
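[Editor's note] Each "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pair above is a transient systemd unit firing a one-shot check: podman runs the 'test' command from the container's 'healthcheck' config and the unit exits with its status. A small sketch of driving the same check from Python; `podman healthcheck run` exits 0 when the container reports healthy:

    import subprocess

    def container_healthy(container_id: str) -> bool:
        """Run a container's built-in healthcheck, as the timer units above do."""
        result = subprocess.run(
            ["podman", "healthcheck", "run", container_id],
            capture_output=True, text=True,
        )
        return result.returncode == 0

The health_status=healthy field in the container events above is what this exit code reflects.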
Dec 5 04:46:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eea5f6a5db254aadb293adbca46dc67e38e4deac5231f73ddc498e1b3da34ae/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 5 04:46:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eea5f6a5db254aadb293adbca46dc67e38e4deac5231f73ddc498e1b3da34ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 04:46:05 localhost podman[261701]: 2025-12-05 09:46:05.951289425 +0000 UTC m=+0.088480404 container init 5e145dcfaa7c5679e3fdc7736f105bc99ab83ec45859f131b5904d91f19217f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=neutron_dhcp, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '30e3bd3e0fe80f1e9b72caaa3f2bf136636bfd9042387c88fbde2e6489f6d0bf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20251125) Dec 5 04:46:05 localhost podman[261701]: 2025-12-05 09:46:05.960793193 +0000 UTC m=+0.097984162 container start 5e145dcfaa7c5679e3fdc7736f105bc99ab83ec45859f131b5904d91f19217f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '30e3bd3e0fe80f1e9b72caaa3f2bf136636bfd9042387c88fbde2e6489f6d0bf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 5 04:46:05 localhost podman[261701]: neutron_dhcp_agent Dec 5 04:46:05 localhost neutron_dhcp_agent[261716]: + sudo -E kolla_set_configs Dec 5 04:46:05 localhost systemd[1]: Started neutron_dhcp_agent container. Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Validating config file Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Copying service configuration files Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Writing out command to execute Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06 Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission 
for /var/lib/neutron/external/pids Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.pid.haproxy Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.conf Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: ++ cat /run_command Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: + CMD=/usr/bin/neutron-dhcp-agent Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: + ARGS= Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: + sudo kolla_copy_cacerts Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: + [[ ! -n '' ]] Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: + . kolla_extend_start Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: Running command: '/usr/bin/neutron-dhcp-agent' Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: + umask 0022 Dec 5 04:46:06 localhost neutron_dhcp_agent[261716]: + exec /usr/bin/neutron-dhcp-agent Dec 5 04:46:07 localhost python3.9[261840]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:46:07 localhost systemd[1]: Stopping neutron_dhcp_agent container... Dec 5 04:46:07 localhost systemd[1]: libpod-5e145dcfaa7c5679e3fdc7736f105bc99ab83ec45859f131b5904d91f19217f8.scope: Deactivated successfully. Dec 5 04:46:07 localhost systemd[1]: libpod-5e145dcfaa7c5679e3fdc7736f105bc99ab83ec45859f131b5904d91f19217f8.scope: Consumed 1.248s CPU time. 
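[Editor's note] The INFO:__main__ lines above trace kolla_set_configs inside the container: load /var/lib/kolla/config_files/config.json, copy configuration files into place under the COPY_ALWAYS strategy, fix permissions, and write the service command to /run_command for the entrypoint to exec (visible as `cat /run_command` in the shell trace that follows). A minimal sketch of that flow, assuming a Kolla-style config.json with 'command' and 'config_files' (source/dest) keys:

    import json
    import shutil

    def set_configs(config_path="/var/lib/kolla/config_files/config.json"):
        """Simplified view of the kolla_set_configs steps logged above."""
        with open(config_path) as f:
            config = json.load(f)
        for item in config.get("config_files", []):
            shutil.copy(item["source"], item["dest"])  # COPY_ALWAYS: overwrite each start
        with open("/run_command", "w") as f:
            f.write(config["command"])                 # later: exec "$(cat /run_command)"

The real tool also validates the config file and applies per-file owner/permission settings, which is what the long run of "Setting permission for ..." lines records.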
Dec 5 04:46:07 localhost podman[261844]: 2025-12-05 09:46:07.2320453 +0000 UTC m=+0.083373650 container died 5e145dcfaa7c5679e3fdc7736f105bc99ab83ec45859f131b5904d91f19217f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '30e3bd3e0fe80f1e9b72caaa3f2bf136636bfd9042387c88fbde2e6489f6d0bf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=neutron_dhcp, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:46:07 localhost podman[261844]: 2025-12-05 09:46:07.279315383 +0000 UTC m=+0.130643723 container cleanup 5e145dcfaa7c5679e3fdc7736f105bc99ab83ec45859f131b5904d91f19217f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '30e3bd3e0fe80f1e9b72caaa3f2bf136636bfd9042387c88fbde2e6489f6d0bf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=neutron_dhcp_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 04:46:07 localhost podman[261844]: neutron_dhcp_agent Dec 5 04:46:07 localhost podman[261882]: error opening file 
`/run/crun/5e145dcfaa7c5679e3fdc7736f105bc99ab83ec45859f131b5904d91f19217f8/status`: No such file or directory Dec 5 04:46:07 localhost podman[261870]: 2025-12-05 09:46:07.38341072 +0000 UTC m=+0.067448967 container cleanup 5e145dcfaa7c5679e3fdc7736f105bc99ab83ec45859f131b5904d91f19217f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, io.buildah.version=1.41.3, tcib_managed=true, container_name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '30e3bd3e0fe80f1e9b72caaa3f2bf136636bfd9042387c88fbde2e6489f6d0bf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:46:07 localhost podman[261870]: neutron_dhcp_agent Dec 5 04:46:07 localhost systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully. Dec 5 04:46:07 localhost systemd[1]: Stopped neutron_dhcp_agent container. Dec 5 04:46:07 localhost systemd[1]: Starting neutron_dhcp_agent container... Dec 5 04:46:07 localhost systemd[1]: Started libcrun container. 
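[Editor's note] The stop/start cycle above is the ansible systemd handler restarting edpm_neutron_dhcp_agent after its definition changed; the EDPM_CONFIG_HASH value baked into the container environment ties the running instance to a digest of its generated configuration (cf. the ansible-container_config_hash task earlier in this section). A plausible sketch of deriving such a digest — the directory and hashing scheme here are illustrative assumptions, not taken from the actual module:

    import hashlib
    from pathlib import Path

    def config_hash(config_dir="/var/lib/config-data/ansible-generated/neutron-dhcp-agent"):
        """Digest all generated config files so any change yields a new hash
        and (via a changed container definition) triggers a restart."""
        digest = hashlib.sha256()
        for path in sorted(Path(config_dir).rglob("*")):
            if path.is_file():
                digest.update(str(path).encode())   # include the filename
                digest.update(path.read_bytes())    # and its contents
        return digest.hexdigest()

The crun "error opening file .../status" message during cleanup is a benign race: the container had already exited and its runtime state file was gone by the time cleanup looked for it.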
Dec 5 04:46:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eea5f6a5db254aadb293adbca46dc67e38e4deac5231f73ddc498e1b3da34ae/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 5 04:46:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eea5f6a5db254aadb293adbca46dc67e38e4deac5231f73ddc498e1b3da34ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 04:46:07 localhost podman[261884]: 2025-12-05 09:46:07.516450093 +0000 UTC m=+0.107569432 container init 5e145dcfaa7c5679e3fdc7736f105bc99ab83ec45859f131b5904d91f19217f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=neutron_dhcp, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '30e3bd3e0fe80f1e9b72caaa3f2bf136636bfd9042387c88fbde2e6489f6d0bf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}) Dec 5 04:46:07 localhost podman[261884]: 2025-12-05 09:46:07.52523547 +0000 UTC m=+0.116354829 container start 5e145dcfaa7c5679e3fdc7736f105bc99ab83ec45859f131b5904d91f19217f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '30e3bd3e0fe80f1e9b72caaa3f2bf136636bfd9042387c88fbde2e6489f6d0bf'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3) Dec 5 04:46:07 localhost podman[261884]: neutron_dhcp_agent Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: + sudo -E kolla_set_configs Dec 5 04:46:07 localhost systemd[1]: Started neutron_dhcp_agent container. Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Validating config file Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Copying service configuration files Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Writing out command to execute Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06 Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting 
permission for /var/lib/neutron/external/pids Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.pid.haproxy Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.conf Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: ++ cat /run_command Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: + CMD=/usr/bin/neutron-dhcp-agent Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: + ARGS= Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: + sudo kolla_copy_cacerts Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: + [[ ! -n '' ]] Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: + . kolla_extend_start Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: Running command: '/usr/bin/neutron-dhcp-agent' Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: + umask 0022 Dec 5 04:46:07 localhost neutron_dhcp_agent[261898]: + exec /usr/bin/neutron-dhcp-agent Dec 5 04:46:07 localhost nova_compute[229251]: 2025-12-05 09:46:07.642 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:46:08 localhost systemd[1]: session-58.scope: Deactivated successfully. Dec 5 04:46:08 localhost systemd[1]: session-58.scope: Consumed 33.694s CPU time. Dec 5 04:46:08 localhost systemd-logind[760]: Session 58 logged out. Waiting for processes to exit. Dec 5 04:46:08 localhost systemd-logind[760]: Removed session 58. Dec 5 04:46:08 localhost neutron_dhcp_agent[261898]: 2025-12-05 09:46:08.793 261902 INFO neutron.common.config [-] Logging enabled!#033[00m Dec 5 04:46:08 localhost neutron_dhcp_agent[261898]: 2025-12-05 09:46:08.793 261902 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43#033[00m Dec 5 04:46:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 09:46:09.182 261902 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Dec 5 04:46:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 09:46:09.488 261902 INFO neutron.agent.dhcp.agent [None req-330608cc-d7cc-4972-91a9-e76e9cab12f2 - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 5 04:46:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 09:46:09.488 261902 INFO neutron.agent.dhcp.agent [None req-330608cc-d7cc-4972-91a9-e76e9cab12f2 - - - - - -] Synchronizing state complete#033[00m Dec 5 04:46:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 09:46:09.612 261902 INFO neutron.agent.dhcp.agent [None req-330608cc-d7cc-4972-91a9-e76e9cab12f2 - - - - - -] DHCP agent started#033[00m Dec 5 04:46:10 localhost nova_compute[229251]: 2025-12-05 09:46:10.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:46:10 localhost nova_compute[229251]: 2025-12-05 09:46:10.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:46:10 localhost ovn_metadata_agent[158815]: 2025-12-05 
Dec 5 04:46:10 localhost ovn_metadata_agent[158815]: 2025-12-05 09:46:10.446 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 04:46:10 localhost ovn_metadata_agent[158815]: 2025-12-05 09:46:10.447 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 5 04:46:10 localhost ovn_metadata_agent[158815]: 2025-12-05 09:46:10.448 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 5 04:46:10 localhost nova_compute[229251]: 2025-12-05 09:46:10.451 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:46:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59226 DF PROTO=TCP SPT=45742 DPT=9102 SEQ=2060925954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5883270000000001030307)
Dec 5 04:46:11 localhost nova_compute[229251]: 2025-12-05 09:46:11.271 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:46:11 localhost nova_compute[229251]: 2025-12-05 09:46:11.271 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 5 04:46:11 localhost nova_compute[229251]: 2025-12-05 09:46:11.271 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 5 04:46:11 localhost nova_compute[229251]: 2025-12-05 09:46:11.337 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 5 04:46:11 localhost nova_compute[229251]: 2025-12-05 09:46:11.337 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 5 04:46:11 localhost nova_compute[229251]: 2025-12-05 09:46:11.338 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 5 04:46:11 localhost nova_compute[229251]: 2025-12-05 09:46:11.338 229255 DEBUG nova.objects.instance [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 5 04:46:11 localhost nova_compute[229251]: 2025-12-05 09:46:11.810 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 5 04:46:11 localhost nova_compute[229251]: 2025-12-05 09:46:11.831 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 5 04:46:11 localhost nova_compute[229251]: 2025-12-05 09:46:11.832 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 5 04:46:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
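[annotation] The `Matched UPDATE: SbGlobalUpdateEvent` record above shows ovsdbapp's event dispatch: the metadata agent registers a RowEvent for the SB_Global table, and any committed change matching the event's (events, table, conditions) tuple is handed to its run() method with the new row and the changed columns. A minimal sketch of such an event class; the subclass name mirrors the log, but the run() body and the agent hook are illustrative assumptions:

    # Sketch of an ovsdbapp row event like the SbGlobalUpdateEvent matched above.
    # The run() body and agent.sync_sb_cfg() are illustrative, not the agent's code.
    from ovsdbapp.backend.ovs_idl import event

    class SbGlobalUpdateEvent(event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            # Match any UPDATE to the (single-row) SB_Global table.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # 'row' is the new row; 'old' carries only the changed columns
            # (here old=SB_Global(nb_cfg=3) while row has nb_cfg=4).
            self.agent.sync_sb_cfg(row.nb_cfg)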
Dec 5 04:46:12 localhost podman[262074]: 2025-12-05 09:46:12.07436351 +0000 UTC m=+0.079230133 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 5 04:46:12 localhost podman[262074]: 2025-12-05 09:46:12.142065623 +0000 UTC m=+0.146932246 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 04:46:12 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
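[annotation] The podman[262074] pair above is a periodic healthcheck cycle: a transient systemd unit runs the container's configured test (`/openstack/healthcheck`), podman emits a health_status=healthy event, the exec session dies, and the unit deactivates. Roughly the same check can be driven by hand; a hedged sketch, assuming only the container name from the log:

    # Minimal sketch: trigger a podman healthcheck and read the recorded status.
    # Container name comes from the log; error handling is deliberately minimal.
    import json
    import subprocess

    def check(container="ovn_controller"):
        # 'podman healthcheck run' exits 0 when the configured test passes.
        run = subprocess.run(["podman", "healthcheck", "run", container])
        inspect = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health.Status}}", container],
            capture_output=True, text=True, check=True,
        )
        return run.returncode == 0, json.loads(inspect.stdout)

    if __name__ == "__main__":
        ok, status = check()
        print(ok, status)   # e.g. True 'healthy'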
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.270 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.272 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.272 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.272 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:46:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59227 DF PROTO=TCP SPT=45742 DPT=9102 SEQ=2060925954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5887450000000001030307)
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.292 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.293 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.293 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.293 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.293 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
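[annotation] The Acquiring/acquired/released triplet around "compute_resources" (with waited/held timings) is oslo.concurrency's named-lock primitive, which the resource tracker uses to serialize its audit against concurrent allocation changes. A minimal sketch of the same primitive; the lock name is from the log, the critical-section body is illustrative:

    # Minimal sketch of the oslo.concurrency named-lock pattern logged above.
    # The body of the critical section is an illustrative stand-in.
    from oslo_concurrency import lockutils

    def clean_compute_node_cache(stale_nodes):
        # Entering/leaving this block emits the same Acquiring/acquired/
        # released DEBUG lines seen in the log.
        with lockutils.lock('compute_resources'):
            for node in stale_nodes:
                print(f"dropping cached compute node {node}")

    clean_compute_node_cache([])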
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.670 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.779 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.837 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 5 04:46:12 localhost nova_compute[229251]: 2025-12-05 09:46:12.838 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 5 04:46:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:12.942 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 5 04:46:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:12.944 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 5 04:46:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:12.955 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:46:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:12.956 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
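[annotation] The `Running cmd (subprocess): ceph df ...` / `returned: 0 in 0.486s` pair is oslo.concurrency's processutils wrapper, which logs the command, runs it, and logs the exit code and duration; the libvirt driver then parses the JSON to size storage during the resource audit. A hedged sketch of the same call, with the --id/--conf values taken from the log line and the field selection an assumption:

    # Sketch of the 'ceph df' probe logged above, via oslo.concurrency.
    # Which JSON fields the driver actually consumes is version-specific;
    # total_avail_bytes here is just one plausible example.
    import json
    from oslo_concurrency import processutils

    def rbd_cluster_avail():
        out, _err = processutils.execute(
            'ceph', 'df', '--format=json',
            '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
        stats = json.loads(out)
        return stats['stats']['total_avail_bytes']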
Dec 5 04:46:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:12.958 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '09426aea-5f48-4062-9345-3bcdcc654224', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:46:12.944591', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3c53aee8-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.118887162, 'message_signature': '5d6bd63e04db13be21ce4a134bf272415aa5b0fc6af87c98c53e29759b85ec78'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:46:12.944591', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3c53c0fe-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.118887162, 'message_signature': '5a67ec2c040ee4867f86c8148b0b9be5738542f2c9152a1a83ff3e5894a70061'}]}, 'timestamp': '2025-12-05 09:46:12.956760', '_unique_id': 'adf0d81f362448d19bc53572a3601a15'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
    Traceback (most recent call last):
      File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
        yield
      File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
        return retry_over_time(
      File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
        return fun(*args, **kwargs)
      File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
        self._connection = self._establish_connection()
      File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
        conn = self.transport.establish_connection()
      File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
        conn.connect()
      File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
        self.transport.connect()
      File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
        self._connect(self.host, self.port, self.connect_timeout)
      File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
        self.sock.connect(sa)
    ConnectionRefusedError: [Errno 111] Connection refused

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
        self.transport._send_notification(target, ctxt, message,
      File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
        self._driver.send_notification(target, ctxt, message, version,
      File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
        return self._send(target, ctxt, message,
      File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
        with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
      File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
        return rpc_common.ConnectionContext(self._connection_pool,
      File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
        self.connection = connection_pool.get(retry=retry)
      File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
        return self.create(retry=retry)
      File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
        return self.connection_cls(self.conf, self.url, purpose, retry=retry)
      File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
        self.ensure_connection()
      File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
        self.connection.ensure_connection(
      File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
        self._ensure_connection(*args, **kwargs)
      File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
        return retry_over_time(
      File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
        self.gen.throw(typ, value, traceback)
      File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
        raise ConnectionError(str(exc)) from exc
    kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:46:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:12.960 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 5 04:46:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:12.992 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:46:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:12.993 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
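[annotation] Every pollster in this capture hits the same root cause: the TCP connect to the RabbitMQ broker is refused, kombu wraps the ConnectionRefusedError as OperationalError at connection.py:450, and oslo.messaging abandons the notification. The failure is reproducible outside ceilometer with plain kombu; the broker URL below is a placeholder, not this deployment's transport_url:

    # Reproduces the failure mode in the traceback above with plain kombu.
    # amqp://guest:guest@localhost:5672// is a placeholder; substitute the
    # deployment's real transport_url to test against the actual broker.
    import kombu

    try:
        with kombu.Connection('amqp://guest:guest@localhost:5672//') as conn:
            conn.ensure_connection(max_retries=1)   # same call as impl_rabbit.py:957
    except kombu.exceptions.OperationalError as exc:
        print(f"broker unreachable: {exc}")         # [Errno 111] Connection refused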
Dec 5 04:46:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:12.995 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '23ba7c8e-753e-4847-9978-66f37115040d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:46:12.961177', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3c594c2c-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.135620689, 'message_signature': '66332943a75d1651fe5e030f2602de6bfa903c5559ed7af2173edf9a69a0b282'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:46:12.961177', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3c596568-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.135620689, 'message_signature': '3d75c417208fc6fe110d226d38523c5aeb7c2d539ffffee692aaaf97a9388306'}]}, 'timestamp': '2025-12-05 09:46:12.993836', '_unique_id': '416b08359a304a3a8af54a696906c115'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
    [traceback identical to the kombu/oslo.messaging traceback above]
Dec 5 04:46:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:12.997 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.000 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
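[annotation] The sending side of these failures is oslo.messaging's Notifier: ceilometer publishes each batch of samples as a telemetry.polling notification with SAMPLE priority on the notifications topic, which is exactly the envelope (publisher_id, event_type, priority, payload) visible in the Payload dumps. A hedged sketch of that publish path, with a placeholder transport URL and an empty payload:

    # Sketch of how a sample notification like the ones above is emitted.
    # The transport URL is a placeholder; the envelope fields mirror the log.
    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    notifier = oslo_messaging.Notifier(
        transport, publisher_id='ceilometer.polling',
        driver='messagingv2', topics=['notifications'])

    # Notifier.sample() maps to the SAMPLE priority seen in the payloads.
    notifier.sample({}, 'telemetry.polling', {'samples': []})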
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.002 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5c47d8d8-1696-447f-9beb-6c637d277f9f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:46:12.997155', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3c5a8fe2-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.171544788, 'message_signature': '305cdbfeb367affa85f4f4e4eaf7a0c5b8f55ca82995ad9cc1f5a166faff595f'}]}, 'timestamp': '2025-12-05 09:46:13.001497', '_unique_id': '4f7b919cb14a4819b03d59838f4663ee'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
    [traceback identical to the kombu/oslo.messaging traceback above]
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.003 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.003 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
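[annotation] The network.* pollsters read per-vNIC counters straight from libvirt: virDomainInterfaceStats returns an 8-tuple of rx/tx bytes, packets, errors and drops for a tap device, which ceilometer maps onto counters such as network.outgoing.packets (129 above) and network.incoming.packets.error (0 above). A minimal sketch using the instance and tap names from the log:

    # Sketch of the libvirt call behind the network.* samples above.
    # Instance name and tap device are taken from the log records.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000002')
    rx_bytes, rx_pkts, rx_errs, rx_drop, tx_bytes, tx_pkts, tx_errs, tx_drop = \
        dom.interfaceStats('tapc2f95d81-23')
    print(f"network.outgoing.packets={tx_pkts} "
          f"network.incoming.packets.error={rx_errs}")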
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.005 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8bc0f8d1-17b8-49ef-bca8-f38f082e80fa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:46:13.003849', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3c5b0058-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.171544788, 'message_signature': 'bec83331d37cccc534fa7355210a27f79de0f37c6749c2f9e0d17e4c70ab0cbe'}]}, 'timestamp': '2025-12-05 09:46:13.004358', '_unique_id': 'cbc99eb9ec08492d8b22ada2dc0113bf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
    [traceback identical to the kombu/oslo.messaging traceback above; the capture ends partway through it]
ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.005 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.005 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.005 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.005 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.005 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.006 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.006 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1216962709 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.007 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 209749905 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.008 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
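The chained traceback above is kombu's standard wrapping: the socket-level ConnectionRefusedError raised inside amqp is re-raised by _reraise_as_library_errors as kombu.exceptions.OperationalError. A minimal reproduction sketch, assuming a broker URL on the default port with nothing listening:

```python
# Minimal sketch (broker URL is an assumption): with no listener on 5672,
# ensure_connection() runs the retry_over_time() loop seen in the traceback,
# then re-raises the socket error as kombu.exceptions.OperationalError.
from kombu import Connection
from kombu.exceptions import OperationalError

conn = Connection("amqp://guest:guest@localhost:5672//")  # assumed URL
try:
    conn.ensure_connection(max_retries=1)  # bound the retry loop
except OperationalError as exc:
    print(f"broker unreachable: {exc}")    # [Errno 111] Connection refused
finally:
    conn.release()
```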
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.008 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '27c4b63e-06d7-40ad-b1d6-ad36513a8b57', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1216962709, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:46:13.006847', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3c5b74de-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.135620689, 'message_signature': '98e9e14628b4c574f7d8dabc61409ecf4253d2ec0b32225dbc140110696e9a41'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 209749905, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:46:13.006847', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3c5b876c-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.135620689, 'message_signature': '8639f589922b734d5bd7dd6fe34cd09a73e05ebad321e14308cce910f762ef25'}]}, 'timestamp': '2025-12-05 09:46:13.007757', '_unique_id': '18eb7c39f3614776968cf0a7c6a2751f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
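While the broker is down these samples exist only in the journal. A small recovery sketch, assuming a payload survives on one line: the Payload= value is a Python repr (single quotes, None), so ast.literal_eval rather than json.loads parses it.

```python
# Hedged recovery sketch: pull a dropped sample back out of a journal line.
# The logged dict is Python repr, so use ast.literal_eval, not json.
import ast

line = ("... Could not send notification to notifications. "
        "Payload={'event_type': 'telemetry.polling', 'payload': {'samples': "
        "[{'counter_name': 'disk.device.read.latency', 'counter_volume': 1216962709}]}}: "
        "kombu.exceptions.OperationalError: [Errno 111] Connection refused")
repr_text = line.split("Payload=", 1)[1].rsplit(": kombu.exceptions", 1)[0]
payload = ast.literal_eval(repr_text)
for sample in payload["payload"]["samples"]:
    print(sample["counter_name"], sample["counter_volume"])
```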
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.010 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.010 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1e586e16-73df-4796-bc24-8e9ca1d3b48a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:46:13.010460', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3c5c03c2-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.171544788, 'message_signature': 'ed5e0bdc9ca588e25af8ec6aab8c9392a394acbc813f39b0b1980254deed3b0a'}]}, 'timestamp': '2025-12-05 09:46:13.010988', '_unique_id': '35deeee835004aa6ac9ff3f3d2f56949'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
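Every entry in this excerpt shares the same two-part shape: a syslog prefix (timestamp, host, unit, PID) followed by the service's own oslo.log line (app timestamp, process, level, logger, message). A parsing sketch for that shape; the field names are my own:

```python
# Hedged parsing sketch for these journal lines; group names are my own.
import re

ENTRY = re.compile(
    r"^(?P<syslog_ts>\w{3}\s+\d+ [\d:]+) (?P<host>\S+) "
    r"(?P<unit>[\w.-]+)\[(?P<pid>\d+)\]: "
    r"(?P<app_ts>\d{4}-\d{2}-\d{2} [\d:.]+) (?P<proc>\d+) (?P<level>\w+) "
    r"(?P<logger>\S+) (?P<rest>.*)$")

sample = ("Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: "
          "2025-12-05 09:46:13.013 12 INFO ceilometer.polling.manager "
          "[-] Polling pollster cpu in the context of pollsters")
m = ENTRY.match(sample)
print(m["unit"], m["level"], m["logger"])
# -> ceilometer_agent_compute INFO ceilometer.polling.manager
```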
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:46:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost 
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.013 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 60650000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
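Every occurrence bottoms out in self.sock.connect(sa) failing with errno 111, i.e. nothing is listening on the broker port. A quick connectivity probe sketch; host and port are assumptions (5672 is RabbitMQ's default):

```python
# Hedged probe (host/port assumed). errno 111 (ECONNREFUSED) here matches
# the failure inside amqp/transport.py in the traceback above.
import errno
import socket

try:
    socket.create_connection(("localhost", 5672), timeout=3).close()
    print("broker port reachable")
except OSError as exc:
    if exc.errno == errno.ECONNREFUSED:
        print("connection refused: no listener on the broker port")
    else:
        raise
```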
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.031 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'df16cdfb-5c82-446d-a7f4-368c33b84167', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 60650000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:46:13.013375', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '3c5ede08-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.203309431, 'message_signature': '040a6a29e66c453403c671d3e3168364d372bb4fc43ee881afa46e22f6ecc79d'}]}, 'timestamp': '2025-12-05 09:46:13.029939', '_unique_id': 'ff9025f8bfce4e90874a0b90594df10b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
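The cpu sample above is a cumulative counter in nanoseconds (counter_volume 60650000000, about 60.65 s of CPU time). Turning it into utilization needs two polls; a worked arithmetic sketch with an assumed earlier reading:

```python
# Hedged arithmetic sketch: CPU utilization from two cumulative 'cpu'
# samples. curr is taken from the payload above; prev is an assumed
# earlier poll roughly one 60 s interval before.
prev = {"monotonic_time": 11028.2, "counter_volume": 59_450_000_000}  # assumed
curr = {"monotonic_time": 11088.203309431, "counter_volume": 60_650_000_000}
vcpus = 1  # from resource_metadata above

elapsed_ns = (curr["monotonic_time"] - prev["monotonic_time"]) * 1e9
cpu_util_pct = 100.0 * (curr["counter_volume"] - prev["counter_volume"]) / (elapsed_ns * vcpus)
print(f"cpu_util ~ {cpu_util_pct:.1f}%")  # ~2.0% for these numbers
```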
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.032 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.032 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.033 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
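For context on the producing side: these entries come from oslo.messaging's Notifier driven by ceilometer at SAMPLE priority, which is where event_type='telemetry.polling' and priority='SAMPLE' in the payloads originate. A hedged sketch of that path; the transport URL is a stand-in:

```python
# Hedged sketch of the producer side; the broker URL is an assumption.
# Notifier.sample() is the call that reaches notify/messaging.py:78 in the
# tracebacks above and fails while the broker is unreachable.
from oslo_config import cfg
import oslo_messaging

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://guest:guest@localhost:5672/")  # assumed URL
notifier = oslo_messaging.Notifier(
    transport,
    publisher_id="ceilometer.polling",
    driver="messaging",
    topics=["notifications"])
notifier.sample({}, "telemetry.polling", {"samples": []})  # minimal payload
```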
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.034 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '95094be9-7b5f-4302-895a-950ff4bcb350', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:46:13.032786', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3c5f6b52-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.135620689, 'message_signature': 'db0c79090538a0b2000c8631465a0d7a3b6efb190863395195a57d19f39be79b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:46:13.032786', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3c5f7f20-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.135620689, 'message_signature': 'eab8b94b88d852d2b384f6fdda6166aa429bdf3b4cf7a289aa3ff650cd2a76c2'}]}, 'timestamp': '2025-12-05 09:46:13.033773', '_unique_id': 'd1aea8226b6b4e78a93c400ae60b41ac'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.036 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.036 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:46:13 localhost nova_compute[229251]: 2025-12-05 09:46:13.036 229255 WARNING nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 5 04:46:13 localhost nova_compute[229251]: 2025-12-05 09:46:13.038 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12113MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 5 04:46:13 localhost nova_compute[229251]: 2025-12-05 09:46:13.038 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 04:46:13 localhost nova_compute[229251]: 2025-12-05 09:46:13.038 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Payload={'message_id': '3b62ab38-6f53-4198-aeba-890bdaab0378', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:46:13.036190', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3c5ff158-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.171544788, 'message_signature': '3cf6ccb0b7bbd072f34e8667d45e9f6883de352dfb0c97049202372c634451f9'}]}, 'timestamp': '2025-12-05 09:46:13.036726', '_unique_id': '3455a1d772424ea79a6b0159c6a2851f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.037 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.039 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.039 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3effac94-ee29-4ea8-8fd5-f3242fc4fa3a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:46:13.039296', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3c606a84-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.171544788, 'message_signature': '72828c6d131be4df9ebb059649b84f59745a3ba74fa42af3134b20d5703e23e4'}]}, 'timestamp': '2025-12-05 09:46:13.039835', '_unique_id': '506ad0c8e7b54dd48d51f1065e2500da'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56747 DF PROTO=TCP SPT=52606 DPT=9102 SEQ=2713468472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC588A450000000001030307) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", 
line 877, in _connection_factory Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR 
oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:46:13.040 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.040 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.042 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.042 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.042 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.042 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 8783 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '77c7d912-90d7-4265-8318-d21a8ef1d8fb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8783, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:46:13.042687', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3c60ee1e-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.171544788, 'message_signature': '5da039bd539a5320f76acc6346a9fa2c510a33512c60fc0e18eddffd9f29d1df'}]}, 'timestamp': '2025-12-05 09:46:13.043206', '_unique_id': '4e4d9e52fdd74dd39ac4e85424b5bb68'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:46:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.044 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.045 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.046 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4aac720b-d7a6-447f-af6e-b8a5038d7f5d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:46:13.046027', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3c617276-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.171544788, 'message_signature': '3ec77ac79e24ed115e71378be1ea17fc0cddb34c52433250edfbbd0f68ff0b86'}]}, 'timestamp': '2025-12-05 09:46:13.046606', '_unique_id': '58751db858784abd8ad8c06b71a48cd6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.047 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.048 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.049 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 161823320 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.049 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 27606506 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9adf823f-0dea-4c59-a11a-1733b0fdb193', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 161823320, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:46:13.049016', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3c61e904-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.135620689, 'message_signature': 'ac98d9a2e483d0c7d8d9f2e80cf3eda8cf8f3c4094a66bb43ab861c698998b11'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27606506, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:46:13.049016', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3c61fb6a-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.135620689, 'message_signature': 'fcea082c9d221eed3387f030674c3db903e44c2c69d9215a8a9ff606858c618a'}]}, 'timestamp': '2025-12-05 09:46:13.050064', '_unique_id': 'c45de066c8c047d589819ff85e054053'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.051 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.052 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.052 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.053 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
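The innermost frame of the traceback above is a bare TCP connect (amqp/transport.py, self.sock.connect(sa)), so the [Errno 111] can be checked independently of the whole messaging stack. A minimal sketch, assuming the broker is expected on localhost:5672 (the RabbitMQ default; the actual transport URL is not shown in this log):

    import socket

    # Mirror of amqp.transport.Transport._connect: a refused TCP connect
    # on Linux surfaces as ConnectionRefusedError with errno 111.
    HOST, PORT = "127.0.0.1", 5672  # assumed broker address, for illustration

    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            print("broker port is accepting TCP connections")
    except ConnectionRefusedError as exc:
        print(f"refused, nothing listening: {exc}")  # matches [Errno 111]
    except OSError as exc:
        print(f"other network failure: {exc}")  # e.g. timeout or unreachable

A refusal (rather than a timeout) means the host is reachable but no process is bound to the port, which points at the broker service itself being down rather than at the network.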
Payload={'message_id': '36a571df-a0f0-41ad-a392-45908fa5dc2b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:46:13.052549', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3c626ed8-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.118887162, 'message_signature': '224f6375fc0752eaa8f09259707d65d6999cf51cc97466f90d64e6d3340ed002'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:46:13.052549', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3c62845e-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.118887162, 'message_signature': '756a7c5b02381285f75b6739a3a94d6cbadfc5ca4b891029388012a5b912c399'}]}, 'timestamp': '2025-12-05 09:46:13.053576', '_unique_id': '815d725919334ac7bfe1887cd2c1e0a1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.054 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.055 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.056 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.056 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
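The middle frames of these tracebacks (kombu/connection.py: ensure_connection -> _ensure_connection -> retry_over_time) are where the retry loop lives before the low-level error is re-raised as OperationalError. The same path can be exercised directly with kombu; the AMQP URL below is an assumption for illustration only:

    from kombu import Connection
    from kombu.exceptions import OperationalError

    # ensure_connection walks the same frames seen in the traceback:
    # _ensure_connection -> retry_over_time -> _establish_connection.
    conn = Connection("amqp://guest:guest@127.0.0.1:5672//", connect_timeout=2)
    try:
        conn.ensure_connection(max_retries=1)  # give up after one retry
        print("connected")
    except OperationalError as exc:
        # kombu wraps the underlying ConnectionRefusedError, which is why
        # the log shows OperationalError "caused by" errno 111.
        print(f"broker unreachable: {exc}")
    finally:
        conn.release()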
Payload={'message_id': '7071f388-c80f-46b8-bec7-7253b0be3923', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:46:13.056003', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3c62f556-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.118887162, 'message_signature': '877a15a7ecb990053fcb06c36126d614b996590e87ff1a2ced431cbb8eb44a1a'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:46:13.056003', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3c63080c-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.118887162, 'message_signature': '97b4d27c1c01f50927337d85a48eb9df197f46208ec255c9c0180ec021ef5c8e'}]}, 'timestamp': '2025-12-05 09:46:13.056935', '_unique_id': '3d0a448bb86f45f885209061d63253ab'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.057 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.059 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.059 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 82 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
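The dropped samples mix counter types: disk.device.usage and disk.device.allocation are gauges (instantaneous byte values), while disk.device.write.latency and network.incoming.packets are cumulative and only become meaningful as deltas between polls. A small sketch of the delta-to-rate arithmetic, using the monotonic_time field each sample carries; the numbers are illustrative, not taken from this log:

    def per_second_rate(prev_volume, prev_monotonic, cur_volume, cur_monotonic):
        """Turn two cumulative samples into a per-second rate."""
        dt = cur_monotonic - prev_monotonic
        if dt <= 0:
            raise ValueError("samples must be strictly ordered in time")
        return (cur_volume - prev_volume) / dt

    # Two hypothetical polls of network.incoming.packets, 300 s apart:
    print(per_second_rate(70, 10788.17, 82, 11088.17))  # -> 0.04 packets/s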
Payload={'message_id': 'b1bbe817-24bb-49d8-a46d-ac57a563587e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 82, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:46:13.059796', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3c6389d0-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.171544788, 'message_signature': '1f21a292c103ab85ee06138603349788de8a15039f6aedd2d2d10944cf3a78c3'}]}, 'timestamp': '2025-12-05 09:46:13.060322', '_unique_id': 'c0cd26b63c5348268f9f6d0ed74fc986'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:46:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.061 12 ERROR oslo_messaging.notify.messaging Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.062 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.062 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.062 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.062 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 73908224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.063 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
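Each failed record embeds the dropped sample batch after "Payload=" as a Python repr, not as JSON (note the single quotes and None), so recovering the data from the log calls for ast.literal_eval rather than json.loads. A minimal sketch over a shortened, hypothetical line:

    import ast
    import re

    line = (
        "2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging [-] "
        "Could not send notification to notifications. "
        "Payload={'event_type': 'telemetry.polling', "
        "'payload': {'samples': []}}: "
        "kombu.exceptions.OperationalError: [Errno 111] Connection refused"
    )

    # Greedy match: the repr runs up to the final "}" before the exception.
    m = re.search(r"Payload=(\{.*\}): kombu\.exceptions", line)
    if m:
        payload = ast.literal_eval(m.group(1))  # safe: parses literals only
        print(payload["event_type"], len(payload["payload"]["samples"]))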
Payload={'message_id': '373a3758-ba72-4ca8-96fb-1e31e77cf022', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73908224, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:46:13.062935', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3c640478-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.135620689, 'message_signature': 'ee683a51c2423c5374240f73e50f34bca31ad8ee563a5dfaad29f257dbd89313'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:46:13.062935', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3c6417ec-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.135620689, 'message_signature': '72f87c773e029b9b1f84a5f92ad90671d90c7c18ccd0b5eefe182d971e1a2f51'}]}, 'timestamp': '2025-12-05 09:46:13.063896', '_unique_id': '20c91b70f71c4df3b620488e2e2e8462'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.064 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.066 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.066 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 52.30859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
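A note on the traceback above, read bottom-up: the notifier's connection pool constructs an impl_rabbit Connection, which calls kombu's ensure_connection(); retry_over_time() exhausts its retries against a broker that is not listening, and _reraise_as_library_errors() converts the raw ConnectionRefusedError ([Errno 111]) into kombu.exceptions.OperationalError. A minimal sketch that reproduces the same wrapping, assuming a hypothetical broker URL with nothing listening on port 5672:

    # Minimal sketch: reproduce kombu wrapping ECONNREFUSED as OperationalError.
    # The amqp:// URL is hypothetical; assumes no broker is listening there.
    import kombu
    from kombu.exceptions import OperationalError

    conn = kombu.Connection("amqp://guest:guest@127.0.0.1:5672//")
    try:
        # Same entry point as in the traceback:
        # impl_rabbit -> kombu.connection.Connection.ensure_connection().
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        # kombu re-raises the socket-level ConnectionRefusedError as a
        # library-level OperationalError, exactly as logged above.
        print("broker unreachable:", exc)

In the log the refused connection repeats for every sample batch, which points at the RabbitMQ endpoint being down or unreachable rather than at the agent itself.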
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.067 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ac9951b6-35b7-4325-ab74-6b5b53d7ead8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.30859375, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:46:13.066291', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '3c64877c-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.203309431, 'message_signature': '26189e07ed3d0666f32491fa205d293f3fb10338d089a56e8ecc6bd09a621562'}]}, 'timestamp': '2025-12-05 09:46:13.066764', '_unique_id': '71a8dae8b9c345f6b2549b79a6d77c51'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[traceback identical to the kombu/oslo.messaging "Connection refused" traceback above; omitted]
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.069 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.070 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 525 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.070 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
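Although delivery fails, each ERROR record carries the complete sample batch, so the measurements themselves are recoverable from the journal. The Payload={...} blob is a Python repr (single quotes, None), so once the blob is cut out of the line, ast.literal_eval can parse it safely. A sketch:

    # Sketch: recover samples from a logged "Payload={...}" blob, assuming the
    # {...} text has already been isolated from the surrounding log line.
    import ast

    def summarize(payload_text: str) -> None:
        payload = ast.literal_eval(payload_text)  # repr-style dict, safe to eval literally
        for s in payload["payload"]["samples"]:
            print(f'{s["resource_id"]}: {s["counter_name"]} ({s["counter_type"]}) '
                  f'= {s["counter_volume"]} {s["counter_unit"]}')

Run against the memory.usage payload above, this would print one line: 96a47a1c-57c7-4bb1-aecc-33db976db8c7: memory.usage (gauge) = 52.30859375 MB.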
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.071 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c531abc1-2702-486c-93c3-42709133e783', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 525, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:46:13.070046', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3c65166a-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.135620689, 'message_signature': 'f5e51b5935c8c69841ab2cb17e06d21bae1df62872b4fbae1fd744e724366c30'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:46:13.070046', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3c6522f4-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.135620689, 'message_signature': '1b0494c52e54568bcd617a02d573765040eb82035e688b2eb13068fb03a64f5f'}]}, 'timestamp': '2025-12-05 09:46:13.070647', '_unique_id': '5b68a2e96c0344edbfcf97b6f2c855c3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[traceback identical to the kombu/oslo.messaging "Connection refused" traceback above; omitted]
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.071 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.072 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
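Note the resource_id convention in the disk samples above: per-device pollsters suffix the instance UUID with the device name (…-vda, …-vdb), while instance-level meters such as memory.usage use the bare UUID. A hypothetical helper for splitting the disk form (the network vNIC samples below use a longer instance-name + tap-device pattern and would need separate handling):

    # Hypothetical log-analysis helper: split a per-device resource_id like
    # '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda' into (instance_uuid, device).
    def split_device_resource_id(resource_id: str):
        uuid_len = 36  # canonical UUID string length
        if len(resource_id) > uuid_len and resource_id[uuid_len] == "-":
            return resource_id[:uuid_len], resource_id[uuid_len + 1:]
        return resource_id, None  # plain instance-level sample (e.g. memory.usage)

    print(split_device_resource_id("96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb"))
    # -> ('96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'vdb')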
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.073 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b12986cb-2b5a-4b04-8082-f8be256088b3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:46:13.072036', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3c656610-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.171544788, 'message_signature': 'f2d22b54b5d5b16dbe8da8bf313dbcc577daa80b32654ee21ca1285ba4af6f2f'}]}, 'timestamp': '2025-12-05 09:46:13.072476', '_unique_id': '3c1ed32cb30e44a79852d91ac73e1d3b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[traceback identical to the kombu/oslo.messaging "Connection refused" traceback above; omitted]
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.073 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.074 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
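The failed batches so far span all three ceilometer counter types: cumulative (disk.device.write.*, ever-growing totals), gauge (memory.usage, an instantaneous reading), and delta (network.outgoing.bytes.delta, the change since the previous poll). Consumers typically difference consecutive cumulative samples to obtain a rate; a sketch using the logged disk.device.write.bytes value and an illustrative later reading:

    # Sketch: derive a per-second rate from two cumulative samples of the same
    # resource. The first value is from the log; the second is illustrative.
    def rate(prev_volume, prev_ts, cur_volume, cur_ts):
        dt = cur_ts - prev_ts
        return (cur_volume - prev_volume) / dt if dt > 0 else 0.0

    # 73908224 B at monotonic_time 11088.13, plus a hypothetical reading 300 s later:
    print(rate(73908224, 11088.135620689, 74432512, 11388.135620689), "B/s")

Gauge and delta samples need no such differencing; their counter_volume is already directly usable.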
Dec 5 04:46:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:46:13.074 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '40652daf-396e-40b8-96a2-70990a51014e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:46:13.074072', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3c65b3b8-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11088.171544788, 'message_signature': '8d53a24aa844d1e5110ecfdf6820190e01ea5caa33504c3a05a417cabcc59f8c'}]}, 'timestamp': '2025-12-05 09:46:13.074402', '_unique_id': '493ea4a1d43343f59fa7764dfd570da4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[traceback identical to the kombu/oslo.messaging "Connection refused" traceback above; omitted]
Dec 5 04:46:13 localhost nova_compute[229251]: 2025-12-05 09:46:13.093 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 5 04:46:13 localhost nova_compute[229251]: 2025-12-05 09:46:13.093 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 5 04:46:13 localhost nova_compute[229251]: 2025-12-05 09:46:13.093 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 5 04:46:13 localhost nova_compute[229251]: 2025-12-05 09:46:13.127 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 5 04:46:13 localhost nova_compute[229251]: 2025-12-05 09:46:13.556 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 04:46:13 localhost nova_compute[229251]: 2025-12-05 09:46:13.562 229255 DEBUG nova.compute.provider_tree [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
nova_compute[229251]: 2025-12-05 09:46:13.599 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:46:13 localhost nova_compute[229251]: 2025-12-05 09:46:13.602 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:46:13 localhost nova_compute[229251]: 2025-12-05 09:46:13.602 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:46:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59228 DF PROTO=TCP SPT=45742 DPT=9102 SEQ=2060925954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC588F450000000001030307) Dec 5 04:46:14 localhost nova_compute[229251]: 2025-12-05 09:46:14.597 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:46:14 localhost nova_compute[229251]: 2025-12-05 09:46:14.598 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:46:14 localhost nova_compute[229251]: 2025-12-05 09:46:14.598 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:46:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45178 DF PROTO=TCP SPT=43802 DPT=9102 SEQ=1536236263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5892450000000001030307) Dec 5 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
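[editor's note] The ceilometer traceback above (09:46:13.074) bottoms out in kombu's retry machinery: _ensure_connection gives up, _reraise_as_library_errors converts the socket-level ECONNREFUSED into kombu.exceptions.OperationalError, and the notification is lost. A minimal sketch of the same failure mode, assuming a broker on the default RabbitMQ port (the actual transport_url is not visible in this log):

    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Hypothetical URL -- stands in for the transport_url, which the log does not show.
    conn = Connection("amqp://guest:guest@127.0.0.1:5672//", connect_timeout=2)
    try:
        # ensure_connection() retries, then re-raises the underlying
        # ConnectionRefusedError as OperationalError ("[Errno 111] Connection refused").
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print(f"broker unreachable: {exc}")
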
Dec 5 04:46:16 localhost podman[262144]: 2025-12-05 09:46:16.210360564 +0000 UTC m=+0.100885010 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:46:16 localhost podman[262144]: 2025-12-05 09:46:16.248940844 +0000 UTC m=+0.139465280 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:46:16 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
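[editor's note] The `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` subprocess at 09:46:13.127 above is how the resource tracker sizes its RBD-backed disk inventory. A rough equivalent via oslo's processutils, with the JSON key names assumed from typical `ceph df` output rather than taken from this log:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)
    # Cluster-wide totals; per-pool figures live under stats["pools"].
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
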
Dec 5 04:46:17 localhost nova_compute[229251]: 2025-12-05 09:46:17.674 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:46:17 localhost nova_compute[229251]: 2025-12-05 09:46:17.676 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:46:17 localhost nova_compute[229251]: 2025-12-05 09:46:17.676 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:46:17 localhost nova_compute[229251]: 2025-12-05 09:46:17.677 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:46:17 localhost nova_compute[229251]: 2025-12-05 09:46:17.711 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:17 localhost nova_compute[229251]: 2025-12-05 09:46:17.712 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:46:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59229 DF PROTO=TCP SPT=45742 DPT=9102 SEQ=2060925954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC589F050000000001030307) Dec 5 04:46:19 localhost podman[239519]: time="2025-12-05T09:46:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:46:19 localhost podman[239519]: @ - - [05/Dec/2025:09:46:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1" Dec 5 04:46:19 localhost podman[239519]: @ - - [05/Dec/2025:09:46:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17223 "" "Go-http-client/1.1" Dec 5 04:46:21 localhost sshd[262168]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:46:22 localhost systemd-logind[760]: New session 59 of user zuul. Dec 5 04:46:22 localhost systemd[1]: Started Session 59 of User zuul. 
Dec 5 04:46:22 localhost nova_compute[229251]: 2025-12-05 09:46:22.712 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:46:22 localhost nova_compute[229251]: 2025-12-05 09:46:22.716 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:46:22 localhost nova_compute[229251]: 2025-12-05 09:46:22.716 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:46:22 localhost nova_compute[229251]: 2025-12-05 09:46:22.716 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:46:22 localhost nova_compute[229251]: 2025-12-05 09:46:22.743 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:22 localhost nova_compute[229251]: 2025-12-05 09:46:22.744 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:46:22 localhost python3.9[262279]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 5 04:46:24 localhost python3.9[262391]: ansible-ansible.builtin.service_facts Invoked Dec 5 04:46:24 localhost network[262408]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 5 04:46:24 localhost network[262409]: 'network-scripts' will be removed from distribution in near future. Dec 5 04:46:24 localhost network[262410]: It is advised to switch to 'NetworkManager' instead for network management. Dec 5 04:46:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:46:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
Dec 5 04:46:26 localhost podman[262486]: 2025-12-05 09:46:26.316322036 +0000 UTC m=+0.091500744 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 5 04:46:26 localhost podman[262486]: 2025-12-05 09:46:26.332684843 +0000 UTC m=+0.107863501 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7) Dec 5 04:46:26 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
Dec 5 04:46:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59230 DF PROTO=TCP SPT=45742 DPT=9102 SEQ=2060925954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC58C0450000000001030307) Dec 5 04:46:27 localhost openstack_network_exporter[241668]: ERROR 09:46:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:46:27 localhost openstack_network_exporter[241668]: ERROR 09:46:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:46:27 localhost openstack_network_exporter[241668]: ERROR 09:46:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:46:27 localhost openstack_network_exporter[241668]: ERROR 09:46:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:46:27 localhost openstack_network_exporter[241668]: Dec 5 04:46:27 localhost openstack_network_exporter[241668]: ERROR 09:46:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:46:27 localhost openstack_network_exporter[241668]: Dec 5 04:46:27 localhost nova_compute[229251]: 2025-12-05 09:46:27.744 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:27 localhost nova_compute[229251]: 2025-12-05 09:46:27.747 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:28 localhost python3.9[262665]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 5 04:46:29 localhost python3.9[262728]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 5 04:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
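[editor's note] The kernel `DROPPING:` entries recurring through this window are netfilter LOG output (the prefix comes from the logging rule) for TCP SYNs from 192.168.122.10 to port 9102 on br-ex; the unanswered peer keeps retransmitting, which is why the same SEQ=2060925954 reappears under fresh IP IDs (59228-59231). A quick way to pull the flow tuple out of such a line, assuming the field layout stays as shown (the leading spaces in the pattern keep SRC/DST/PROTO from matching inside MACSRC/MACDST/MACPROTO):

    import re

    PATTERN = re.compile(
        r"DROPPING: IN=(?P<iface>\S+)"
        r".* SRC=(?P<src>[\d.]+) DST=(?P<dst>[\d.]+)"
        r".* PROTO=(?P<proto>\S+) SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)")

    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 "
            "SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 "
            "PREC=0x00 TTL=62 ID=59228 DF PROTO=TCP SPT=45742 DPT=9102")

    print(PATTERN.search(line).groupdict())
    # {'iface': 'br-ex', 'src': '192.168.122.10', 'dst': '192.168.122.106',
    #  'proto': 'TCP', 'spt': '45742', 'dpt': '9102'}
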
Dec 5 04:46:30 localhost podman[262731]: 2025-12-05 09:46:30.190092478 +0000 UTC m=+0.075748998 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd) Dec 5 04:46:30 localhost podman[262731]: 2025-12-05 09:46:30.201128752 +0000 UTC m=+0.086785282 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, 
org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true) Dec 5 04:46:30 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:46:32 localhost nova_compute[229251]: 2025-12-05 09:46:32.748 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:46:32 localhost nova_compute[229251]: 2025-12-05 09:46:32.749 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:32 localhost nova_compute[229251]: 2025-12-05 09:46:32.749 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:46:32 localhost nova_compute[229251]: 2025-12-05 09:46:32.750 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:46:32 localhost nova_compute[229251]: 2025-12-05 09:46:32.750 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:46:32 localhost nova_compute[229251]: 2025-12-05 09:46:32.752 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:33 localhost python3.9[262859]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:46:34 localhost python3.9[262969]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:46:34 localhost python3.9[263080]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:46:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:46:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:46:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
Dec 5 04:46:35 localhost podman[263194]: 2025-12-05 09:46:35.967845673 +0000 UTC m=+0.083867394 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 5 04:46:36 localhost podman[263194]: 2025-12-05 09:46:36.002282287 +0000 UTC m=+0.118304068 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:46:36 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:46:36 localhost podman[263193]: 2025-12-05 09:46:36.020783047 +0000 UTC m=+0.140201562 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 5 04:46:36 localhost podman[263193]: 2025-12-05 09:46:36.054413208 +0000 UTC m=+0.173831723 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 5 04:46:36 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:46:36 localhost podman[263192]: 2025-12-05 09:46:36.069109363 +0000 UTC m=+0.191197659 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 04:46:36 localhost python3.9[263200]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:46:36 localhost podman[263192]: 2025-12-05 09:46:36.101954238 +0000 UTC m=+0.224042574 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:46:36 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
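[editor's note] The lineinfile task at 09:46:36 above reduces to keeping a single line present in /etc/iscsi/iscsid.conf, with the CHAP digest list ordered strongest-first:

    node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5
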
Dec 5 04:46:37 localhost python3.9[263361]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:46:37 localhost nova_compute[229251]: 2025-12-05 09:46:37.752 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:38 localhost python3.9[263473]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:46:39 localhost python3.9[263585]: ansible-ansible.builtin.service_facts Invoked Dec 5 04:46:39 localhost network[263602]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 5 04:46:39 localhost network[263603]: 'network-scripts' will be removed from distribution in near future. Dec 5 04:46:39 localhost network[263604]: It is advised to switch to 'NetworkManager' instead for network management. Dec 5 04:46:40 localhost ovn_controller[153000]: 2025-12-05T09:46:40Z|00050|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory Dec 5 04:46:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43348 DF PROTO=TCP SPT=58364 DPT=9102 SEQ=1351587478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC58F8570000000001030307) Dec 5 04:46:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:46:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43349 DF PROTO=TCP SPT=58364 DPT=9102 SEQ=1351587478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC58FC450000000001030307) Dec 5 04:46:42 localhost nova_compute[229251]: 2025-12-05 09:46:42.754 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:42 localhost nova_compute[229251]: 2025-12-05 09:46:42.757 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. 
Dec 5 04:46:43 localhost podman[263672]: 2025-12-05 09:46:43.165071806 +0000 UTC m=+0.083109201 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 5 04:46:43 localhost podman[263672]: 2025-12-05 09:46:43.199551512 +0000 UTC m=+0.117588887 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:46:43 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
Dec 5 04:46:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59231 DF PROTO=TCP SPT=45742 DPT=9102 SEQ=2060925954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5900450000000001030307) Dec 5 04:46:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43350 DF PROTO=TCP SPT=58364 DPT=9102 SEQ=1351587478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5904460000000001030307) Dec 5 04:46:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56748 DF PROTO=TCP SPT=52606 DPT=9102 SEQ=2713468472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5908460000000001030307) Dec 5 04:46:45 localhost python3.9[263861]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 5 04:46:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:46:46 localhost podman[263972]: 2025-12-05 09:46:46.765201841 +0000 UTC m=+0.085123343 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 04:46:46 localhost podman[263972]: 2025-12-05 09:46:46.777581136 +0000 UTC m=+0.097502658 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:46:46 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:46:46 localhost python3.9[263971]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Dec 5 04:46:47 localhost python3.9[264105]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:46:47 localhost nova_compute[229251]: 2025-12-05 09:46:47.756 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:48 localhost python3.9[264162]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:46:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43351 DF PROTO=TCP SPT=58364 DPT=9102 SEQ=1351587478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5914050000000001030307) Dec 5 04:46:48 localhost python3.9[264272]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:46:49 localhost python3.9[264382]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None 
owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:46:49 localhost podman[239519]: time="2025-12-05T09:46:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:46:49 localhost podman[239519]: @ - - [05/Dec/2025:09:46:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1" Dec 5 04:46:49 localhost podman[239519]: @ - - [05/Dec/2025:09:46:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17226 "" "Go-http-client/1.1" Dec 5 04:46:50 localhost python3.9[264492]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:46:51 localhost python3.9[264604]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:46:51 localhost python3.9[264716]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:46:52 localhost nova_compute[229251]: 2025-12-05 09:46:52.758 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:52 localhost python3.9[264827]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:46:53 localhost python3.9[264937]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:46:54 localhost python3.9[265047]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:46:54 localhost python3.9[265157]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:46:55 localhost python3.9[265267]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 
backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:46:55 localhost python3.9[265377]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:46:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43352 DF PROTO=TCP SPT=58364 DPT=9102 SEQ=1351587478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5934460000000001030307) Dec 5 04:46:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:46:56 localhost podman[265490]: 2025-12-05 09:46:56.76697611 +0000 UTC m=+0.085018094 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, version=9.6, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly.) Dec 5 04:46:56 localhost podman[265490]: 2025-12-05 09:46:56.806905653 +0000 UTC m=+0.124947617 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 5 04:46:56 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
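[editor's note] Taken together, the multipath.conf tasks between 09:46:52 and 09:46:55 above (the blacklist rewrite plus four lineinfile edits anchored on ^defaults) drive /etc/multipath.conf toward the following sketch, assuming the file already carries `defaults {` and `blacklist {` stanzas for those patterns to hit:

    defaults {
        find_multipaths yes
        recheck_wwid yes
        skip_kpartx yes
        user_friendly_names no
    }

    blacklist {
    }

The replace task strips a catch-all `devnode ".*"` entry from the blacklist so multipathd can actually claim devices, and user_friendly_names is forced off so maps keep their WWID-based names.
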
Dec 5 04:46:56 localhost python3.9[265489]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:46:57 localhost openstack_network_exporter[241668]: ERROR 09:46:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:46:57 localhost openstack_network_exporter[241668]: ERROR 09:46:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:46:57 localhost openstack_network_exporter[241668]: ERROR 09:46:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:46:57 localhost openstack_network_exporter[241668]: ERROR 09:46:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:46:57 localhost openstack_network_exporter[241668]: Dec 5 04:46:57 localhost openstack_network_exporter[241668]: ERROR 09:46:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:46:57 localhost openstack_network_exporter[241668]: Dec 5 04:46:57 localhost python3.9[265619]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:46:57 localhost nova_compute[229251]: 2025-12-05 09:46:57.760 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:46:57 localhost python3.9[265676]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:46:58 localhost python3.9[265786]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:46:58 localhost python3.9[265843]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:46:59 localhost python3.9[265953]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:47:00 localhost python3.9[266063]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:47:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:47:00 localhost podman[266121]: 2025-12-05 09:47:00.699622286 +0000 UTC m=+0.078307371 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 04:47:00 localhost podman[266121]: 2025-12-05 09:47:00.709434394 +0000 UTC m=+0.088119489 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:47:00 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:47:00 localhost python3.9[266120]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:47:01 localhost python3.9[266250]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:47:01 localhost python3.9[266307]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:47:02 localhost nova_compute[229251]: 2025-12-05 09:47:02.762 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:47:03 localhost python3.9[266417]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:47:03 localhost systemd[1]: Reloading. Dec 5 04:47:03 localhost systemd-sysv-generator[266442]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:47:03 localhost systemd-rc-local-generator[266437]: /etc/rc.d/rc.local is not marked executable, skipping. 
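[editor's note] One detail in the file-module invocations above: the system-preset directory task logged mode=420 while the unit-file tasks logged mode=0644. Both are the same permission bits; an unquoted "mode: 0644" in Ansible YAML is read by the YAML loader as an octal integer literal (decimal 420), whereas the quoted form stays a string. A quick check:

    # 0644 (octal) and 420 (decimal) are the same value, which is why the
    # journal shows mode=420 for one task and mode=0644 for the others.
    assert 0o644 == 420
    assert int("644", 8) == 420
    print(oct(420))   # -> 0o644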
Dec 5 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:47:03.894 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:47:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:47:03.894 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:47:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:47:03.896 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:47:04 localhost python3.9[266565]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:47:04 localhost nova_compute[229251]: 2025-12-05 09:47:04.270 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:47:04 localhost nova_compute[229251]: 2025-12-05 09:47:04.270 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 5 04:47:04 localhost python3.9[266622]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:47:05 localhost python3.9[266732]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:47:05 localhost nova_compute[229251]: 2025-12-05 09:47:05.270 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:47:05 localhost nova_compute[229251]: 2025-12-05 09:47:05.270 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 5 04:47:05 localhost nova_compute[229251]: 2025-12-05 09:47:05.296 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 5 04:47:05 localhost nova_compute[229251]: 2025-12-05 09:47:05.297 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:47:05 localhost python3.9[266789]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:47:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:47:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:47:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:47:06 localhost systemd[1]: tmp-crun.CLQ1hk.mount: Deactivated successfully. 
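[editor's note] The "Started /usr/bin/podman healthcheck run <id>" lines are transient systemd units; each is matched later by an "<id>.service: Deactivated successfully." entry once the check finishes. A small sketch, assuming exactly that pairing of message shapes (function name illustrative), to flag any run that never completed:

    import re

    START_RE = re.compile(r"Started /usr/bin/podman healthcheck run ([0-9a-f]{64})")
    DONE_RE  = re.compile(r"([0-9a-f]{64})\.service: Deactivated successfully")

    def unfinished_healthchecks(lines):
        """Return container IDs whose healthcheck unit never deactivated."""
        pending = set()
        for line in lines:
            if (m := START_RE.search(line)):
                pending.add(m.group(1))
            elif (m := DONE_RE.search(line)):
                pending.discard(m.group(1))
        return pending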
Dec 5 04:47:06 localhost podman[266900]: 2025-12-05 09:47:06.22165213 +0000 UTC m=+0.094111910 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 5 04:47:06 localhost podman[266899]: 2025-12-05 09:47:06.269410311 +0000 UTC m=+0.142391688 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true) Dec 5 04:47:06 localhost podman[266899]: 2025-12-05 09:47:06.279026044 +0000 UTC m=+0.152007421 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true) Dec 5 04:47:06 localhost podman[266900]: 2025-12-05 09:47:06.286331795 +0000 UTC m=+0.158791585 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 5 04:47:06 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:47:06 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:47:06 localhost podman[266902]: 2025-12-05 09:47:06.364302155 +0000 UTC m=+0.229435583 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 04:47:06 localhost podman[266902]: 2025-12-05 09:47:06.373799623 +0000 UTC m=+0.238933061 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:47:06 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 04:47:06 localhost python3.9[266901]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:47:06 localhost systemd[1]: Reloading. Dec 5 04:47:06 localhost systemd-sysv-generator[266988]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
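[editor's note] Every health_status/exec_died event above embeds the container's full launch configuration as a config_data={...} label printed with Python literal syntax (single quotes, True/False). Under that assumption it can be recovered with ast.literal_eval once the balanced braces are sliced out; a minimal sketch:

    import ast

    def extract_config_data(event: str) -> dict:
        """Slice the balanced config_data={...} literal out of a podman journal event."""
        start = event.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(event[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(event[start : i + 1])
        raise ValueError("unbalanced config_data literal")

    # e.g. extract_config_data(line)["volumes"] returns the bind mounts exactly
    # as logged for multipathd or podman_exporter above.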
Dec 5 04:47:06 localhost systemd-rc-local-generator[266984]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:47:06 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:06 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:06 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:06 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:47:06 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:06 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:06 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:06 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:47:06 localhost systemd[1]: Starting Create netns directory... Dec 5 04:47:06 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 5 04:47:06 localhost systemd[1]: Finished Create netns directory. Dec 5 04:47:07 localhost python3.9[267112]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:47:07 localhost nova_compute[229251]: 2025-12-05 09:47:07.763 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:47:08 localhost python3.9[267222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:47:08 localhost python3.9[267279]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:47:09 localhost python3.9[267389]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None 
seuser=None serole=None selevel=None attributes=None Dec 5 04:47:10 localhost nova_compute[229251]: 2025-12-05 09:47:10.308 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:47:10 localhost python3.9[267499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:47:10 localhost python3.9[267556]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.7rg7y1s2 recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:47:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10412 DF PROTO=TCP SPT=43850 DPT=9102 SEQ=1026315529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC596D860000000001030307) Dec 5 04:47:11 localhost nova_compute[229251]: 2025-12-05 09:47:11.266 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:47:11 localhost nova_compute[229251]: 2025-12-05 09:47:11.288 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:47:11 localhost nova_compute[229251]: 2025-12-05 09:47:11.289 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 04:47:11 localhost nova_compute[229251]: 2025-12-05 09:47:11.289 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 04:47:11 localhost nova_compute[229251]: 2025-12-05 09:47:11.369 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:47:11 localhost nova_compute[229251]: 2025-12-05 09:47:11.369 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:47:11 localhost nova_compute[229251]: 2025-12-05 09:47:11.370 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] 
[instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:47:11 localhost nova_compute[229251]: 2025-12-05 09:47:11.370 229255 DEBUG nova.objects.instance [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:47:11 localhost nova_compute[229251]: 2025-12-05 09:47:11.798 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:47:11 localhost nova_compute[229251]: 2025-12-05 09:47:11.820 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:47:11 localhost nova_compute[229251]: 2025-12-05 09:47:11.821 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:47:12 localhost python3.9[267666]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:47:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10413 DF PROTO=TCP SPT=43850 DPT=9102 SEQ=1026315529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5971850000000001030307) Dec 5 04:47:12 localhost nova_compute[229251]: 2025-12-05 09:47:12.270 229255 
DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:47:12 localhost nova_compute[229251]: 2025-12-05 09:47:12.271 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:47:12 localhost nova_compute[229251]: 2025-12-05 09:47:12.271 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:47:12 localhost nova_compute[229251]: 2025-12-05 09:47:12.272 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 04:47:12 localhost nova_compute[229251]: 2025-12-05 09:47:12.765 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:47:12 localhost nova_compute[229251]: 2025-12-05 09:47:12.929 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:47:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43353 DF PROTO=TCP SPT=58364 DPT=9102 SEQ=1351587478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5974470000000001030307) Dec 5 04:47:12 localhost nova_compute[229251]: 2025-12-05 09:47:12.947 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Triggering sync for uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 5 04:47:12 localhost nova_compute[229251]: 2025-12-05 09:47:12.948 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:47:12 localhost nova_compute[229251]: 2025-12-05 09:47:12.948 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:47:12 localhost nova_compute[229251]: 2025-12-05 09:47:12.991 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" ::
held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:47:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:47:13 localhost podman[267937]: 2025-12-05 09:47:13.558764091 +0000 UTC m=+0.088260742 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Dec 5 04:47:13 localhost podman[267937]: 2025-12-05 09:47:13.596815908 +0000 UTC m=+0.126312559 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 04:47:13 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
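[editor's note] The recurring kernel "DROPPING:" entries are netfilter log output on br-ex: the same TCP SYN to DPT=9102 keeps reappearing with an identical SPT/SEQ and an incrementing IP ID, i.e. a peer at 192.168.122.10 is retransmitting a connection attempt that this host's ruleset drops. The fields are mostly KEY=value pairs, so a rough parser is enough for triage (value-less flags like DF and SYN are simply ignored in this sketch):

    import re

    FIELD_RE = re.compile(r"([A-Z]+)=(\S*)")

    def parse_dropping(line: str) -> dict:
        """Split a kernel 'DROPPING:' netfilter line into its KEY=value fields."""
        return dict(FIELD_RE.findall(line))

    # fields = parse_dropping(line)
    # (fields["SRC"], fields["DST"], fields["DPT"])
    #   -> ('192.168.122.10', '192.168.122.106', '9102')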
Dec 5 04:47:14 localhost nova_compute[229251]: 2025-12-05 09:47:14.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:47:14 localhost nova_compute[229251]: 2025-12-05 09:47:14.270 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:47:14 localhost nova_compute[229251]: 2025-12-05 09:47:14.289 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:47:14 localhost nova_compute[229251]: 2025-12-05 09:47:14.290 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:47:14 localhost nova_compute[229251]: 2025-12-05 09:47:14.290 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:47:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10414 DF PROTO=TCP SPT=43850 DPT=9102 SEQ=1026315529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5979860000000001030307) Dec 5 04:47:14 localhost nova_compute[229251]: 2025-12-05 09:47:14.291 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:47:14 localhost nova_compute[229251]: 2025-12-05 09:47:14.292 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:47:14 localhost python3.9[268054]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Dec 5 04:47:14 localhost nova_compute[229251]: 2025-12-05 09:47:14.790 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:47:14 localhost nova_compute[229251]: 2025-12-05 09:47:14.854 229255 DEBUG nova.virt.libvirt.driver [None 
req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:47:14 localhost nova_compute[229251]: 2025-12-05 09:47:14.855 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.074 229255 WARNING nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.075 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12120MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.075 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 
5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.075 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:47:15 localhost python3.9[268186]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.418 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.419 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.420 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.493 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Refreshing inventories for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 5 04:47:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59232 DF PROTO=TCP SPT=45742 DPT=9102 SEQ=2060925954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC597E450000000001030307)
Dec 5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.567 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Updating ProviderTree inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.568 229255 DEBUG nova.compute.provider_tree [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.586 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Refreshing aggregate associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.615 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Refreshing trait associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, traits: COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_ABM,HW_CPU_X86_F16C,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 5 04:47:15 localhost nova_compute[229251]: 2025-12-05 09:47:15.656 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 04:47:16 localhost nova_compute[229251]: 2025-12-05 09:47:16.053 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 04:47:16 localhost nova_compute[229251]: 2025-12-05 09:47:16.058 229255 DEBUG nova.compute.provider_tree [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 5 04:47:16 localhost nova_compute[229251]: 2025-12-05 09:47:16.071 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 5 04:47:16 localhost nova_compute[229251]: 2025-12-05 09:47:16.073 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 5 04:47:16 localhost nova_compute[229251]: 2025-12-05 09:47:16.073 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:47:16 localhost python3.9[268316]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 5 04:47:17 localhost nova_compute[229251]: 2025-12-05 09:47:17.070 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:47:17 localhost nova_compute[229251]: 2025-12-05 09:47:17.071 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:47:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 04:47:17 localhost podman[268363]: 2025-12-05 09:47:17.194303089 +0000 UTC m=+0.081489857 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 5 04:47:17 localhost podman[268363]: 2025-12-05 09:47:17.208639604 +0000 UTC m=+0.095826342 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 04:47:17 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 04:47:17 localhost nova_compute[229251]: 2025-12-05 09:47:17.768 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:47:17 localhost nova_compute[229251]: 2025-12-05 09:47:17.770 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:47:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10415 DF PROTO=TCP SPT=43850 DPT=9102 SEQ=1026315529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5989450000000001030307)
Dec 5 04:47:19 localhost podman[239519]: time="2025-12-05T09:47:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 04:47:19 localhost podman[239519]: @ - - [05/Dec/2025:09:47:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1"
Dec 5 04:47:19 localhost podman[239519]: @ - - [05/Dec/2025:09:47:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17229 "" "Go-http-client/1.1"
Dec 5 04:47:20 localhost python3[268478]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 5 04:47:20 localhost python3[268478]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7",#012 "Digest": "sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:11:02.031267563Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 249482216,#012 "VirtualSize": 249482216,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:8c448567789503f6c5be645a12473dfc27734872532d528b6ee764c214f9f2f3"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:05.672474685Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:06.113425253Z",#012
Dec 5 04:47:21 localhost python3.9[268647]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 5 04:47:22 localhost python3.9[268759]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:22 localhost python3.9[268814]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 5 04:47:22 localhost nova_compute[229251]: 2025-12-05 09:47:22.770 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:47:23 localhost python3.9[268923]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764928042.8222327-1364-58113920130161/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:24 localhost python3.9[268978]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:47:25 localhost python3.9[269088]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 5 04:47:26 localhost python3.9[269198]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10416 DF PROTO=TCP SPT=43850 DPT=9102 SEQ=1026315529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC59AA450000000001030307)
Dec 5 04:47:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 04:47:27 localhost openstack_network_exporter[241668]: ERROR 09:47:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 04:47:27 localhost openstack_network_exporter[241668]: ERROR 09:47:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:47:27 localhost openstack_network_exporter[241668]: ERROR 09:47:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:47:27 localhost openstack_network_exporter[241668]: ERROR 09:47:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 04:47:27 localhost openstack_network_exporter[241668]:
Dec 5 04:47:27 localhost openstack_network_exporter[241668]: ERROR 09:47:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 04:47:27 localhost openstack_network_exporter[241668]:
Dec 5 04:47:27 localhost podman[269216]: 2025-12-05 09:47:27.217157805 +0000 UTC m=+0.100858446 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, release=1755695350, distribution-scope=public)
Dec 5 04:47:27 localhost podman[269216]: 2025-12-05 09:47:27.233616145 +0000 UTC m=+0.117316786 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, release=1755695350, architecture=x86_64, config_id=edpm, vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 5 04:47:27 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 04:47:27 localhost python3.9[269327]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 5 04:47:27 localhost nova_compute[229251]: 2025-12-05 09:47:27.772 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:47:28 localhost python3.9[269437]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 5 04:47:29 localhost python3.9[269547]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 5 04:47:29 localhost python3.9[269604]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:30 localhost python3.9[269714]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:47:31 localhost systemd[1]: tmp-crun.OkJp5c.mount: Deactivated successfully.
Dec 5 04:47:31 localhost podman[269824]: 2025-12-05 09:47:31.205294786 +0000 UTC m=+0.086816429 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true)
Dec 5 04:47:31 localhost podman[269824]: 2025-12-05 09:47:31.216894089 +0000 UTC m=+0.098415702 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 5 04:47:31 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 04:47:31 localhost python3.9[269830]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 5 04:47:32 localhost nova_compute[229251]: 2025-12-05 09:47:32.775 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:47:35 localhost python3.9[269952]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 5 04:47:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 04:47:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:47:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 04:47:36 localhost podman[270066]: 2025-12-05 09:47:36.966589081 +0000 UTC m=+0.085306022 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 04:47:36 localhost podman[270066]: 2025-12-05 09:47:36.97907429 +0000 UTC m=+0.097791261 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 04:47:36 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 04:47:37 localhost podman[270069]: 2025-12-05 09:47:37.034716361 +0000 UTC m=+0.146643116 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 04:47:37 localhost podman[270069]: 2025-12-05 09:47:37.045557321 +0000 UTC m=+0.157484156 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm)
Dec 5 04:47:37 localhost python3.9[270067]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:37 localhost podman[270068]: 2025-12-05 09:47:37.079335367 +0000 UTC m=+0.191342565 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 5 04:47:37 localhost podman[270068]: 2025-12-05 09:47:37.085527255 +0000 UTC m=+0.197534483 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 5 04:47:37 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 04:47:37 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 04:47:37 localhost nova_compute[229251]: 2025-12-05 09:47:37.776 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 04:47:37 localhost nova_compute[229251]: 2025-12-05 09:47:37.777 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:47:37 localhost nova_compute[229251]: 2025-12-05 09:47:37.777 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 5 04:47:37 localhost nova_compute[229251]: 2025-12-05 09:47:37.778 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:47:37 localhost nova_compute[229251]: 2025-12-05 09:47:37.778 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 5 04:47:38 localhost python3.9[270234]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 5 04:47:38 localhost systemd[1]: Reloading.
Dec 5 04:47:38 localhost systemd-rc-local-generator[270257]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:47:38 localhost systemd-sysv-generator[270264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:47:38 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:47:38 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 5 04:47:38 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:47:38 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:47:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:47:38 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 5 04:47:38 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:47:38 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:47:38 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:47:39 localhost python3.9[270378]: ansible-ansible.builtin.service_facts Invoked
Dec 5 04:47:39 localhost network[270395]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 5 04:47:39 localhost network[270396]: 'network-scripts' will be removed from distribution in near future.
Dec 5 04:47:39 localhost network[270397]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 5 04:47:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:47:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51410 DF PROTO=TCP SPT=34124 DPT=9102 SEQ=3644027079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC59E2B60000000001030307)
Dec 5 04:47:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51411 DF PROTO=TCP SPT=34124 DPT=9102 SEQ=3644027079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC59E6C50000000001030307)
Dec 5 04:47:42 localhost nova_compute[229251]: 2025-12-05 09:47:42.779 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:47:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10417 DF PROTO=TCP SPT=43850 DPT=9102 SEQ=1026315529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC59EA460000000001030307)
Dec 5 04:47:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 04:47:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51412 DF PROTO=TCP SPT=34124 DPT=9102 SEQ=3644027079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC59EEC50000000001030307)
Dec 5 04:47:44 localhost systemd[1]: tmp-crun.UNdplN.mount: Deactivated successfully.
Dec 5 04:47:44 localhost podman[270539]: 2025-12-05 09:47:44.375094782 +0000 UTC m=+0.249976878 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 5 04:47:44 localhost podman[270539]: 2025-12-05 09:47:44.447380308 +0000 UTC m=+0.322262394 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 04:47:44 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 04:47:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43354 DF PROTO=TCP SPT=58364 DPT=9102 SEQ=1351587478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC59F2450000000001030307)
Dec 5 04:47:45 localhost python3.9[270656]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:47:46 localhost python3.9[270767]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:47:47 localhost python3.9[270878]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:47:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 04:47:47 localhost podman[270990]: 2025-12-05 09:47:47.604402394 +0000 UTC m=+0.078984921 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 5 04:47:47 localhost podman[270990]: 2025-12-05 09:47:47.613912894 +0000 UTC m=+0.088495411 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 5 04:47:47 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 04:47:47 localhost nova_compute[229251]: 2025-12-05 09:47:47.781 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:47:47 localhost nova_compute[229251]: 2025-12-05 09:47:47.784 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:47:47 localhost python3.9[270989]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:47:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51413 DF PROTO=TCP SPT=34124 DPT=9102 SEQ=3644027079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC59FE850000000001030307)
Dec 5 04:47:48 localhost python3.9[271123]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:47:49 localhost python3.9[271234]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:47:49 localhost python3.9[271345]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:47:49 localhost podman[239519]: time="2025-12-05T09:47:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 04:47:49 localhost podman[239519]: @ - - [05/Dec/2025:09:47:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1"
Dec 5 04:47:49 localhost podman[239519]: @ - - [05/Dec/2025:09:47:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17229 "" "Go-http-client/1.1"
Dec 5 04:47:50 localhost python3.9[271456]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 5 04:47:52 localhost nova_compute[229251]: 2025-12-05 09:47:52.783 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:47:53 localhost python3.9[271567]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:53 localhost python3.9[271677]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:54 localhost python3.9[271787]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:54 localhost python3.9[271897]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:55 localhost python3.9[272007]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:56 localhost python3.9[272117]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51414 DF PROTO=TCP SPT=34124 DPT=9102 SEQ=3644027079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5A1E460000000001030307)
Dec 5 04:47:56 localhost python3.9[272227]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:57 localhost openstack_network_exporter[241668]: ERROR 09:47:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:47:57 localhost openstack_network_exporter[241668]: ERROR 09:47:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:47:57 localhost openstack_network_exporter[241668]: ERROR 09:47:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 04:47:57 localhost openstack_network_exporter[241668]: ERROR 09:47:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 04:47:57 localhost openstack_network_exporter[241668]:
Dec 5 04:47:57 localhost openstack_network_exporter[241668]: ERROR 09:47:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 04:47:57 localhost openstack_network_exporter[241668]:
Dec 5 04:47:57 localhost python3.9[272337]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:47:57 localhost nova_compute[229251]: 2025-12-05 09:47:57.785 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:47:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 04:47:58 localhost podman[272426]: 2025-12-05 09:47:58.197341803 +0000 UTC m=+0.086416927 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products.
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 5 04:47:58 localhost podman[272426]: 2025-12-05 09:47:58.216718442 +0000 UTC m=+0.105793556 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64) Dec 5 04:47:58 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
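
[editor's note] The podman event pair above (a health_status=healthy record, then exec_died, then the transient .service deactivating) is the normal lifecycle of one healthcheck invocation. A minimal sketch of driving the same check by hand, assuming podman is on PATH and using the container name from the log; this illustrates the mechanism, it is not the exporter's own code:

    import subprocess

    def container_healthy(name_or_id: str) -> bool:
        # `podman healthcheck run <container>` executes the container's
        # configured test and exits 0 on success; podman records the
        # outcome as the health_status / exec_died events seen above.
        result = subprocess.run(
            ["podman", "healthcheck", "run", name_or_id],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        print(container_healthy("openstack_network_exporter"))
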
Dec 5 04:47:58 localhost python3.9[272461]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:47:58 localhost python3.9[272577]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:47:59 localhost python3.9[272687]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:48:00 localhost python3.9[272797]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:48:00 localhost python3.9[272907]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:48:01 localhost python3.9[273017]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:48:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
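
[editor's note] The ansible-ansible.builtin.systemd_service and ansible-ansible.builtin.file entries above walk each tripleo_nova_* unit through the same retirement sequence: stop and disable the service, then delete its unit file from /usr/lib/systemd/system and /etc/systemd/system. A rough Python equivalent of that per-unit sequence is sketched below; it describes the effect of the logged modules on the host, not the modules' source. In the log, stop/disable is only invoked for the units that still had live services, while the file removal runs for all of them:

    import subprocess
    from pathlib import Path

    # Unit names as they appear in the log above.
    TRIPLEO_NOVA_UNITS = [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    def retire_unit(unit: str) -> None:
        # state=stopped / enabled=False in the logged systemd_service calls.
        subprocess.run(["systemctl", "stop", unit], check=False)
        subprocess.run(["systemctl", "disable", unit], check=False)
        # state=absent in the logged file calls, for both unit-file locations.
        for base in ("/usr/lib/systemd/system", "/etc/systemd/system"):
            Path(base, unit).unlink(missing_ok=True)

    for unit in TRIPLEO_NOVA_UNITS:
        retire_unit(unit)
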
Dec 5 04:48:01 localhost podman[273128]: 2025-12-05 09:48:01.731318003 +0000 UTC m=+0.081395644 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:48:01 localhost podman[273128]: 2025-12-05 09:48:01.769619698 +0000 UTC m=+0.119697319 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, 
maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 5 04:48:01 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:48:01 localhost python3.9[273127]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:48:02 localhost python3.9[273256]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:48:02 localhost nova_compute[229251]: 2025-12-05 09:48:02.788 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:48:03 localhost python3.9[273366]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:48:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:48:03.894 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:48:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:48:03.895 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:48:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:48:03.897 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:48:04 localhost python3.9[273476]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 5 04:48:05 localhost python3.9[273586]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 5 04:48:05 localhost systemd[1]: Reloading. 
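
[editor's note] The ansible.legacy.command entry above carries a small shell guard in which #012 is journald's escaping of newlines: `if systemctl is-active certmonger.service; then systemctl disable --now certmonger.service; test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service; fi`. The same logic, unfolded as a Python sketch:

    import subprocess
    from pathlib import Path

    UNIT = "certmonger.service"

    # `systemctl is-active` exits 0 only when the unit is active.
    if subprocess.run(["systemctl", "is-active", UNIT]).returncode == 0:
        subprocess.run(["systemctl", "disable", "--now", UNIT], check=True)
        # Mask only when no local unit-file override exists, mirroring the
        # `test -f ... || systemctl mask ...` branch of the logged command.
        if not Path("/etc/systemd/system", UNIT).is_file():
            subprocess.run(["systemctl", "mask", UNIT], check=True)
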
Dec 5 04:48:05 localhost systemd-sysv-generator[273615]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:48:05 localhost systemd-rc-local-generator[273609]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:48:05 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:48:05 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:48:05 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:48:05 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:48:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:48:05 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:48:05 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:48:05 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:48:05 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:48:06 localhost python3.9[273732]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:48:06 localhost python3.9[273843]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:48:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:48:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. 
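
[editor's note] After the unit files disappear, the play reloads systemd (the sysv-generator and notify-reload warnings above are emitted during that reload and are pre-existing, unrelated to the removal) and then clears any failed-state records with one `systemctl reset-failed <unit>` per unit. The equivalent calls, sketched in Python with the unit list abbreviated; the log runs the same command for each remaining tripleo_nova_* unit:

    import subprocess

    # Reload unit definitions after the unit files were deleted ...
    subprocess.run(["systemctl", "daemon-reload"], check=True)

    # ... then drop stale failed-state bookkeeping, one unit per call,
    # as the ansible.legacy.command entries above do. check=False because
    # a unit that was never in a failed state is not an error here.
    for unit in [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        # ... remaining tripleo_nova_* units, exactly as logged above
    ]:
        subprocess.run(["/usr/bin/systemctl", "reset-failed", unit], check=False)
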
Dec 5 04:48:07 localhost podman[273845]: 2025-12-05 09:48:07.12310207 +0000 UTC m=+0.089781870 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 04:48:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:48:07 localhost podman[273845]: 2025-12-05 09:48:07.140639492 +0000 UTC m=+0.107319322 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:48:07 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
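
[editor's note] The podman_exporter above talks to podman through the socket it mounts (CONTAINER_HOST=unix:///run/podman/podman.sock); the access-log style lines earlier ("GET /v4.9.3/libpod/containers/json ..." answered by podman[239519]) are that REST API being queried. A stdlib-only sketch of issuing the same request over the unix socket, assuming the socket path from the log and the libpod response shape (a JSON list of container objects with a "Names" field):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""

        def __init__(self, socket_path: str):
            super().__init__("localhost")  # host is unused for AF_UNIX
            self.socket_path = socket_path

        def connect(self) -> None:
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])
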
Dec 5 04:48:07 localhost podman[273881]: 2025-12-05 09:48:07.22710053 +0000 UTC m=+0.094035819 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:48:07 localhost podman[273881]: 2025-12-05 09:48:07.26167553 +0000 UTC m=+0.128610839 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Dec 5 04:48:07 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:48:07 localhost podman[273887]: 2025-12-05 09:48:07.279112381 +0000 UTC m=+0.139974125 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 04:48:07 localhost podman[273887]: 2025-12-05 09:48:07.319166338 +0000 UTC m=+0.180028202 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0) Dec 5 04:48:07 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:48:07 localhost python3.9[274015]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:48:07 localhost nova_compute[229251]: 2025-12-05 09:48:07.791 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:48:08 localhost python3.9[274126]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:48:08 localhost python3.9[274237]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:48:09 localhost python3.9[274348]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:48:10 localhost python3.9[274459]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:48:10 localhost python3.9[274570]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:48:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4388 DF PROTO=TCP SPT=35714 DPT=9102 SEQ=4110578144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5A57E60000000001030307) Dec 5 04:48:11 localhost nova_compute[229251]: 2025-12-05 09:48:11.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:48:11 localhost nova_compute[229251]: 2025-12-05 09:48:11.270 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 04:48:11 localhost nova_compute[229251]: 2025-12-05 09:48:11.270 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 04:48:11 localhost nova_compute[229251]: 2025-12-05 09:48:11.824 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:48:11 localhost nova_compute[229251]: 2025-12-05 09:48:11.825 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:48:11 localhost nova_compute[229251]: 2025-12-05 09:48:11.825 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:48:11 localhost nova_compute[229251]: 2025-12-05 09:48:11.826 229255 DEBUG nova.objects.instance [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:48:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4389 DF PROTO=TCP SPT=35714 DPT=9102 SEQ=4110578144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5A5C050000000001030307) Dec 5 04:48:12 localhost nova_compute[229251]: 2025-12-05 09:48:12.340 229255 DEBUG nova.network.neutron [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:48:12 localhost nova_compute[229251]: 2025-12-05 09:48:12.357 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:48:12 localhost nova_compute[229251]: 2025-12-05 09:48:12.358 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:48:12 localhost nova_compute[229251]: 2025-12-05 09:48:12.358 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:48:12 localhost nova_compute[229251]: 2025-12-05 09:48:12.359 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:48:12 localhost nova_compute[229251]: 2025-12-05 09:48:12.794 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:48:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51415 DF PROTO=TCP SPT=34124 DPT=9102 SEQ=3644027079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5A5E460000000001030307) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.943 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.944 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.974 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 525 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.975 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0791a022-8704-4c65-b1f2-99e70a809242', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 525, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:48:12.944342', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83dd2230-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.11863797, 'message_signature': '80ac656dea72a164d92b952281e9c35181229c33d107e7068e5747535ac48f98'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:48:12.944342', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83dd3a86-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.11863797, 'message_signature': '3e1cb0da8f01d2596a1a3301d0b5ec11393f3d9df2421ca22ea999c16fa34f99'}]}, 'timestamp': '2025-12-05 09:48:12.976028', '_unique_id': '16e063c5937f4c02abd3adffd7bb2de0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging Traceback (most 
recent call last): Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:48:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.978 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.979 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.979 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 73908224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.980 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 
_stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f8546a3f-92bd-45fe-99bb-a984812d0cb6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73908224, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:48:12.979745', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83dddf9a-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.11863797, 'message_signature': 'f0dbc0f1e782c680d31b4cfcfc61b57a82925c4526a6d1e08200b32d0ecfbab1'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:48:12.979745', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83ddf1f6-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.11863797, 'message_signature': 'e6b47191f249fdbe73493932423491050ca57e34cfc504de7bd5bb3bdafb38b0'}]}, 'timestamp': '2025-12-05 09:48:12.980694', '_unique_id': 'e8bf4d07eb06426585bbf4714771dd28'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 
12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.981 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.982 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.983 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1216962709 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.983 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 209749905 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 
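The traceback above shows the full failure path: oslo.messaging's notifier asks its connection pool for a connection, kombu retries the AMQP connect, and py-amqp's socket connect() is refused with errno 111, meaning nothing is listening at the configured RabbitMQ endpoint. A minimal probe that mirrors the ensure_connection() call from the stack, as a sketch only; the URL below is an assumed placeholder, since the real transport_url comes from the service configuration and never appears in this log:

    # Minimal connectivity probe mirroring the ensure_connection() call in the
    # traceback above; a sketch only. The URL is a placeholder, not taken from
    # this log -- the real transport_url lives in the service configuration.
    from kombu import Connection

    URL = "amqp://guest:guest@controller:5672//"  # assumed placeholder
    try:
        with Connection(URL, connect_timeout=5) as conn:
            conn.ensure_connection(max_retries=1)
            print("broker reachable")
    except Exception as exc:
        # kombu re-raises socket errors as kombu.exceptions.OperationalError,
        # the same [Errno 111] wrapper seen throughout this log.
        print(f"broker unreachable: {exc!r}")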
Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.984 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a3d3d4b3-9335-4344-a67b-eace6c00ba0c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1216962709, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:48:12.983059', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83de6154-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.11863797, 'message_signature': '8482653fd4447e5755327e007926268fba46372a50c2020ae96035ccf5e1bd78'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 209749905, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:48:12.983059', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83de7374-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.11863797, 'message_signature': 'f3cf36edec8271de2029f06b1e209fd9742396a822e6bf8bff6b005735f8f180'}]}, 'timestamp': '2025-12-05 09:48:12.984005', '_unique_id': '79248125f4414f9b829a9ea90174c59e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.986 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.990 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '13ab34f6-e760-4059-8cfe-42f3f3bcf439', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:48:12.986328', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '83df8ae8-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.160682998, 'message_signature': '044dce73962c74427dd810c4630c605ea6f49c341b84a55bb2dcfdf656c09b16'}]}, 'timestamp': '2025-12-05 09:48:12.991195', '_unique_id': '3e513cfe95464349aefd8af29e76acba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:48:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:12 localhost 
Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.993 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.993 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 161823320 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.993 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 27606506 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.995 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '02eb47a8-9d93-4b41-8a40-a1994652b8a9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 161823320, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:48:12.993443', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83dff5dc-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.11863797, 'message_signature': '30f3698b194795b4da1ff7654ef5388c0b3c1eb77972aa32a1e8e9fa560f096c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27606506, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:48:12.993443', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83e00626-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.11863797, 'message_signature': 'ec8fc6eea7ed48f5512ea5b9611f494a885409a39d8b4f1706f2590b3d229abc'}]}, 'timestamp': '2025-12-05 09:48:12.994367', '_unique_id': 'd5a0e689755d40808b4dd56d39117389'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
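Each "Could not send notification" record above embeds the dropped notification as a Python dict literal, so the lost samples can still be recovered from the log itself. A recovery sketch, assuming a full record has been isolated as one string; the regex is illustrative, keyed to the "Payload=" prefix and the trailing exception text of these records:

    # Recover dropped samples from a "Could not send notification" record.
    # The payload is rendered as a Python dict literal, so ast.literal_eval
    # can parse it once the surrounding log text is stripped. Sketch only.
    import ast
    import re

    def samples_from_record(record: str) -> list:
        m = re.search(r"Payload=(\{.*\}): kombu\.exceptions", record, re.S)
        if not m:
            return []
        payload = ast.literal_eval(m.group(1))
        return payload["payload"]["samples"]

    # For the disk.device.write.latency record above, this yields two samples,
    # one per device (vda and vdb), with their cumulative counter_volume values.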
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.996 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.996 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 446 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '84870f89-ea65-4c07-8bef-fac33e9cebc5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 446, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:48:12.996920', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '83e07e12-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.160682998, 'message_signature': '0b8f987a23115de480c3d9ef5a9b7a6e2dabe59b2b813f045caafa2160b4df75'}]}, 'timestamp': '2025-12-05 09:48:12.997440', '_unique_id': '4070c264810e477387cd0e0493f9518b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:48:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.998 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:12.999 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.011 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.012 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'df27c9c6-5b93-4318-bf63-231e8df72c62', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:48:13.000062', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83e2b8a8-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.174400015, 'message_signature': '11ffe3c3c2b606c580eca586c98349c2a4dc1632320d151c5101f1c12835fbf4'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:48:13.000062', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83e2cece-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.174400015, 'message_signature': '50006f51429406e442e9e3a634e17ce5a3e115c6e07eefecb561179d62610f1f'}]}, 'timestamp': '2025-12-05 09:48:13.012566', '_unique_id': '50440d23cc0340bdaffdee00096867a3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.014 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.015 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.015 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.016 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '512f8591-4cc3-415c-aa14-d95d36f8cc1c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:48:13.015686', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83e35c18-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.174400015, 'message_signature': '4f7e7624685c3e0b6e4d4efe93d360da0330cb1af8ed48c6d9e3b16c4ba1abe4'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:48:13.015686', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83e36fc8-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.174400015, 'message_signature': '6706d91f5efbd3684d980d2ef9f5cdbd7f2e3069dfbdfab88871b596bbceaf98'}]}, 'timestamp': '2025-12-05 09:48:13.016730', '_unique_id': 'eb07069b4ef748169845957cf6fbf2e3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.018 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.019 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.019 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '301e152c-f869-482d-bd99-f999ee89d346', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:48:13.019579', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '83e3f420-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.160682998, 'message_signature': 'f2a927410bd1179c41198fc517e15c098a60793f02511d5a4891edb724768da8'}]}, 'timestamp': '2025-12-05 09:48:13.020192', '_unique_id': 'df8ee567a49442d781f417140d976926'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:48:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.021 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.022 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.022 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5f4aa0bd-bf10-4b3b-8402-c59fbe40b1c7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:48:13.022901', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '83e47666-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.160682998, 'message_signature': 'fa0e48c8248c2067bf2e013f5ee7e77f8884df4097935c0a65968cbfcbce5caa'}]}, 'timestamp': '2025-12-05 09:48:13.023481', '_unique_id': '163d1621869b46b9af095173fdb29a6f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.024 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.025 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.025 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.025 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.025 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fce9f2d9-8314-402d-bedb-6b794efc73ca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:48:13.025924', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '83e4ec36-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.160682998, 'message_signature': '7838556ffbfa6ba90d702139100ac30ada1526fb410fb2bce65a17a939b06f75'}]}, 'timestamp': '2025-12-05 09:48:13.026526', '_unique_id': '2355c1abd4574750a36c344033f11e26'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.027 12 ERROR oslo_messaging.notify.messaging
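
This block repeats for every pollster in every cycle while the broker is down, so the first check is simply whether anything is listening on the AMQP port at all. A stdlib-only probe, assuming the conventional RabbitMQ port 5672 on the local host (substitute the host and port from the agent's transport_url):

# Stdlib-only reachability probe; host and port are assumptions taken from
# RabbitMQ's defaults, not from this deployment's configuration.
import socket

try:
    socket.create_connection(("localhost", 5672), timeout=2).close()
    print("AMQP port reachable")
except OSError as exc:
    # errno 111 (ECONNREFUSED) means the host answered but nothing is
    # listening -- the same condition the agent keeps logging.
    print(f"connect failed: {exc}")
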
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.028 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c77a8c54-bb33-432f-9083-25e16995b479', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:48:13.028944', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83e568aa-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.174400015, 'message_signature': '1595ee0c77a4c09d9ac78181c50ee8f5b4ba93ebba4296dcedd0d2b2485c2f56'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:48:13.028944', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83e57afc-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.174400015, 'message_signature': 'e24a12858e58c99bb75566b725d8fdfab93e5335ae9c9f4b274b1ca1b8f7f24e'}]}, 'timestamp': '2025-12-05 09:48:13.030139', '_unique_id': 'b7902e0559fb4afb9b7dc4d792537602'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.031 12 ERROR oslo_messaging.notify.messaging
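
Each Payload=... entry is the repr of the full notification dict, so the dropped samples are recoverable from the log itself. A salvage sketch, assuming the repr survives intact on a single line; parse_dropped_samples is an illustrative helper, not a ceilometer or oslo.messaging API:

# Salvage sketch: pull the sample dicts back out of a logged
# "Could not send notification" entry. Assumes the Payload={...} repr is
# intact on one line; parse_dropped_samples is a hypothetical helper.
import ast

def parse_dropped_samples(log_line):
    start = log_line.index("Payload=") + len("Payload=")
    end = log_line.rindex("}: kombu.exceptions") + 1  # include closing brace
    notification = ast.literal_eval(log_line[start:end])
    return notification["payload"]["samples"]

# Example: every sample carries counter_name, counter_volume, resource_id.
# for s in parse_dropped_samples(line):
#     print(s["counter_name"], s["counter_volume"], s["resource_id"])
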
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.032 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.032 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.032 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.048 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 61690000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9888182e-929a-400a-a38f-30f3ee4aedfd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 61690000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:48:13.033057', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '83e8529a-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.222208108, 'message_signature': 'a6521e500f6e84d09ce05d8d834aa3d4f4f8e7e1b7bbe2c6f7d9ac5eefbdd8e4'}]}, 'timestamp': '2025-12-05 09:48:13.048770', '_unique_id': '6ea5954b38e04ec6b785e1e28f2e1f58'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.050 12 ERROR oslo_messaging.notify.messaging
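
retry_over_time, which appears in every one of these tracebacks, is kombu's generic retry-with-backoff helper: it keeps calling a connect function, sleeping between attempts, until it succeeds or the retry budget runs out, at which point the last exception escapes and gets rewrapped. A sketch of the same idea with an invented flaky callable (the keyword arguments follow kombu.utils.functional.retry_over_time as of the 5.x series; treat the exact signature as an assumption):

# Illustration of the retry helper named in the tracebacks. The failing
# callable and the retry limits here are invented for the example.
from kombu.utils.functional import retry_over_time

attempts = 0

def flaky_connect():
    global attempts
    attempts += 1
    if attempts < 3:
        raise ConnectionRefusedError(111, "Connection refused")
    return "connected"

# Retries on ConnectionRefusedError with a growing sleep interval; once
# max_retries is exhausted the last exception propagates, which is where
# kombu rewraps it as OperationalError in the log above.
print(retry_over_time(flaky_connect, ConnectionRefusedError,
                      max_retries=5, interval_start=0.1,
                      interval_step=0.1, interval_max=1.0))
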
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.051 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.051 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 52.30859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e0ebcb5b-57ed-489b-b3c2-bf2fa8c7c4aa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.30859375, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:48:13.051626', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '83e8d670-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.222208108, 'message_signature': 'ec3bbb0266b87f28e07589baa93205356bd5ecdfdfaca9b15b2d26e77d88d821'}]}, 'timestamp': '2025-12-05 09:48:13.052111', '_unique_id': '28e231f04e2b46f59c71c614a833c1d1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.053 12 ERROR oslo_messaging.notify.messaging
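
With one of these blocks per pollster per cycle, a summary is more useful than reading the entries one by one. A small triage sketch over a saved copy of this journal (stdlib only; messages.log is a placeholder path):

# Triage sketch: count dropped samples per meter across the whole log.
import re
from collections import Counter

drops = Counter()
counter_name = re.compile(r"'counter_name': '([^']+)'")

with open("messages.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        if "Could not send notification" in line:
            # one logged payload can carry several samples
            drops.update(counter_name.findall(line))

for meter, count in drops.most_common():
    print(f"{meter}: {count} dropped sample(s)")
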
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.054 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.054 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '96d96e73-e9cb-4380-9740-91eee1a1fa1c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:48:13.054366', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '83e94290-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.160682998, 'message_signature': 'a5ab45e14aefa7d4badfd1fd4f4cd641ea5e463a6c5a4a3b1e800213a49709ff'}]}, 'timestamp': '2025-12-05 09:48:13.054869', '_unique_id': 'b8d7917edb314048b6b60342f129eee4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:48:13 localhost
ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.055 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.056 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.057 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 9229 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.058 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '06106928-bd99-47c5-8471-9e3425dcdd63', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9229, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:48:13.057085', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '83e9ad3e-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.160682998, 'message_signature': 'a63df7d636dc92db06c0b3f4879b447cd552a07762e65ab4f65b05d8ee102972'}]}, 'timestamp': '2025-12-05 09:48:13.057597', '_unique_id': '8d914c2e4a424848ba890e9cb04d5a16'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.059 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.059 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.060 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.061 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '3eb2aa75-4813-4e6f-a6b5-4a53bec62773', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:48:13.059974', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '83ea1cba-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.160682998, 'message_signature': '0ec8ec4bab9aba494444d3ca0ed580e8faf940a341d54842c466f37e775ae9d8'}]}, 'timestamp': '2025-12-05 09:48:13.060484', '_unique_id': 'c710ebc7e182410ea814a751bda75200'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.063 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.063 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 87 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.065 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'e9308ce5-2f9d-40cf-9b39-ce27df7bf407', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 87, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:48:13.063463', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '83eaa7a2-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.160682998, 'message_signature': '8c671de614ce67e0cef0fea355bd721a081cd4178d607c54bc6d828ff11fddfb'}]}, 'timestamp': '2025-12-05 09:48:13.064123', '_unique_id': 'ee46bc784a5147bdaecefb274ba2b0e0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.067 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.067 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.069 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'a56859c5-8fd5-4466-bb8f-7f3fe711d35b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:48:13.067531', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '83eb464e-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.160682998, 'message_signature': 'b6b104be20e6dda73807261f4d666be6e0fb254c0902b4979aa6e32a4e07f6f8'}]}, 'timestamp': '2025-12-05 09:48:13.068182', '_unique_id': '039267fc4a484fc5bed81225375754a9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.071 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.071 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.071 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.072 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'e0401705-7ade-4aba-ae60-859092fde25d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:48:13.071457', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83ebdf6e-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.11863797, 'message_signature': '9215d129e525e65ec7dfc95c39cc8e3ee080f94a68b14b25df487ffd533024f5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:48:13.071457', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83ebf990-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.11863797, 'message_signature': 'f6d531a2bba4651df649d298d08ba9475ee6b4aa6f50f63b5c48d937fad9c65a'}]}, 'timestamp': '2025-12-05 09:48:13.072727', '_unique_id': '448ba506706447738e0801fadcfdace9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py",
line 826, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.073 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.075 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.075 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.076 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd3e862ba-ffcc-49d9-8303-b002c35bc1d0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:48:13.075854', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '83ec8c98-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.11863797, 'message_signature': 'f5764d2b5eb7eeebff3dc192c9f52b2fdb63306a5e57a6f60d03b9bdba6475ab'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:48:13.075854', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '83eca52a-d1bf-11f0-8ba6-fa163e982365', 'monotonic_time': 11208.11863797, 'message_signature': '23ddbdf0d77c41ec8905b95f3ff0f9e021a7a76a29759617e6f381dfa715ae3a'}]}, 'timestamp': '2025-12-05 09:48:13.077125', '_unique_id': '21caf8a8f0054ff29a327ffcfd6d0b5d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:48:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:48:13.078 12 ERROR oslo_messaging.notify.messaging Dec 5 04:48:13 localhost nova_compute[229251]: 2025-12-05 09:48:13.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:48:13 localhost python3.9[274681]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:14 localhost python3.9[274827]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:14 localhost nova_compute[229251]: 2025-12-05 09:48:14.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:48:14 localhost nova_compute[229251]: 2025-12-05 09:48:14.270 229255 DEBUG nova.compute.manager [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 04:48:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4390 DF PROTO=TCP SPT=35714 DPT=9102 SEQ=4110578144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5A64060000000001030307) Dec 5 04:48:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:48:14 localhost python3.9[275010]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:14 localhost systemd[1]: tmp-crun.ZbPUuc.mount: Deactivated successfully. Dec 5 04:48:14 localhost podman[275011]: 2025-12-05 09:48:14.777510423 +0000 UTC m=+0.218937194 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:48:14 localhost podman[275011]: 2025-12-05 09:48:14.860642479 +0000 UTC m=+0.302069240 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:48:15 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:48:15 localhost podman[275012]: 2025-12-05 09:48:15.152489858 +0000 UTC m=+0.589694791 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, GIT_CLEAN=True, architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux , release=1763362218, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=) Dec 5 04:48:15 localhost podman[275012]: 2025-12-05 09:48:15.236734248 +0000 UTC m=+0.673939201 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 
on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, vendor=Red Hat, Inc., release=1763362218, version=7, name=rhceph, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, ceph=True) Dec 5 04:48:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10418 DF PROTO=TCP SPT=43850 DPT=9102 SEQ=1026315529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5A68450000000001030307) Dec 5 04:48:15 localhost python3.9[275175]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:16 localhost python3.9[275357]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.269 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.287 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.287 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.287 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:48:16 localhost nova_compute[229251]: 
2025-12-05 09:48:16.288 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.288 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:48:16 localhost python3.9[275519]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.668 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.731 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.733 229255 DEBUG nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.948 229255 WARNING nova.virt.libvirt.driver [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.950 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12120MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.950 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:48:16 localhost nova_compute[229251]: 2025-12-05 09:48:16.951 229255 DEBUG oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:48:17 localhost nova_compute[229251]: 2025-12-05 09:48:17.007 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:48:17 localhost nova_compute[229251]: 2025-12-05 09:48:17.007 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:48:17 localhost nova_compute[229251]: 2025-12-05 09:48:17.007 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:48:17 localhost nova_compute[229251]: 2025-12-05 09:48:17.043 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:48:17 localhost python3.9[275649]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:17 localhost nova_compute[229251]: 2025-12-05 09:48:17.491 229255 DEBUG oslo_concurrency.processutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:48:17 localhost nova_compute[229251]: 2025-12-05 09:48:17.500 229255 DEBUG nova.compute.provider_tree [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:48:17 localhost nova_compute[229251]: 2025-12-05 09:48:17.517 229255 DEBUG nova.scheduler.client.report [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:48:17 localhost nova_compute[229251]: 2025-12-05 09:48:17.521 229255 DEBUG nova.compute.resource_tracker [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:48:17 localhost nova_compute[229251]: 2025-12-05 09:48:17.522 229255 DEBUG 
oslo_concurrency.lockutils [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:48:17 localhost python3.9[275781]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:17 localhost nova_compute[229251]: 2025-12-05 09:48:17.797 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:48:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:48:18 localhost systemd[1]: tmp-crun.qP3VTo.mount: Deactivated successfully. Dec 5 04:48:18 localhost podman[275890]: 2025-12-05 09:48:18.22250954 +0000 UTC m=+0.101318050 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:48:18 localhost podman[275890]: 2025-12-05 09:48:18.232374859 +0000 UTC m=+0.111183359 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', 
'--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:48:18 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:48:18 localhost python3.9[275897]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4391 DF PROTO=TCP SPT=35714 DPT=9102 SEQ=4110578144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5A73C50000000001030307) Dec 5 04:48:18 localhost nova_compute[229251]: 2025-12-05 09:48:18.519 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:48:18 localhost nova_compute[229251]: 2025-12-05 09:48:18.519 229255 DEBUG oslo_service.periodic_task [None req-bc560823-2221-446b-a6f5-ad15d8a79a78 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:48:18 localhost python3.9[276024]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:19 localhost podman[239519]: time="2025-12-05T09:48:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:48:19 localhost podman[239519]: @ - - [05/Dec/2025:09:48:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1" Dec 5 04:48:19 localhost podman[239519]: @ - - [05/Dec/2025:09:48:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17228 "" "Go-http-client/1.1" Dec 5 04:48:22 localhost nova_compute[229251]: 2025-12-05 09:48:22.799 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:48:25 localhost python3.9[276134]: ansible-ansible.builtin.getent 
Invoked with database=passwd key=nova fail_key=True service=None split=None Dec 5 04:48:26 localhost sshd[276153]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:48:26 localhost systemd-logind[760]: New session 60 of user zuul. Dec 5 04:48:26 localhost systemd[1]: Started Session 60 of User zuul. Dec 5 04:48:26 localhost systemd[1]: session-60.scope: Deactivated successfully. Dec 5 04:48:26 localhost systemd-logind[760]: Session 60 logged out. Waiting for processes to exit. Dec 5 04:48:26 localhost systemd-logind[760]: Removed session 60. Dec 5 04:48:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4392 DF PROTO=TCP SPT=35714 DPT=9102 SEQ=4110578144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5A94450000000001030307) Dec 5 04:48:27 localhost python3.9[276264]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:48:27 localhost openstack_network_exporter[241668]: ERROR 09:48:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:48:27 localhost openstack_network_exporter[241668]: ERROR 09:48:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:48:27 localhost openstack_network_exporter[241668]: ERROR 09:48:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:48:27 localhost openstack_network_exporter[241668]: ERROR 09:48:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:48:27 localhost openstack_network_exporter[241668]: Dec 5 04:48:27 localhost openstack_network_exporter[241668]: ERROR 09:48:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:48:27 localhost openstack_network_exporter[241668]: Dec 5 04:48:27 localhost python3.9[276350]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928106.6569934-3037-107469914566443/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:27 localhost nova_compute[229251]: 2025-12-05 09:48:27.802 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:48:28 localhost python3.9[276458]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:48:28 localhost python3.9[276513]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:48:29 localhost python3.9[276621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:48:29 localhost podman[276622]: 2025-12-05 09:48:29.200043926 +0000 UTC m=+0.082252891 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, release=1755695350) Dec 5 04:48:29 localhost podman[276622]: 2025-12-05 09:48:29.212739061 +0000 UTC m=+0.094948006 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, 
name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 5 04:48:29 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
Dec 5 04:48:29 localhost python3.9[276727]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928108.6962032-3037-38150589693213/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:30 localhost python3.9[276835]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:48:30 localhost python3.9[276921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928109.7787588-3037-213898156566215/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=d097af55d0e0f04c3b1a46e6ef4206c1c28f58b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:31 localhost python3.9[277029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:48:31 localhost python3.9[277115]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928110.8109877-3037-175858386123915/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
Dec 5 04:48:32 localhost podman[277218]: 2025-12-05 09:48:32.236670673 +0000 UTC m=+0.122834384 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS) Dec 5 04:48:32 localhost python3.9[277229]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:48:32 localhost podman[277218]: 2025-12-05 09:48:32.359562247 +0000 UTC m=+0.245725958 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:48:32 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:48:32 localhost nova_compute[229251]: 2025-12-05 09:48:32.804 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Dec 5 04:48:32 localhost python3.9[277326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764928111.8791227-3037-133226889644705/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:33 localhost python3.9[277436]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:48:34 localhost python3.9[277546]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:48:35 localhost python3.9[277656]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:48:35 localhost python3.9[277768]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:48:36 localhost python3.9[277876]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:48:37 localhost python3.9[277986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:48:37 localhost python3.9[278041]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:37 localhost nova_compute[229251]: 2025-12-05 09:48:37.807 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Dec 5 04:48:37 localhost nova_compute[229251]: 2025-12-05 09:48:37.810 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Dec 5 04:48:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:48:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:48:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:48:38 localhost podman[278150]: 2025-12-05 09:48:38.191910681 +0000 UTC m=+0.075958849 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:48:38 localhost podman[278150]: 2025-12-05 09:48:38.200616055 +0000 UTC m=+0.084664253 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:48:38 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 04:48:38 localhost python3.9[278149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 5 04:48:38 localhost podman[278151]: 2025-12-05 09:48:38.255680379 +0000 UTC m=+0.136375995 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent) Dec 5 04:48:38 localhost podman[278151]: 2025-12-05 09:48:38.288544078 +0000 UTC m=+0.169239714 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:48:38 localhost podman[278152]: 2025-12-05 09:48:38.297663575 +0000 UTC m=+0.175063841 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm) Dec 5 04:48:38 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
Dec 5 04:48:38 localhost podman[278152]: 2025-12-05 09:48:38.307492604 +0000 UTC m=+0.184892790 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 04:48:38 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
Dec 5 04:48:38 localhost python3.9[278264]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 5 04:48:39 localhost python3.9[278374]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Dec 5 04:48:40 localhost python3.9[278484]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 5 04:48:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4328 DF PROTO=TCP SPT=58496 DPT=9102 SEQ=2369035308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5ACD160000000001030307) Dec 5 04:48:41 localhost python3[278594]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Dec 5 04:48:41 localhost python3[278594]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
  {
    "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",
    "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",
    "RepoTags": [
      "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
    ],
    "RepoDigests": [
      "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"
    ],
    "Parent": "",
    "Comment": "",
    "Created": "2025-12-01T06:31:10.62653219Z",
    "Config": {
      "User": "nova",
      "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "LANG=en_US.UTF-8",
        "TZ=UTC",
        "container=oci"
      ],
      "Entrypoint": [
        "dumb-init",
        "--single-child",
        "--"
      ],
      "Cmd": [
        "kolla_start"
      ],
      "Labels": {
        "io.buildah.version": "1.41.3",
        "maintainer": "OpenStack Kubernetes Operator team",
        "org.label-schema.build-date": "20251125",
        "org.label-schema.license": "GPLv2",
        "org.label-schema.name": "CentOS Stream 9 Base Image",
        "org.label-schema.schema-version": "1.0",
        "org.label-schema.vendor": "CentOS",
        "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
        "tcib_managed": "true"
      },
      "StopSignal": "SIGTERM"
    },
    "Version": "",
    "Author": "",
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 1211779450,
    "VirtualSize": 1211779450,
    "GraphDriver": {
      "Name": "overlay",
      "Data": {
        "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
        "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",
        "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"
      }
    },
    "RootFS": {
      "Type": "layers",
      "Layers": [
        "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
        "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
        "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
        "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",
        "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"
      ]
    },
    "Labels": {
      "io.buildah.version": "1.41.3",
      "maintainer": "OpenStack Kubernetes Operator team",
      "org.label-schema.build-date": "20251125",
      "org.label-schema.license": "GPLv2",
      "org.label-schema.name": "CentOS Stream 9 Base Image",
      "org.label-schema.schema-version": "1.0",
      "org.label-schema.vendor": "CentOS",
      "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
      "tcib_managed": "true"
    },
    "Annotations": {},
    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
    "User": "nova",
    "History": [
      {
        "created": "2025-11-25T04:02:36.223494528Z",
        "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
        "empty_layer": true
      },
      {
        "created": "2025-11-25T04:02:36.223562059Z",
        "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",
        "empty_layer": true
      },
      {
        "created": "2025-11-25T04:02:39.054452717Z",
        "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
      },
      {
        "created": "2025-12-01T06:09:28.025707917Z",
        "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
        "comment": "FROM quay.io/centos/centos:stream9",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:09:28.025744608Z",
        "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:09:28.025767729Z",
        "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:09:28.025791379Z",
        "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:09:28.02581523Z",
        "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:09:28.025867611Z",
        "created_by": "/bin/sh -c #(nop) USER root",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:09:28.469442331Z",
        "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:10:02.029095017Z",
        "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
        "empty_layer": true
      },
      {
Dec 5 04:48:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4329 DF PROTO=TCP SPT=58496 DPT=9102 SEQ=2369035308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5AD1050000000001030307) Dec 5 04:48:42 localhost python3.9[278764]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:48:42 localhost nova_compute[229251]: 2025-12-05 09:48:42.810 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Dec 5 04:48:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4393 DF PROTO=TCP SPT=35714 DPT=9102 SEQ=4110578144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5AD4450000000001030307) Dec 5 04:48:43 localhost python3.9[278876]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Dec 5 04:48:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4330 DF PROTO=TCP SPT=58496 DPT=9102 SEQ=2369035308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5AD9050000000001030307) Dec 5 04:48:44 localhost python3.9[278986]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 5 04:48:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 04:48:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51416 DF PROTO=TCP SPT=34124 DPT=9102 SEQ=3644027079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5ADC460000000001030307) Dec 5 04:48:45 localhost podman[279097]: 2025-12-05 09:48:45.19592765 +0000 UTC m=+0.080058474 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller) Dec 5 04:48:45 localhost podman[279097]: 2025-12-05 09:48:45.265085871 +0000 UTC m=+0.149216745 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:48:45 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
Dec 5 04:48:45 localhost python3[279096]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Dec 5 04:48:45 localhost python3[279096]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
  {
    "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",
    "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",
    "RepoTags": [
      "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
    ],
    "RepoDigests": [
      "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"
    ],
    "Parent": "",
    "Comment": "",
    "Created": "2025-12-01T06:31:10.62653219Z",
    "Config": {
      "User": "nova",
      "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "LANG=en_US.UTF-8",
        "TZ=UTC",
        "container=oci"
      ],
      "Entrypoint": [
        "dumb-init",
        "--single-child",
        "--"
      ],
      "Cmd": [
        "kolla_start"
      ],
      "Labels": {
        "io.buildah.version": "1.41.3",
        "maintainer": "OpenStack Kubernetes Operator team",
        "org.label-schema.build-date": "20251125",
        "org.label-schema.license": "GPLv2",
        "org.label-schema.name": "CentOS Stream 9 Base Image",
        "org.label-schema.schema-version": "1.0",
        "org.label-schema.vendor": "CentOS",
        "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
        "tcib_managed": "true"
      },
      "StopSignal": "SIGTERM"
    },
    "Version": "",
    "Author": "",
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 1211779450,
    "VirtualSize": 1211779450,
    "GraphDriver": {
      "Name": "overlay",
      "Data": {
        "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
        "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",
        "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"
      }
    },
    "RootFS": {
      "Type": "layers",
      "Layers": [
        "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
        "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
        "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
        "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",
        "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"
      ]
    },
    "Labels": {
      "io.buildah.version": "1.41.3",
      "maintainer": "OpenStack Kubernetes Operator team",
      "org.label-schema.build-date": "20251125",
      "org.label-schema.license": "GPLv2",
      "org.label-schema.name": "CentOS Stream 9 Base Image",
      "org.label-schema.schema-version": "1.0",
      "org.label-schema.vendor": "CentOS",
      "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
      "tcib_managed": "true"
    },
    "Annotations": {},
    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
    "User": "nova",
    "History": [
      {
        "created": "2025-11-25T04:02:36.223494528Z",
        "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
        "empty_layer": true
      },
      {
        "created": "2025-11-25T04:02:36.223562059Z",
        "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",
        "empty_layer": true
      },
      {
        "created": "2025-11-25T04:02:39.054452717Z",
        "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
      },
      {
        "created": "2025-12-01T06:09:28.025707917Z",
        "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
        "comment": "FROM quay.io/centos/centos:stream9",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:09:28.025744608Z",
        "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:09:28.025767729Z",
        "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:09:28.025791379Z",
        "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:09:28.02581523Z",
        "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:09:28.025867611Z",
        "created_by": "/bin/sh -c #(nop) USER root",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:09:28.469442331Z",
        "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
        "empty_layer": true
      },
      {
        "created": "2025-12-01T06:10:02.029095017Z",
        "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
        "empty_layer": true
      },
      {
Dec 5 04:48:46 localhost python3.9[279295]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:48:47 localhost python3.9[279407]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:48:47 localhost
nova_compute[229251]: 2025-12-05 09:48:47.813 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:48:48 localhost python3.9[279516]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764928127.4759185-3715-42307180963080/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:48:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4331 DF PROTO=TCP SPT=58496 DPT=9102 SEQ=2369035308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5AE8C50000000001030307) Dec 5 04:48:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:48:48 localhost systemd[1]: tmp-crun.PEbFwu.mount: Deactivated successfully. Dec 5 04:48:48 localhost podman[279572]: 2025-12-05 09:48:48.538490534 +0000 UTC m=+0.084756407 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:48:48 localhost podman[279572]: 2025-12-05 09:48:48.559640637 +0000 UTC m=+0.105906530 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', 
'--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:48:48 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:48:48 localhost python3.9[279571]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:48:49 localhost podman[239519]: time="2025-12-05T09:48:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:48:49 localhost podman[239519]: @ - - [05/Dec/2025:09:48:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1" Dec 5 04:48:49 localhost podman[239519]: @ - - [05/Dec/2025:09:48:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17216 "" "Go-http-client/1.1" Dec 5 04:48:50 localhost python3.9[279704]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:48:51 localhost python3.9[279812]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:48:52 localhost python3.9[279920]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 5 04:48:52 localhost nova_compute[229251]: 2025-12-05 09:48:52.816 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:48:52 localhost nova_compute[229251]: 2025-12-05 09:48:52.818 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:48:52 localhost nova_compute[229251]: 2025-12-05 09:48:52.818 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:48:52 localhost nova_compute[229251]: 2025-12-05 09:48:52.818 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:48:52 localhost nova_compute[229251]: 2025-12-05 09:48:52.834 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:48:52 
localhost nova_compute[229251]: 2025-12-05 09:48:52.835 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:48:53 localhost python3.9[280030]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 5 04:48:53 localhost systemd-journald[47252]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 119.8 (399 of 333 items), suggesting rotation. Dec 5 04:48:53 localhost systemd-journald[47252]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 5 04:48:53 localhost rsyslogd[758]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 04:48:53 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 04:48:54 localhost python3.9[280167]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 5 04:48:55 localhost systemd[1]: Stopping nova_compute container... Dec 5 04:48:55 localhost nova_compute[229251]: 2025-12-05 09:48:55.790 229255 DEBUG oslo_privsep.comm [-] EOF on privsep read channel _reader_main /usr/lib/python3.9/site-packages/oslo_privsep/comm.py:170#033[00m Dec 5 04:48:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4332 DF PROTO=TCP SPT=58496 DPT=9102 SEQ=2369035308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5B08460000000001030307) Dec 5 04:48:57 localhost openstack_network_exporter[241668]: ERROR 09:48:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:48:57 localhost openstack_network_exporter[241668]: ERROR 09:48:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:48:57 localhost openstack_network_exporter[241668]: ERROR 09:48:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:48:57 localhost openstack_network_exporter[241668]: ERROR 09:48:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:48:57 localhost openstack_network_exporter[241668]: Dec 5 04:48:57 localhost openstack_network_exporter[241668]: ERROR 09:48:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:48:57 localhost openstack_network_exporter[241668]: Dec 5 04:48:57 localhost nova_compute[229251]: 2025-12-05 09:48:57.836 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:48:57 localhost nova_compute[229251]: 2025-12-05 09:48:57.838 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:48:57 localhost nova_compute[229251]: 2025-12-05 09:48:57.838 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 5 04:48:57 localhost nova_compute[229251]: 2025-12-05 09:48:57.838 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:48:57 localhost nova_compute[229251]: 2025-12-05 09:48:57.839 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:48:57 localhost nova_compute[229251]: 2025-12-05 09:48:57.843 229255 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:48:57 localhost nova_compute[229251]: 2025-12-05 09:48:57.946 229255 WARNING amqp [-] Received method (60, 30) during closing channel 1. 
This method will be ignored#033[00m Dec 5 04:48:57 localhost nova_compute[229251]: 2025-12-05 09:48:57.948 229255 DEBUG oslo_concurrency.lockutils [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:48:57 localhost nova_compute[229251]: 2025-12-05 09:48:57.948 229255 DEBUG oslo_concurrency.lockutils [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:48:57 localhost nova_compute[229251]: 2025-12-05 09:48:57.948 229255 DEBUG oslo_concurrency.lockutils [None req-8db6cb8c-6041-42da-b4df-2fa0b37e88fb - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:48:58 localhost journal[202456]: End of file while reading data: Input/output error Dec 5 04:48:58 localhost systemd[1]: libpod-ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06.scope: Deactivated successfully. Dec 5 04:48:58 localhost systemd[1]: libpod-ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06.scope: Consumed 19.141s CPU time. Dec 5 04:48:58 localhost podman[280171]: 2025-12-05 09:48:58.335035623 +0000 UTC m=+2.617053808 container died ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 04:48:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06-userdata-shm.mount: Deactivated successfully. 
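[annotation] The three oslo_concurrency.lockutils DEBUG lines above (lockutils.py:312, :315, :333) are the library's standard acquire / acquired / release trace for a named lock; during this restart nova-compute appears to take "singleton_lock" once while tearing down its service singleton. A minimal sketch of the same pattern, assuming only that the oslo.concurrency package is installed (the lock name is copied from the log; the lock body is illustrative, not nova's actual cleanup code):

    # Reproduces the Acquiring/Acquired/Releasing trace seen above at
    # lockutils.py:312/315/333 when debug logging is enabled.
    from oslo_concurrency import lockutils

    with lockutils.lock("singleton_lock"):
        # Critical section. In the log above, nova-compute holds the
        # lock only briefly (acquire and release within the same ms).
        pass

With debug logging enabled, entering and leaving the context manager emits the same three lockutils lines as the nova_compute entries above.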
Dec 5 04:48:58 localhost podman[280171]: 2025-12-05 09:48:58.487649961 +0000 UTC m=+2.769668116 container cleanup ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=edpm) Dec 5 04:48:58 localhost podman[280171]: nova_compute Dec 5 04:48:58 localhost podman[280211]: error opening file `/run/crun/ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06/status`: No such file or directory Dec 5 04:48:58 localhost podman[280197]: 2025-12-05 09:48:58.586846236 +0000 UTC m=+0.070778463 container cleanup ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 5 04:48:58 localhost podman[280197]: nova_compute Dec 5 04:48:58 localhost 
systemd[1]: edpm_nova_compute.service: Deactivated successfully. Dec 5 04:48:58 localhost systemd[1]: Stopped nova_compute container. Dec 5 04:48:58 localhost systemd[1]: Starting nova_compute container... Dec 5 04:48:58 localhost systemd[1]: Started libcrun container. Dec 5 04:48:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 5 04:48:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Dec 5 04:48:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 5 04:48:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 5 04:48:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ee4a2275a3876d5c4522139af8d011adfeda71d27b37dfd1e03c455c8750af/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 04:48:58 localhost podman[280213]: 2025-12-05 09:48:58.726568221 +0000 UTC m=+0.109160598 container init ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 5 04:48:58 localhost podman[280213]: 2025-12-05 09:48:58.733202053 +0000 UTC m=+0.115794440 container start ffc12c8658d27d317197102bfc938181915e468a7af8a5db4e8d897f216c4e06 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125) Dec 5 04:48:58 localhost podman[280213]: nova_compute Dec 5 04:48:58 localhost nova_compute[280228]: + sudo -E kolla_set_configs Dec 5 04:48:58 localhost systemd[1]: Started nova_compute container. Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Validating config file Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Copying service configuration files Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Deleting /etc/nova/nova.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /etc/nova/nova.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to 
/etc/nova/nova.conf.d/nova-blank.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Deleting /etc/ceph Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Creating directory /etc/ceph Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /etc/ceph Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Deleting /usr/sbin/iscsiadm Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Writing out command to execute Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:48:58 localhost nova_compute[280228]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 5 04:48:58 localhost nova_compute[280228]: ++ cat /run_command Dec 5 04:48:58 localhost nova_compute[280228]: + CMD=nova-compute Dec 5 04:48:58 localhost nova_compute[280228]: + ARGS= Dec 5 04:48:58 localhost nova_compute[280228]: + sudo kolla_copy_cacerts Dec 5 04:48:58 localhost nova_compute[280228]: + [[ ! -n '' ]] Dec 5 04:48:58 localhost nova_compute[280228]: + . 
kolla_extend_start Dec 5 04:48:58 localhost nova_compute[280228]: + echo 'Running command: '\''nova-compute'\''' Dec 5 04:48:58 localhost nova_compute[280228]: Running command: 'nova-compute' Dec 5 04:48:58 localhost nova_compute[280228]: + umask 0022 Dec 5 04:48:58 localhost nova_compute[280228]: + exec nova-compute Dec 5 04:48:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:48:59 localhost podman[280257]: 2025-12-05 09:48:59.439618319 +0000 UTC m=+0.076635339 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6) Dec 5 04:48:59 localhost podman[280257]: 2025-12-05 09:48:59.451719677 +0000 UTC m=+0.088736727 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350) Dec 5 04:48:59 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:49:00 localhost nova_compute[280228]: 2025-12-05 09:49:00.366 280232 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 5 04:49:00 localhost nova_compute[280228]: 2025-12-05 09:49:00.367 280232 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 5 04:49:00 localhost nova_compute[280228]: 2025-12-05 09:49:00.367 280232 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 5 04:49:00 localhost nova_compute[280228]: 2025-12-05 09:49:00.367 280232 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Dec 5 04:49:00 localhost nova_compute[280228]: 2025-12-05 09:49:00.477 280232 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:49:00 localhost nova_compute[280228]: 2025-12-05 09:49:00.498 280232 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:49:00 localhost nova_compute[280228]: 2025-12-05 09:49:00.499 280232 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Dec 5 04:49:00 localhost nova_compute[280228]: 2025-12-05 09:49:00.882 280232 INFO nova.virt.driver [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Dec 5 04:49:00 localhost nova_compute[280228]: 2025-12-05 09:49:00.993 280232 INFO nova.compute.provider_config [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.001 280232 DEBUG oslo_concurrency.lockutils [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.002 280232 DEBUG oslo_concurrency.lockutils [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.002 280232 DEBUG oslo_concurrency.lockutils [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.003 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.003 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.003 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.003 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.003 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.004 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.004 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.004 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.004 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost 
nova_compute[280228]: 2025-12-05 09:49:01.004 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.005 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.005 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.005 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.005 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.005 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.006 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.006 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.006 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.006 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.006 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] console_host = np0005546419.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.007 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.007 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] cpu_allocation_ratio = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.007 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.007 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.007 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.008 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.008 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.008 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.008 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.008 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.009 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.009 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.009 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] enabled_ssl_apis = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.009 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.009 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.010 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.010 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.010 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.010 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] host = np0005546419.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.011 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.011 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.011 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.011 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.012 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.012 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.012 280232 DEBUG oslo_service.service [None 
req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.012 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.012 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.012 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.013 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.013 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.013 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.013 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.013 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.014 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.014 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.014 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.014 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.014 
280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.015 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.015 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.015 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.015 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.015 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.016 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.016 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.016 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.016 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.016 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.017 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.017 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.017 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.017 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.017 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.018 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.018 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.018 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.018 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.018 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.019 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.019 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] my_block_storage_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.019 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] my_ip = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.019 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] network_allocate_retries = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.019 - 09:49:01.054 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 (unscoped options) and cfg.py:2609 (group-scoped options):
    non_inheritable_image_properties = ['cache_in_nova', 'bittorrent']
    osapi_compute_listen = 0.0.0.0
    osapi_compute_listen_port = 8774
    osapi_compute_unique_server_name_scope =
    osapi_compute_workers = None
    password_length = 12
    periodic_enable = True
    periodic_fuzzy_delay = 60
    pointer_model = usbtablet
    preallocate_images = none
    publish_errors = False
    pybasedir = /usr/lib/python3.9/site-packages
    ram_allocation_ratio = None
    rate_limit_burst = 0
    rate_limit_except_level = CRITICAL
    rate_limit_interval = 0
    reboot_timeout = 0
    reclaim_instance_interval = 0
    record = None
    reimage_timeout_per_gb = 20
    report_interval = 10
    rescue_timeout = 0
    reserved_host_cpus = 0
    reserved_host_disk_mb = 0
    reserved_host_memory_mb = 512
    reserved_huge_pages = None
    resize_confirm_window = 0
    resize_fs_using_block_device = False
    resume_guests_state_on_host_boot = False
    rootwrap_config = /etc/nova/rootwrap.conf
    rpc_response_timeout = 60
    run_external_periodic_tasks = True
    running_deleted_instance_action = reap
    running_deleted_instance_poll_interval = 1800
    running_deleted_instance_timeout = 0
    scheduler_instance_sync_interval = 120
    service_down_time = 60
    servicegroup_driver = db
    shelved_offload_time = 0
    shelved_poll_interval = 3600
    shutdown_timeout = 60
    source_is_ipv6 = False
    ssl_only = False
    state_path = /var/lib/nova
    sync_power_state_interval = 600
    sync_power_state_pool_size = 1000
    syslog_log_facility = LOG_USER
    tempdir = None
    timeout_nbd = 10
    transport_url = ****
    update_resources_interval = 0
    use_cow_images = True
    use_eventlog = False
    use_journal = False
    use_json = False
    use_rootwrap_daemon = False
    use_stderr = False
    use_syslog = False
    vcpu_pin_set = None
    vif_plugging_is_fatal = True
    vif_plugging_timeout = 300
    virt_mkfs = []
    volume_usage_poll_interval = 0
    watch_log_file = False
    web = /usr/share/spice-html5
    oslo_concurrency.disable_process_locking = False
    oslo_concurrency.lock_path = /var/lib/nova/tmp
    oslo_messaging_metrics.metrics_buffer_size = 1000
    oslo_messaging_metrics.metrics_enabled = False
    oslo_messaging_metrics.metrics_process_name =
    oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock
    oslo_messaging_metrics.metrics_thread_stop_timeout = 10
    api.auth_strategy = keystone
    api.compute_link_prefix = None
    api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01
    api.dhcp_domain =
    api.enable_instance_password = True
    api.glance_link_prefix = None
    api.instance_list_cells_batch_fixed_size = 100
    api.instance_list_cells_batch_strategy = distributed
    api.instance_list_per_project_cells = False
    api.list_records_by_skipping_down_cells = True
    api.local_metadata_per_cell = False
    api.max_limit = 1000
    api.metadata_cache_expiration = 15
    api.neutron_default_tenant_id = default
    api.use_forwarded_for = False
    api.use_neutron_default_nets = False
    api.vendordata_dynamic_connect_timeout = 5
    api.vendordata_dynamic_failure_fatal = False
    api.vendordata_dynamic_read_timeout = 5
    api.vendordata_dynamic_ssl_certfile =
    api.vendordata_dynamic_targets = []
    api.vendordata_jsonfile_path = None
    api.vendordata_providers = ['StaticJSON']
    cache.backend = oslo_cache.dict
    cache.backend_argument = ****
    cache.config_prefix = cache.oslo
    cache.dead_timeout = 60.0
    cache.debug_cache_backend = False
    cache.enable_retry_client = False
    cache.enable_socket_keepalive = False
    cache.enabled = True
    cache.expiration_time = 600
    cache.hashclient_retry_attempts = 2
    cache.hashclient_retry_delay = 1.0
    cache.memcache_dead_retry = 300
    cache.memcache_password =
    cache.memcache_pool_connection_get_timeout = 10
    cache.memcache_pool_flush_on_reconnect = False
    cache.memcache_pool_maxsize = 10
    cache.memcache_pool_unused_timeout = 60
    cache.memcache_sasl_enabled = False
    cache.memcache_servers = ['localhost:11211']
    cache.memcache_socket_timeout = 1.0
    cache.memcache_username =
    cache.proxies = []
    cache.retry_attempts = 2
    cache.retry_delay = 0.0
    cache.socket_keepalive_count = 1
    cache.socket_keepalive_idle = 1
    cache.socket_keepalive_interval = 1
    cache.tls_allowed_ciphers = None
    cache.tls_cafile = None
    cache.tls_certfile = None
    cache.tls_enabled = False
    cache.tls_keyfile = None
    cinder.auth_section = None
    cinder.auth_type = password
    cinder.cafile = None
    cinder.catalog_info = volumev3:cinderv3:internalURL
    cinder.certfile = None
    cinder.collect_timing = False
    cinder.cross_az_attach = True
    cinder.debug = False
    cinder.endpoint_template = None
    cinder.http_retries = 3
    cinder.insecure = False
    cinder.keyfile = None
    cinder.os_region_name = None
    cinder.split_loggers = False
    cinder.timeout = None
    compute.consecutive_build_service_disable_threshold = 10
    compute.cpu_dedicated_set = None
    compute.cpu_shared_set = None
    compute.image_type_exclude_list = []
    compute.live_migration_wait_for_vif_plug = True
    compute.max_concurrent_disk_ops = 0
    compute.max_disk_devices_to_attach = -1
    compute.packing_host_numa_cells_allocation_strategy = False
    compute.provider_config_location = /etc/nova/provider_config/
    compute.resource_provider_association_refresh = 300
    compute.shutdown_retry_interval = 10
    compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse']
    conductor.workers = None
    console.allowed_origins = []
    console.ssl_ciphers = None
    console.ssl_minimum_version = default
    consoleauth.token_ttl = 600
    cyborg.cafile = None
    cyborg.certfile = None
    cyborg.collect_timing = False
    cyborg.connect_retries = None
    cyborg.connect_retry_delay = None
    cyborg.endpoint_override = None
    cyborg.insecure = False
    cyborg.keyfile = None
    cyborg.max_version = None
    cyborg.min_version = None
    cyborg.region_name = None
    cyborg.service_name = None
    cyborg.service_type = accelerator
    cyborg.split_loggers = False
    cyborg.status_code_retries = None
    cyborg.status_code_retry_delay = None
    cyborg.timeout = None
    cyborg.valid_interfaces = ['internal', 'public']
    cyborg.version = None
    database.backend = sqlalchemy
    database.connection = ****
    database.connection_debug = 0
    database.connection_parameters =
    database.connection_recycle_time = 3600
    database.connection_trace = False
    database.db_inc_retry_interval = True
    database.db_max_retries = 20
    database.db_max_retry_interval = 10
    database.db_retry_interval = 1
    database.max_overflow = 50
    database.max_pool_size = 5
    database.max_retries = 10
    database.mysql_enable_ndb = False
    database.mysql_sql_mode = TRADITIONAL
    database.mysql_wsrep_sync_wait = None
    database.pool_timeout = None
    database.retry_interval = 10
    database.slave_connection = ****
    database.sqlite_synchronous = True
    api_database.backend = sqlalchemy
    api_database.connection = ****
    api_database.connection_debug = 0
    api_database.connection_parameters =
    api_database.connection_recycle_time = 3600
    api_database.connection_trace = False
    api_database.db_inc_retry_interval = True
    api_database.db_max_retries = 20
    api_database.db_max_retry_interval = 10
    api_database.db_retry_interval = 1
    api_database.max_overflow = 50
    api_database.max_pool_size = 5
    api_database.max_retries = 10
    api_database.mysql_enable_ndb = False
    api_database.mysql_sql_mode = TRADITIONAL
    api_database.mysql_wsrep_sync_wait = None
    api_database.pool_timeout = None
    api_database.retry_interval = 10
    api_database.slave_connection = ****
    api_database.sqlite_synchronous = True
    devices.enabled_mdev_types = []
    ephemeral_storage_encryption.cipher = aes-xts-plain64
    ephemeral_storage_encryption.enabled = False
    ephemeral_storage_encryption.key_size = 512
    glance.api_servers = None
    glance.cafile = None
    glance.certfile = None
    glance.collect_timing = False
    glance.connect_retries = None
    glance.connect_retry_delay = None
    glance.debug = False
    glance.default_trusted_certificate_ids = []
    glance.enable_certificate_validation = False
    glance.enable_rbd_download = False
    glance.endpoint_override = None
    glance.insecure = False
    glance.keyfile = None
    glance.max_version = None
    glance.min_version = None
    glance.num_retries = 3
    glance.rbd_ceph_conf =
    glance.rbd_connect_timeout = 5
    glance.rbd_pool =
    glance.rbd_user =
    glance.region_name = regionOne
    glance.service_name = None
    glance.service_type = image
    glance.split_loggers = False
    glance.status_code_retries = None
    glance.status_code_retry_delay = None
    glance.timeout = None
    glance.valid_interfaces = ['internal']
    glance.verify_glance_signatures = False
    glance.version = None
    guestfs.debug = False
    hyperv.config_drive_cdrom = False
    hyperv.config_drive_inject_password = False
Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.054
280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.054 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.055 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.055 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.055 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.055 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.055 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.055 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.056 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.056 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.056 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.056 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.056 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.056 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.056 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.057 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.057 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.057 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.057 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.057 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.057 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.058 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.058 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.058 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.058 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.058 
280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.058 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.058 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.059 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.059 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.059 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.059 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.059 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.059 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.060 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.060 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.060 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.060 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.060 280232 DEBUG 
oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.060 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.060 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.061 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.061 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.061 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.061 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.061 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.061 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.062 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.062 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.062 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.062 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 
2025-12-05 09:49:01.062 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.062 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.062 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.063 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.063 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.063 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.063 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.063 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.063 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.063 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.064 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.064 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.064 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.retry_delay = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.064 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.064 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.064 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.064 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.065 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.065 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.065 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.065 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.065 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.065 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.065 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.066 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.066 280232 DEBUG oslo_service.service [None 
req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.066 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.066 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.066 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.066 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.066 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.067 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.067 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.067 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.067 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.067 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.067 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.067 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.068 280232 DEBUG oslo_service.service [None 
req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.068 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.068 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.068 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.068 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.068 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.068 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.069 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.069 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.069 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.069 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.069 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.069 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.069 280232 DEBUG oslo_service.service 
[None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.070 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.070 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.070 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.070 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.070 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.070 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.070 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.071 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.071 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.071 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.071 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.071 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 
09:49:01.071 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.071 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.072 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.072 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.072 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.072 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.072 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.072 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.072 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.073 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.073 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.073 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.073 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.gid_maps = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.073 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.073 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.073 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.074 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.074 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.074 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.074 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.074 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.074 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.075 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.075 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.075 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.075 280232 DEBUG 
oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.075 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.075 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.076 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.076 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.076 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.076 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.076 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.076 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.076 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.076 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.077 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.077 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] 
libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.077 280232 WARNING oslo_config.cfg [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Dec 5 04:49:01 localhost nova_compute[280228]: live_migration_uri is deprecated for removal in favor of two other options that Dec 5 04:49:01 localhost nova_compute[280228]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Dec 5 04:49:01 localhost nova_compute[280228]: and ``live_migration_inbound_addr`` respectively. Dec 5 04:49:01 localhost nova_compute[280228]: ). Its value may be silently ignored in the future.#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.077 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.077 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.077 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.078 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.078 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.078 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.078 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.078 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.078 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.079 280232 DEBUG oslo_service.service [None 
req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.079 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.079 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.079 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.079 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.079 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.079 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.080 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.080 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.080 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.rbd_secret_uuid = 79feddb1-4bfc-557f-83b9-0d57c9f66c1b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.080 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.080 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.080 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.080 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.081 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.081 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.081 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.081 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.081 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.081 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.081 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.082 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.082 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.082 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.082 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.082 280232 DEBUG 
oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.082 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.082 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.083 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.083 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.083 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.083 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.083 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.083 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.083 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.084 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.084 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.084 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 
localhost nova_compute[280228]: 2025-12-05 09:49:01.084 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.084 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.084 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.084 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.085 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.085 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.085 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.085 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.085 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.085 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.085 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.086 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.086 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.default_floating_pool = nova log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.086 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.086 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.086 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.086 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.086 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.087 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.087 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.087 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.087 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.087 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.087 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.087 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.088 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.service_name = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.088 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.088 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.088 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.088 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.088 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.088 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.089 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.089 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.089 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.089 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.089 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.089 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 
09:49:01.089 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.090 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.090 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.090 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.090 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.090 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.090 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.090 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.091 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.091 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.091 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.091 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.091 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 
04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.091 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.091 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.092 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.092 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.092 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.092 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.092 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.092 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.092 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.093 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.093 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.093 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.093 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.region_name = regionOne log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.093 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.093 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.093 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.094 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.094 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.094 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.094 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.094 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.094 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.094 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.095 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.095 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.095 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] 
placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.095 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.095 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.095 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.095 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.096 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.096 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.096 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.096 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.096 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.096 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.096 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.097 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.097 280232 DEBUG oslo_service.service [None 
req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.097 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.097 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.097 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.097 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.098 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.098 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.098 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.098 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.098 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.098 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.098 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.099 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] scheduler.query_placement_for_image_type_support = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.099 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.099 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.099 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.099 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.099 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.099 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.100 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.100 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.100 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.100 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.100 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 
2025-12-05 09:49:01.100 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.100 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.100 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.101 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.101 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.101 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.101 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.101 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.101 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.101 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.102 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.102 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.102 280232 
DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.102 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.102 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.102 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.102 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.103 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.103 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.103 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.103 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.103 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.103 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.104 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.104 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] 
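[To work with a dump like this after the fact, the pairs can be recovered mechanically. A stdlib-only sketch; the regex is an assumption keyed to the exact record shape seen here ("group.option = value log_opt_values ..."), and the trailing "#033[00m" on each record is journald's escaping of the ANSI reset sequence, so it needs no special handling.]

import re
from collections import defaultdict

# Matches "<group>.<option> = <value> log_opt_values"; the lazy value group
# stops at the "log_opt_values" marker that closes every record in this dump.
OPT_RE = re.compile(
    r'(?P<group>[A-Za-z0-9_]+)\.(?P<option>[A-Za-z0-9_]+)\s*=\s*'
    r'(?P<value>.*?)\s+log_opt_values\b'
)

def parse_opt_dump(lines):
    """Return {group: {option: raw value string}} from journal text."""
    opts = defaultdict(dict)
    for line in lines:
        for m in OPT_RE.finditer(line):
            opts[m.group('group')][m.group('option')] = m.group('value')
    return opts

sample = ('Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 '
          '09:49:01.093 280232 DEBUG oslo_service.service [None req-x - - '
          '- - - -] placement.region_name = regionOne log_opt_values '
          '/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m')
print(dict(parse_opt_dump([sample])))
# -> {'placement': {'region_name': 'regionOne'}}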
serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.104 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.104 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.104 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.104 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.104 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.104 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.105 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.105 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.105 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.105 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.105 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.105 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.105 280232 DEBUG oslo_service.service [None 
req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.106 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.106 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.106 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.106 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.106 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.106 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.107 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.107 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.107 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.107 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.107 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.107 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost 
nova_compute[280228]: 2025-12-05 09:49:01.107 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.107 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.108 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.108 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.108 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.108 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.108 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.108 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.109 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.109 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.109 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.109 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.109 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] 
vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.109 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.109 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.110 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.110 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.110 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.110 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.110 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.110 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.110 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.111 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.111 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.111 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.111 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] 
vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.111 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.111 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.111 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.112 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.112 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.112 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.112 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.112 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.112 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.113 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.113 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.113 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.113 280232 DEBUG oslo_service.service [None 
req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.113 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.113 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.113 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.114 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vnc.server_proxyclient_address = 192.168.122.106 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.114 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.114 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.114 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.114 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.114 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.115 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.115 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.115 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] 
workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.115 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.115 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.115 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.115 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.116 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.116 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.116 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.116 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.116 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.116 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.116 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.117 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] 
workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.117 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.117 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.117 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.117 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.117 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.117 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.118 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.118 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.118 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.118 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.118 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.118 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost 
nova_compute[280228]: 2025-12-05 09:49:01.119 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.119 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.119 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.119 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.119 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.119 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.119 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.120 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.120 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.120 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.120 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.120 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.120 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 
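The wsgi.wsgi_log_format value just logged is an old-style Python %-format string that the oslo.service WSGI server fills in per request. A quick rendering with made-up request fields, just to show the resulting access-log shape:

    # Format string exactly as logged in wsgi.wsgi_log_format:
    fmt = ('%(client_ip)s "%(request_line)s" status: %(status_code)s '
           'len: %(body_length)s time: %(wall_seconds).7f')

    # Hypothetical request fields, purely illustrative:
    fields = {
        'client_ip': '192.168.122.10',
        'request_line': 'GET /v2.1/servers HTTP/1.1',
        'status_code': 200,
        'body_length': 1542,
        'wall_seconds': 0.0321457,
    }
    print(fmt % fields)
    # -> 192.168.122.10 "GET /v2.1/servers HTTP/1.1" status: 200 len: 1542 time: 0.0321457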
- - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.120 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.121 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.121 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.121 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.121 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.121 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.121 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.122 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.122 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.122 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.122 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.122 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 
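With oslo_policy.enforce_new_defaults = True and enforce_scope = True as logged above, this deployment runs policy checks against the newer secure-RBAC defaults with token-scope validation, taking overrides from policy.yaml and the policy.d directory. A minimal sketch of how a service consults these options through oslo.policy; the rule name and credential dict below are hypothetical, and the result depends entirely on which policy files and registered defaults are actually present:

    from oslo_config import cfg
    from oslo_policy import policy

    conf = cfg.ConfigOpts()
    conf([])

    # Enforcer registers and reads the [oslo_policy] options shown above
    # (policy_file, policy_dirs, enforce_scope, enforce_new_defaults).
    enforcer = policy.Enforcer(conf)

    # Hypothetical check: would this project-scoped token pass the rule?
    creds = {'project_id': 'demo', 'roles': ['member']}
    allowed = enforcer.enforce('os_compute_api:servers:index',
                               {}, creds, do_raise=False)
    print(allowed)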
1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.122 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.122 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.123 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.123 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.123 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.123 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.123 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.123 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.123 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.124 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.124 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.125 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] 
oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.125 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.125 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.125 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.125 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.125 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.125 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.125 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.126 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.126 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.126 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.126 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.126 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] 
oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.126 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.126 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.127 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.127 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.127 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.127 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.127 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.127 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.128 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.128 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.128 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.128 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.certfile = None log_opt_values 
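Option records in this dump all share one shape: syslog prefix, oslo request context, then "group.option = value log_opt_values <call site>". That regularity makes a journal capture like this easy to mine back into a dict, for example to diff the effective config against another compute host. A small sketch; the regex is tailored to exactly this format and also tolerates empty values such as the ssl_ca_file / ssl_cert_file entries above:

    import re

    # Matches "... - -] <group>.<option> = <value> log_opt_values /usr/lib/..."
    # The optional space before "log_opt_values" admits options logged
    # with an empty value (e.g. "oslo_messaging_rabbit.ssl_cert_file = ").
    OPT_RE = re.compile(
        r'-\]\s+(?P<group>\w+)\.(?P<opt>\w+) = (?P<value>.*?) ?log_opt_values /')

    def parse_opts(journal_text):
        """Recover {'group.option': 'value'} pairs from a log_opt_values dump."""
        return {
            '{}.{}'.format(m['group'], m['opt']): m['value']
            for m in OPT_RE.finditer(journal_text)
        }

    sample = ('DEBUG oslo_service.service [None req-x - - - - - -] '
              'oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values '
              '/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609')
    print(parse_opts(sample))  # {'oslo_messaging_rabbit.heartbeat_rate': '2'}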
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.128 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.128 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.128 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.129 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.129 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.129 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.129 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.129 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.129 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.129 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.129 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.130 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.130 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - 
-] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.130 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.130 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.130 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.130 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.131 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.131 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.131 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.131 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.131 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.131 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.131 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.131 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.132 280232 DEBUG oslo_service.service 
[None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.132 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.132 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.132 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.132 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.132 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.132 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.133 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.133 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.133 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.133 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.133 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.133 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 
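The [oslo_limit] block above carries a full keystoneauth1-style credential set: password auth against http://keystone-internal.openstack.svc:5000 as user nova (domain Default) with system_scope = all, which is what lets the service read unified limits system-wide. A sketch of the session those options describe, assuming the keystoneauth1 library; the password is masked as **** in the log, so a placeholder stands in:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url='http://keystone-internal.openstack.svc:5000',
        username='nova',
        password='REDACTED',   # logged only as **** above
        user_domain_name='Default',
        system_scope='all',    # system-scoped token, per oslo_limit.system_scope
    )
    sess = session.Session(auth=auth)

    # With a reachable keystone, a system-scoped token can then be requested:
    # token = sess.get_token()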
localhost nova_compute[280228]: 2025-12-05 09:49:01.133 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.134 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.134 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.134 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.134 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.134 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.134 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.134 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.135 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.135 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.135 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.135 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost 
nova_compute[280228]: 2025-12-05 09:49:01.135 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.135 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.135 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.136 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.136 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.136 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.136 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.136 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.136 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.136 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.137 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.137 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.137 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - 
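os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 together with ovsdb_interface = native means os-vif speaks the OVSDB protocol directly to the local ovsdb-server socket rather than shelling out to ovs-vsctl (the 120 s ovs_vsctl_timeout still bounds any such calls). A quick reachability probe for that endpoint; this only confirms a TCP listener, not a healthy OVSDB service:

    import socket

    # Endpoint exactly as logged in os_vif_ovs.ovsdb_connection:
    proto, host, port = 'tcp:127.0.0.1:6640'.split(':')
    assert proto == 'tcp'

    try:
        # A real client would speak JSON-RPC (OVSDB, RFC 7047) here;
        # we only verify that something accepts the connection.
        with socket.create_connection((host, int(port)), timeout=3):
            print('ovsdb-server reachable at %s:%s' % (host, port))
    except OSError as exc:
        print('ovsdb-server not reachable: %s' % exc)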
- - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.137 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.137 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.137 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.138 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.138 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.138 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.138 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.138 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.138 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.138 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.138 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.139 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost 
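The privsep capabilities values in this dump are numeric Linux capability indexes from <linux/capability.h>: the linux-bridge VIF plugin keeps [12], the OVS plugin [12, 1], os-brick [21], and nova_sys_admin [0, 1, 2, 3, 12, 21]. Decoding them, with the mapping written out by hand for just the numbers that appear here:

    # Capability numbers per <linux/capability.h>; only the values
    # present in this config dump are listed.
    CAPS = {
        0: 'CAP_CHOWN',
        1: 'CAP_DAC_OVERRIDE',
        2: 'CAP_DAC_READ_SEARCH',
        3: 'CAP_FOWNER',
        12: 'CAP_NET_ADMIN',
        21: 'CAP_SYS_ADMIN',
    }

    contexts = {
        'vif_plug_linux_bridge_privileged': [12],
        'vif_plug_ovs_privileged': [12, 1],
        'privsep_osbrick': [21],
        'nova_sys_admin': [0, 1, 2, 3, 12, 21],
    }
    for name, caps in contexts.items():
        print(name, '->', ', '.join(CAPS[c] for c in caps))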
nova_compute[280228]: 2025-12-05 09:49:01.139 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.139 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.139 280232 DEBUG oslo_service.service [None req-3b7296e6-c950-41a4-818c-a406a6fa5dc3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.140 280232 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.162 280232 INFO nova.virt.node [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Determined node identity 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from /var/lib/nova/compute_id#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.163 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.164 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.164 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.164 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.175 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.177 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.180 280232 INFO nova.virt.libvirt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Connection event '1' reason 'None'#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.188 280232 INFO nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Libvirt host capabilities [capabilities XML garbled by log extraction (markup stripped, one empty journal record per element); recoverable fields: host uuid 2b745fc2-5ba6-425d-9fc8-d9117ea29cbc; arch x86_64; host CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory 16116612 KiB (4029153 4-KiB pages); secmodels selinux (baselabels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107); hvm guests at wordsize 32 and 64 via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (alias pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.8.0 (alias q35)]#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.196 280232 DEBUG nova.virt.libvirt.volume.mount [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.202 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.206 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: [domain capabilities XML garbled by log extraction; recoverable fields: path /usr/libexec/qemu-kvm, domain type kvm, machine pc-i440fx-rhel7.6.0, arch i686; loader /usr/share/OVMF/OVMF_CODE.secboot.fd supporting types rom and pflash; host-model CPU EPYC-Rome, vendor AMD; named CPU model list beginning 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1; the capture is truncated at this point]
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Cascadelake-Server-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Cascadelake-Server-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Cascadelake-Server-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Cascadelake-Server-v5 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Conroe Dec 5 04:49:01 localhost nova_compute[280228]: Conroe-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Cooperlake Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Cooperlake-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Cooperlake-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Denverton Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Denverton-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Denverton-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Denverton-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dhyana Dec 5 04:49:01 localhost nova_compute[280228]: Dhyana-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dhyana-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: EPYC Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-Genoa Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-Genoa-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-IBPB Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-Milan Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-Milan-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-Milan-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-Rome Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-Rome-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-Rome-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-Rome-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-Rome-v4 Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-v1 Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-v2 Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: EPYC-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: GraniteRapids Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: GraniteRapids-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: GraniteRapids-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 
localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Haswell Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Haswell-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Haswell-noTSX Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: 
Haswell-noTSX-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Haswell-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Haswell-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Haswell-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Haswell-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-noTSX Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v5 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v6 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v7 
Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: IvyBridge Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: IvyBridge-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: IvyBridge-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: IvyBridge-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: KnightsMill Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: KnightsMill-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Nehalem Dec 5 04:49:01 localhost nova_compute[280228]: Nehalem-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Nehalem-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Nehalem-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G1 Dec 5 04:49:01 localhost 
nova_compute[280228]: Opteron_G1-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G2 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G2-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G3 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G3-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G4-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G5 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G5-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Penryn Dec 5 04:49:01 localhost nova_compute[280228]: Penryn-v1 Dec 5 04:49:01 localhost nova_compute[280228]: SandyBridge Dec 5 04:49:01 localhost nova_compute[280228]: SandyBridge-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: SandyBridge-v1 Dec 5 04:49:01 localhost nova_compute[280228]: SandyBridge-v2 Dec 5 04:49:01 localhost nova_compute[280228]: SapphireRapids Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SapphireRapids-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SapphireRapids-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 
localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SapphireRapids-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SierraForest Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 
04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SierraForest-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-noTSX-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 
localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-noTSX-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 
04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v5 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Westmere Dec 5 04:49:01 localhost nova_compute[280228]: Westmere-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Westmere-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Westmere-v2 Dec 5 04:49:01 localhost nova_compute[280228]: athlon Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: athlon-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: core2duo Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: core2duo-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: coreduo Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: coreduo-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: kvm32 Dec 5 04:49:01 localhost nova_compute[280228]: kvm32-v1 Dec 5 04:49:01 localhost nova_compute[280228]: kvm64 Dec 5 04:49:01 localhost nova_compute[280228]: kvm64-v1 Dec 5 04:49:01 localhost nova_compute[280228]: n270 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: n270-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: pentium Dec 5 04:49:01 localhost nova_compute[280228]: pentium-v1 Dec 5 04:49:01 localhost nova_compute[280228]: 
pentium2 Dec 5 04:49:01 localhost nova_compute[280228]: pentium2-v1 Dec 5 04:49:01 localhost nova_compute[280228]: pentium3 Dec 5 04:49:01 localhost nova_compute[280228]: pentium3-v1 Dec 5 04:49:01 localhost nova_compute[280228]: phenom Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: phenom-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: qemu32 Dec 5 04:49:01 localhost nova_compute[280228]: qemu32-v1 Dec 5 04:49:01 localhost nova_compute[280228]: qemu64 Dec 5 04:49:01 localhost nova_compute[280228]: qemu64-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: file Dec 5 04:49:01 localhost nova_compute[280228]: anonymous Dec 5 04:49:01 localhost nova_compute[280228]: memfd Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: disk Dec 5 04:49:01 localhost nova_compute[280228]: cdrom Dec 5 04:49:01 localhost nova_compute[280228]: floppy Dec 5 04:49:01 localhost nova_compute[280228]: lun Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: ide Dec 5 04:49:01 localhost nova_compute[280228]: fdc Dec 5 04:49:01 localhost nova_compute[280228]: scsi Dec 5 04:49:01 localhost nova_compute[280228]: virtio Dec 5 04:49:01 localhost nova_compute[280228]: usb Dec 5 04:49:01 localhost nova_compute[280228]: sata Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: virtio Dec 5 04:49:01 localhost nova_compute[280228]: virtio-transitional Dec 5 04:49:01 localhost nova_compute[280228]: virtio-non-transitional Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: vnc Dec 5 04:49:01 localhost nova_compute[280228]: egl-headless Dec 5 04:49:01 localhost nova_compute[280228]: dbus Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: subsystem Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: default Dec 5 04:49:01 localhost nova_compute[280228]: mandatory Dec 5 04:49:01 localhost nova_compute[280228]: requisite Dec 5 04:49:01 localhost nova_compute[280228]: optional Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: usb Dec 5 04:49:01 localhost nova_compute[280228]: pci Dec 5 04:49:01 
localhost nova_compute[280228]: scsi Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: virtio Dec 5 04:49:01 localhost nova_compute[280228]: virtio-transitional Dec 5 04:49:01 localhost nova_compute[280228]: virtio-non-transitional Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: random Dec 5 04:49:01 localhost nova_compute[280228]: egd Dec 5 04:49:01 localhost nova_compute[280228]: builtin Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: path Dec 5 04:49:01 localhost nova_compute[280228]: handle Dec 5 04:49:01 localhost nova_compute[280228]: virtiofs Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: tpm-tis Dec 5 04:49:01 localhost nova_compute[280228]: tpm-crb Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: emulator Dec 5 04:49:01 localhost nova_compute[280228]: external Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: 2.0 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: usb Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: pty Dec 5 04:49:01 localhost nova_compute[280228]: unix Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: qemu Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: builtin Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: default Dec 5 04:49:01 localhost nova_compute[280228]: passt Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: isa Dec 5 04:49:01 localhost nova_compute[280228]: hyperv Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: 
null Dec 5 04:49:01 localhost nova_compute[280228]: vc Dec 5 04:49:01 localhost nova_compute[280228]: pty Dec 5 04:49:01 localhost nova_compute[280228]: dev Dec 5 04:49:01 localhost nova_compute[280228]: file Dec 5 04:49:01 localhost nova_compute[280228]: pipe Dec 5 04:49:01 localhost nova_compute[280228]: stdio Dec 5 04:49:01 localhost nova_compute[280228]: udp Dec 5 04:49:01 localhost nova_compute[280228]: tcp Dec 5 04:49:01 localhost nova_compute[280228]: unix Dec 5 04:49:01 localhost nova_compute[280228]: qemu-vdagent Dec 5 04:49:01 localhost nova_compute[280228]: dbus Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: relaxed Dec 5 04:49:01 localhost nova_compute[280228]: vapic Dec 5 04:49:01 localhost nova_compute[280228]: spinlocks Dec 5 04:49:01 localhost nova_compute[280228]: vpindex Dec 5 04:49:01 localhost nova_compute[280228]: runtime Dec 5 04:49:01 localhost nova_compute[280228]: synic Dec 5 04:49:01 localhost nova_compute[280228]: stimer Dec 5 04:49:01 localhost nova_compute[280228]: reset Dec 5 04:49:01 localhost nova_compute[280228]: vendor_id Dec 5 04:49:01 localhost nova_compute[280228]: frequencies Dec 5 04:49:01 localhost nova_compute[280228]: reenlightenment Dec 5 04:49:01 localhost nova_compute[280228]: tlbflush Dec 5 04:49:01 localhost nova_compute[280228]: ipi Dec 5 04:49:01 localhost nova_compute[280228]: avic Dec 5 04:49:01 localhost nova_compute[280228]: emsr_bitmap Dec 5 04:49:01 localhost nova_compute[280228]: xmm_input Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: 4095 Dec 5 04:49:01 localhost nova_compute[280228]: on Dec 5 04:49:01 localhost nova_compute[280228]: off Dec 5 04:49:01 localhost nova_compute[280228]: off Dec 5 04:49:01 localhost nova_compute[280228]: Linux KVM Hv Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: tdx Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.220 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: /usr/libexec/qemu-kvm Dec 5 04:49:01 localhost nova_compute[280228]: kvm Dec 5 04:49:01 localhost 
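The dump that follows is the libvirt domain-capabilities document that nova-compute fetches once per (arch, machine type) pair via _get_domain_capabilities in nova/virt/libvirt/host.py. A minimal sketch of issuing the same query directly, assuming the libvirt-python bindings are installed and a local qemu:///system socket is reachable (neither assumption is shown in the log itself):

    # Fetch the same per-arch/machine-type domain capabilities document that
    # nova.virt.libvirt.host logs here, then pull out the named CPU models.
    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        # Parameters mirror the DEBUG line above: QEMU binary, guest arch
        # i686, machine type q35, virt type kvm.
        caps_xml = conn.getDomainCapabilities(
            "/usr/libexec/qemu-kvm", "i686", "q35", "kvm", 0
        )
    finally:
        conn.close()

    root = ET.fromstring(caps_xml)
    # The model names scattered through these log records are <model>
    # elements under the "custom" CPU mode; usable='yes' marks models the
    # host can actually run.
    usable = [
        m.text
        for m in root.findall("./cpu/mode[@name='custom']/model")
        if m.get("usable") == "yes"
    ]
    print(len(usable), "usable CPU models, e.g.", usable[:5])

The same document can also be dumped from the CLI with virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine q35 --virttype kvm, which is a convenient way to see the XML that the log pipeline flattened here.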
Dec 5 04:49:01 localhost nova_compute[280228]: [libvirt domain capabilities XML for this query, element markup stripped in log capture; recoverable text nodes in order: emulator /usr/libexec/qemu-kvm; domain type kvm; machine pc-q35-rhel9.8.0; arch i686; loader /usr/share/OVMF/OVMF_CODE.secboot.fd with types rom, pflash; yes, no, no; on, off; on, off; host-model EPYC-Rome, vendor AMD; CPU models 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, (model list continues in the following records)]
localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SapphireRapids-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SierraForest Dec 5 
04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SierraForest-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-noTSX-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: 
Skylake-Client-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-noTSX-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 
localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v5 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge-v1 Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Westmere Dec 5 04:49:01 localhost nova_compute[280228]: Westmere-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Westmere-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Westmere-v2 Dec 5 04:49:01 localhost nova_compute[280228]: athlon Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: athlon-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: core2duo Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: core2duo-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: coreduo Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: coreduo-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: kvm32 Dec 5 04:49:01 localhost nova_compute[280228]: kvm32-v1 Dec 5 04:49:01 localhost 
nova_compute[280228]: kvm64 Dec 5 04:49:01 localhost nova_compute[280228]: kvm64-v1 Dec 5 04:49:01 localhost nova_compute[280228]: n270 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: n270-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: pentium Dec 5 04:49:01 localhost nova_compute[280228]: pentium-v1 Dec 5 04:49:01 localhost nova_compute[280228]: pentium2 Dec 5 04:49:01 localhost nova_compute[280228]: pentium2-v1 Dec 5 04:49:01 localhost nova_compute[280228]: pentium3 Dec 5 04:49:01 localhost nova_compute[280228]: pentium3-v1 Dec 5 04:49:01 localhost nova_compute[280228]: phenom Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: phenom-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: qemu32 Dec 5 04:49:01 localhost nova_compute[280228]: qemu32-v1 Dec 5 04:49:01 localhost nova_compute[280228]: qemu64 Dec 5 04:49:01 localhost nova_compute[280228]: qemu64-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: file Dec 5 04:49:01 localhost nova_compute[280228]: anonymous Dec 5 04:49:01 localhost nova_compute[280228]: memfd Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: disk Dec 5 04:49:01 localhost nova_compute[280228]: cdrom Dec 5 04:49:01 localhost nova_compute[280228]: floppy Dec 5 04:49:01 localhost nova_compute[280228]: lun Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: fdc Dec 5 04:49:01 localhost nova_compute[280228]: scsi Dec 5 04:49:01 localhost nova_compute[280228]: virtio Dec 5 04:49:01 localhost nova_compute[280228]: usb Dec 5 04:49:01 localhost nova_compute[280228]: sata Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: virtio Dec 5 04:49:01 localhost nova_compute[280228]: virtio-transitional Dec 5 04:49:01 localhost nova_compute[280228]: virtio-non-transitional Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: vnc Dec 5 04:49:01 localhost nova_compute[280228]: egl-headless Dec 5 04:49:01 localhost nova_compute[280228]: dbus Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 
04:49:01 localhost nova_compute[280228]: subsystem Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: default Dec 5 04:49:01 localhost nova_compute[280228]: mandatory Dec 5 04:49:01 localhost nova_compute[280228]: requisite Dec 5 04:49:01 localhost nova_compute[280228]: optional Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: usb Dec 5 04:49:01 localhost nova_compute[280228]: pci Dec 5 04:49:01 localhost nova_compute[280228]: scsi Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: virtio Dec 5 04:49:01 localhost nova_compute[280228]: virtio-transitional Dec 5 04:49:01 localhost nova_compute[280228]: virtio-non-transitional Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: random Dec 5 04:49:01 localhost nova_compute[280228]: egd Dec 5 04:49:01 localhost nova_compute[280228]: builtin Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: path Dec 5 04:49:01 localhost nova_compute[280228]: handle Dec 5 04:49:01 localhost nova_compute[280228]: virtiofs Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: tpm-tis Dec 5 04:49:01 localhost nova_compute[280228]: tpm-crb Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: emulator Dec 5 04:49:01 localhost nova_compute[280228]: external Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: 2.0 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: usb Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: pty Dec 5 04:49:01 localhost nova_compute[280228]: unix Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: qemu Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: builtin Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: default 
Dec 5 04:49:01 localhost nova_compute[280228]: passt Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: isa Dec 5 04:49:01 localhost nova_compute[280228]: hyperv Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: null Dec 5 04:49:01 localhost nova_compute[280228]: vc Dec 5 04:49:01 localhost nova_compute[280228]: pty Dec 5 04:49:01 localhost nova_compute[280228]: dev Dec 5 04:49:01 localhost nova_compute[280228]: file Dec 5 04:49:01 localhost nova_compute[280228]: pipe Dec 5 04:49:01 localhost nova_compute[280228]: stdio Dec 5 04:49:01 localhost nova_compute[280228]: udp Dec 5 04:49:01 localhost nova_compute[280228]: tcp Dec 5 04:49:01 localhost nova_compute[280228]: unix Dec 5 04:49:01 localhost nova_compute[280228]: qemu-vdagent Dec 5 04:49:01 localhost nova_compute[280228]: dbus Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: relaxed Dec 5 04:49:01 localhost nova_compute[280228]: vapic Dec 5 04:49:01 localhost nova_compute[280228]: spinlocks Dec 5 04:49:01 localhost nova_compute[280228]: vpindex Dec 5 04:49:01 localhost nova_compute[280228]: runtime Dec 5 04:49:01 localhost nova_compute[280228]: synic Dec 5 04:49:01 localhost nova_compute[280228]: stimer Dec 5 04:49:01 localhost nova_compute[280228]: reset Dec 5 04:49:01 localhost nova_compute[280228]: vendor_id Dec 5 04:49:01 localhost nova_compute[280228]: frequencies Dec 5 04:49:01 localhost nova_compute[280228]: reenlightenment Dec 5 04:49:01 localhost nova_compute[280228]: tlbflush Dec 5 04:49:01 localhost nova_compute[280228]: ipi Dec 5 04:49:01 localhost nova_compute[280228]: avic Dec 5 04:49:01 localhost nova_compute[280228]: emsr_bitmap Dec 5 04:49:01 localhost nova_compute[280228]: xmm_input Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: 4095 Dec 5 04:49:01 localhost nova_compute[280228]: on Dec 5 04:49:01 localhost nova_compute[280228]: off Dec 5 04:49:01 localhost nova_compute[280228]: off Dec 5 04:49:01 localhost nova_compute[280228]: Linux KVM Hv Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: tdx Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 
Dec 5 04:49:01 localhost nova_compute[280228]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.234 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.238 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 5 04:49:01 localhost nova_compute[280228]: [<domainCapabilities> XML, markup stripped as above; recoverable header values: emulator /usr/libexec/qemu-kvm; domain type kvm; machine pc-i440fx-rhel7.6.0; arch x86_64; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd with types rom, pflash, readonly yes/no, secure no; two on/off enum pairs; host-model CPU EPYC-Rome, vendor AMD]
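Note: these dumps are the raw output of nova's _get_domain_capabilities() call into libvirt, logged once per probed machine type. A minimal sketch of the equivalent query via the libvirt Python bindings, for reproducing the dump outside nova (assumptions: libvirt-python is installed and qemu:///system is reachable on this host):

    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        for machine in ("pc", "q35"):  # the machine types nova probes above
            # Returns the full <domainCapabilities> XML whose stripped text
            # content appears in the surrounding log lines.
            xml = conn.getDomainCapabilities(
                emulatorbin="/usr/libexec/qemu-kvm",
                arch="x86_64",
                machine=machine,
                virttype="kvm",
            )
            print(xml)
    finally:
        conn.close()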
Dec 5 04:49:01 localhost nova_compute[280228]: [CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, (list continues)]
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-noTSX Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v5 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: 
Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v6 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Icelake-Server-v7 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: IvyBridge Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: IvyBridge-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: IvyBridge-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: IvyBridge-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: KnightsMill Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: KnightsMill-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Nehalem Dec 5 04:49:01 localhost nova_compute[280228]: Nehalem-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Nehalem-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Nehalem-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G1 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G1-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G2 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G2-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G3 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G3-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G4-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G5 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Opteron_G5-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Penryn Dec 5 04:49:01 localhost nova_compute[280228]: Penryn-v1 Dec 5 04:49:01 localhost nova_compute[280228]: SandyBridge Dec 5 04:49:01 localhost nova_compute[280228]: SandyBridge-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: SandyBridge-v1 Dec 5 04:49:01 localhost nova_compute[280228]: SandyBridge-v2 Dec 5 04:49:01 localhost nova_compute[280228]: SapphireRapids Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 
5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SapphireRapids-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 
localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SapphireRapids-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SapphireRapids-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 
04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SierraForest Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: SierraForest-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-noTSX-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Client-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-noTSX-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Skylake-Server-v5 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost 
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge-v2 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge-v3 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Snowridge-v4 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Westmere Dec 5 04:49:01 localhost nova_compute[280228]: Westmere-IBRS Dec 5 04:49:01 localhost nova_compute[280228]: Westmere-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Westmere-v2 Dec 5 04:49:01 localhost nova_compute[280228]: athlon Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: athlon-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 
localhost nova_compute[280228]: core2duo Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: core2duo-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: coreduo Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: coreduo-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: kvm32 Dec 5 04:49:01 localhost nova_compute[280228]: kvm32-v1 Dec 5 04:49:01 localhost nova_compute[280228]: kvm64 Dec 5 04:49:01 localhost nova_compute[280228]: kvm64-v1 Dec 5 04:49:01 localhost nova_compute[280228]: n270 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: n270-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: pentium Dec 5 04:49:01 localhost nova_compute[280228]: pentium-v1 Dec 5 04:49:01 localhost nova_compute[280228]: pentium2 Dec 5 04:49:01 localhost nova_compute[280228]: pentium2-v1 Dec 5 04:49:01 localhost nova_compute[280228]: pentium3 Dec 5 04:49:01 localhost nova_compute[280228]: pentium3-v1 Dec 5 04:49:01 localhost nova_compute[280228]: phenom Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: phenom-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: qemu32 Dec 5 04:49:01 localhost nova_compute[280228]: qemu32-v1 Dec 5 04:49:01 localhost nova_compute[280228]: qemu64 Dec 5 04:49:01 localhost nova_compute[280228]: qemu64-v1 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: file Dec 5 04:49:01 localhost nova_compute[280228]: anonymous Dec 5 04:49:01 localhost nova_compute[280228]: memfd Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: disk Dec 5 04:49:01 localhost nova_compute[280228]: cdrom Dec 5 04:49:01 localhost nova_compute[280228]: floppy Dec 5 04:49:01 localhost nova_compute[280228]: lun Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: ide Dec 5 04:49:01 localhost nova_compute[280228]: fdc Dec 5 04:49:01 localhost nova_compute[280228]: scsi Dec 5 04:49:01 localhost nova_compute[280228]: virtio Dec 5 04:49:01 
localhost nova_compute[280228]: usb Dec 5 04:49:01 localhost nova_compute[280228]: sata Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: virtio Dec 5 04:49:01 localhost nova_compute[280228]: virtio-transitional Dec 5 04:49:01 localhost nova_compute[280228]: virtio-non-transitional Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: vnc Dec 5 04:49:01 localhost nova_compute[280228]: egl-headless Dec 5 04:49:01 localhost nova_compute[280228]: dbus Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: subsystem Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: default Dec 5 04:49:01 localhost nova_compute[280228]: mandatory Dec 5 04:49:01 localhost nova_compute[280228]: requisite Dec 5 04:49:01 localhost nova_compute[280228]: optional Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: usb Dec 5 04:49:01 localhost nova_compute[280228]: pci Dec 5 04:49:01 localhost nova_compute[280228]: scsi Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: virtio Dec 5 04:49:01 localhost nova_compute[280228]: virtio-transitional Dec 5 04:49:01 localhost nova_compute[280228]: virtio-non-transitional Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: random Dec 5 04:49:01 localhost nova_compute[280228]: egd Dec 5 04:49:01 localhost nova_compute[280228]: builtin Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: path Dec 5 04:49:01 localhost nova_compute[280228]: handle Dec 5 04:49:01 localhost nova_compute[280228]: virtiofs Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: tpm-tis Dec 5 04:49:01 localhost nova_compute[280228]: tpm-crb Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: emulator Dec 5 04:49:01 localhost nova_compute[280228]: external Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: 2.0 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: usb Dec 
5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: pty Dec 5 04:49:01 localhost nova_compute[280228]: unix Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: qemu Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: builtin Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: default Dec 5 04:49:01 localhost nova_compute[280228]: passt Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: isa Dec 5 04:49:01 localhost nova_compute[280228]: hyperv Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: null Dec 5 04:49:01 localhost nova_compute[280228]: vc Dec 5 04:49:01 localhost nova_compute[280228]: pty Dec 5 04:49:01 localhost nova_compute[280228]: dev Dec 5 04:49:01 localhost nova_compute[280228]: file Dec 5 04:49:01 localhost nova_compute[280228]: pipe Dec 5 04:49:01 localhost nova_compute[280228]: stdio Dec 5 04:49:01 localhost nova_compute[280228]: udp Dec 5 04:49:01 localhost nova_compute[280228]: tcp Dec 5 04:49:01 localhost nova_compute[280228]: unix Dec 5 04:49:01 localhost nova_compute[280228]: qemu-vdagent Dec 5 04:49:01 localhost nova_compute[280228]: dbus Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: relaxed Dec 5 04:49:01 localhost nova_compute[280228]: vapic Dec 5 04:49:01 localhost nova_compute[280228]: spinlocks Dec 5 04:49:01 localhost nova_compute[280228]: vpindex Dec 5 04:49:01 localhost nova_compute[280228]: runtime Dec 5 04:49:01 localhost nova_compute[280228]: synic Dec 5 04:49:01 localhost nova_compute[280228]: stimer Dec 5 04:49:01 localhost nova_compute[280228]: reset Dec 5 04:49:01 localhost nova_compute[280228]: vendor_id Dec 5 04:49:01 localhost nova_compute[280228]: frequencies Dec 5 04:49:01 localhost nova_compute[280228]: reenlightenment Dec 5 04:49:01 localhost nova_compute[280228]: tlbflush Dec 5 04:49:01 localhost nova_compute[280228]: ipi 
Dec 5 04:49:01 localhost nova_compute[280228]: avic Dec 5 04:49:01 localhost nova_compute[280228]: emsr_bitmap Dec 5 04:49:01 localhost nova_compute[280228]: xmm_input Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: 4095 Dec 5 04:49:01 localhost nova_compute[280228]: on Dec 5 04:49:01 localhost nova_compute[280228]: off Dec 5 04:49:01 localhost nova_compute[280228]: off Dec 5 04:49:01 localhost nova_compute[280228]: Linux KVM Hv Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: tdx Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.295 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: /usr/libexec/qemu-kvm Dec 5 04:49:01 localhost nova_compute[280228]: kvm Dec 5 04:49:01 localhost nova_compute[280228]: pc-q35-rhel9.8.0 Dec 5 04:49:01 localhost nova_compute[280228]: x86_64 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: efi Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Dec 5 04:49:01 localhost nova_compute[280228]: /usr/share/edk2/ovmf/OVMF_CODE.fd Dec 5 04:49:01 localhost nova_compute[280228]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Dec 5 04:49:01 localhost nova_compute[280228]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: rom Dec 5 04:49:01 localhost nova_compute[280228]: pflash Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: yes Dec 5 04:49:01 localhost nova_compute[280228]: no Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: yes Dec 5 04:49:01 localhost nova_compute[280228]: no Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: on Dec 5 04:49:01 localhost nova_compute[280228]: off Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: on Dec 5 04:49:01 localhost nova_compute[280228]: off Dec 5 04:49:01 localhost 
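(Editor's note: the block above is the text content of the <domainCapabilities> document that nova's _get_domain_capabilities, at host.py:1037, fetches from libvirt; the log formatting strips the XML markup, which is why only bare model and enum names survive. As a point of reference, a minimal sketch of retrieving the same document directly, assuming the libvirt-python bindings and a local qemu:///system socket; the emulator path and machine type are taken from this log, and nothing else here is from nova's own code:)

    import libvirt  # libvirt-python bindings

    # A read-only connection is sufficient for capability queries.
    conn = libvirt.openReadOnly("qemu:///system")

    # Same query nova logs above: domain capabilities of the qemu-kvm
    # binary for x86_64/q35 under KVM. The returned XML is the document
    # whose text values (CPU models, device enums, firmware paths)
    # appear in this log.
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulator binary seen in the dump
        "x86_64",                 # architecture
        "q35",                    # machine type
        "kvm",                    # virtualization type
    )
    print(caps_xml)
    conn.close()

(The same document should also be obtainable on the host with: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm.)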
Dec 5 04:49:01 localhost nova_compute[280228]:   cpu host-model: EPYC-Rome, vendor AMD (per-model feature details lost in capture)
Dec 5 04:49:01 localhost nova_compute[280228]:   cpu models (usable/deprecated attributes lost in capture): 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: file Dec 5 04:49:01 localhost nova_compute[280228]: anonymous Dec 5 04:49:01 localhost nova_compute[280228]: memfd Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: disk Dec 5 04:49:01 localhost nova_compute[280228]: cdrom Dec 5 04:49:01 localhost nova_compute[280228]: floppy Dec 5 04:49:01 localhost nova_compute[280228]: lun Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: fdc Dec 5 04:49:01 localhost nova_compute[280228]: scsi Dec 5 04:49:01 localhost nova_compute[280228]: virtio Dec 5 04:49:01 localhost nova_compute[280228]: usb Dec 5 04:49:01 localhost nova_compute[280228]: sata Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: virtio Dec 5 04:49:01 localhost nova_compute[280228]: virtio-transitional Dec 5 04:49:01 localhost nova_compute[280228]: virtio-non-transitional Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: vnc Dec 5 04:49:01 localhost nova_compute[280228]: egl-headless Dec 5 04:49:01 localhost nova_compute[280228]: dbus Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: subsystem Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: default Dec 5 04:49:01 localhost nova_compute[280228]: mandatory Dec 5 04:49:01 localhost nova_compute[280228]: requisite Dec 5 04:49:01 localhost nova_compute[280228]: optional Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: usb Dec 5 04:49:01 localhost nova_compute[280228]: pci Dec 5 04:49:01 localhost nova_compute[280228]: scsi Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: virtio Dec 5 04:49:01 localhost nova_compute[280228]: virtio-transitional Dec 5 04:49:01 localhost nova_compute[280228]: virtio-non-transitional Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: random Dec 5 04:49:01 localhost nova_compute[280228]: egd Dec 5 04:49:01 localhost nova_compute[280228]: builtin Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: path Dec 5 
04:49:01 localhost nova_compute[280228]: handle Dec 5 04:49:01 localhost nova_compute[280228]: virtiofs Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: tpm-tis Dec 5 04:49:01 localhost nova_compute[280228]: tpm-crb Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: emulator Dec 5 04:49:01 localhost nova_compute[280228]: external Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: 2.0 Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: usb Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: pty Dec 5 04:49:01 localhost nova_compute[280228]: unix Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: qemu Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: builtin Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: default Dec 5 04:49:01 localhost nova_compute[280228]: passt Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: isa Dec 5 04:49:01 localhost nova_compute[280228]: hyperv Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: null Dec 5 04:49:01 localhost nova_compute[280228]: vc Dec 5 04:49:01 localhost nova_compute[280228]: pty Dec 5 04:49:01 localhost nova_compute[280228]: dev Dec 5 04:49:01 localhost nova_compute[280228]: file Dec 5 04:49:01 localhost nova_compute[280228]: pipe Dec 5 04:49:01 localhost nova_compute[280228]: stdio Dec 5 04:49:01 localhost nova_compute[280228]: udp Dec 5 04:49:01 localhost nova_compute[280228]: tcp Dec 5 04:49:01 localhost nova_compute[280228]: unix Dec 5 04:49:01 localhost nova_compute[280228]: qemu-vdagent Dec 5 04:49:01 localhost nova_compute[280228]: dbus Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 
04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: relaxed Dec 5 04:49:01 localhost nova_compute[280228]: vapic Dec 5 04:49:01 localhost nova_compute[280228]: spinlocks Dec 5 04:49:01 localhost nova_compute[280228]: vpindex Dec 5 04:49:01 localhost nova_compute[280228]: runtime Dec 5 04:49:01 localhost nova_compute[280228]: synic Dec 5 04:49:01 localhost nova_compute[280228]: stimer Dec 5 04:49:01 localhost nova_compute[280228]: reset Dec 5 04:49:01 localhost nova_compute[280228]: vendor_id Dec 5 04:49:01 localhost nova_compute[280228]: frequencies Dec 5 04:49:01 localhost nova_compute[280228]: reenlightenment Dec 5 04:49:01 localhost nova_compute[280228]: tlbflush Dec 5 04:49:01 localhost nova_compute[280228]: ipi Dec 5 04:49:01 localhost nova_compute[280228]: avic Dec 5 04:49:01 localhost nova_compute[280228]: emsr_bitmap Dec 5 04:49:01 localhost nova_compute[280228]: xmm_input Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: 4095 Dec 5 04:49:01 localhost nova_compute[280228]: on Dec 5 04:49:01 localhost nova_compute[280228]: off Dec 5 04:49:01 localhost nova_compute[280228]: off Dec 5 04:49:01 localhost nova_compute[280228]: Linux KVM Hv Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: tdx Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: Dec 5 04:49:01 localhost nova_compute[280228]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.346 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.346 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.346 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.346 280232 INFO nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Secure Boot support detected#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.348 280232 INFO nova.virt.libvirt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 5 04:49:01 
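The markup-stripped dump above is the libvirt domain-capabilities document that nova_compute fetches in _get_domain_capabilities and mines for CPU models, device buses, and feature support. As a minimal sketch (connection URI and arguments are assumptions, not taken from this log), the same document can be retrieved with libvirt-python:

    import libvirt

    # Connect to the system libvirt daemon; nova manages its own
    # connection, this direct URI is only illustrative.
    conn = libvirt.open("qemu:///system")
    # Emulator binary and machine type left as None so libvirt picks
    # defaults for the requested arch/virt type; returns the XML
    # summarized above.
    caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    print(caps_xml)
    conn.close()

Nova turns this document into placement traits such as COMPUTE_SECURITY_UEFI_SECURE_BOOT and the COMPUTE_STORAGE_BUS_* set that appears in the trait refresh near the end of this excerpt.
Dec 5 04:49:01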
localhost nova_compute[280228]: 2025-12-05 09:49:01.348 280232 INFO nova.virt.libvirt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.358 280232 DEBUG nova.virt.libvirt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.378 280232 INFO nova.virt.node [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Determined node identity 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from /var/lib/nova/compute_id#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.403 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Verified node 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 matches my host np0005546419.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.428 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.432 280232 DEBUG nova.virt.libvirt.vif [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T08:35:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=,hidden=False,host='np0005546419.localdomain',hostname='test',id=2,image_ref='e7469c27-9043-4bd0-b0a4-5b489dcf3ae2',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-05T08:35:39Z,launched_on='np0005546419.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=,node='np0005546419.localdomain',numa_topology=None,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='e6ca8a92050741d3a93772e6c1b0d704',ramdisk_id='',reservation_id='r-99d0dddi',resources=,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,trusted_certs=,updated_at=2025-12-05T08:35:39Z,user_data=None,user_id='52d0a54dc45b4c4caaba721ba3202150',uuid=96a47a1c-57c7-4bb1-aecc-33db976db8c7,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 
4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.432 280232 DEBUG nova.network.os_vif_util [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Converting VIF {"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.433 280232 DEBUG nova.network.os_vif_util [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.433 280232 DEBUG os_vif [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.496 280232 DEBUG ovsdbapp.backend.ovs_idl [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.496 
280232 DEBUG ovsdbapp.backend.ovs_idl [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.496 280232 DEBUG ovsdbapp.backend.ovs_idl [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.497 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.497 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.497 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.498 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.499 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.502 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.512 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.513 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.513 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 5 04:49:01 localhost nova_compute[280228]: 2025-12-05 09:49:01.514 280232 INFO oslo.privsep.daemon [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp1lng70i1/privsep.sock']#033[00m Dec 5 04:49:01 localhost python3.9[280394]: 
ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 5 04:49:01 localhost systemd[1]: Started libpod-conmon-cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50.scope. Dec 5 04:49:01 localhost systemd[1]: Started libcrun container. 
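The "Plugging vif VIFOpenVSwitch(...)" record above goes through the public os_vif API rather than calling the OVS plugin directly. A sketch reconstructing that call from the logged object repr (field values are copied from the log; the surrounding code is illustrative, not nova's own):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin among others
    net = network.Network(id="86f5c13f-3cf8-4808-86c3-060f6b38ab5b",
                          bridge="br-int")
    v = vif.VIFOpenVSwitch(
        id="c2f95d81-2317-46b9-8146-596eac8f9acb",
        address="fa:16:3e:04:e6:3a",
        network=net,
        plugin="ovs",
        bridge_name="br-int",
        vif_name="tapc2f95d81-23",
        has_traffic_filtering=True,
        active=True,
        preserve_on_delete=False,
    )
    inst = instance_info.InstanceInfo(
        uuid="96a47a1c-57c7-4bb1-aecc-33db976db8c7", name="test")
    # Dispatches to the ovs plugin, which issues the ovsdb transaction
    # that "Successfully plugged vif" reports further down.
    os_vif.plug(v, inst)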
Dec 5 04:49:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e23e5f0e8d03624c8af001fbbfc14f4c044253041c1b764fabd86b74ae8deb7/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Dec 5 04:49:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e23e5f0e8d03624c8af001fbbfc14f4c044253041c1b764fabd86b74ae8deb7/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Dec 5 04:49:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e23e5f0e8d03624c8af001fbbfc14f4c044253041c1b764fabd86b74ae8deb7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 5 04:49:01 localhost podman[280423]: 2025-12-05 09:49:01.893191163 +0000 UTC m=+0.161628340 container init cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 04:49:01 localhost podman[280423]: 2025-12-05 09:49:01.90132922 +0000 UTC m=+0.169766397 container start cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 5 04:49:01 localhost python3.9[280394]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start 
nova_compute_init Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Applying nova statedir ownership Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/96a47a1c-57c7-4bb1-aecc-33db976db8c7/ Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/96a47a1c-57c7-4bb1-aecc-33db976db8c7 already 42436:42436 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/96a47a1c-57c7-4bb1-aecc-33db976db8c7 to system_u:object_r:container_file_t:s0 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/96a47a1c-57c7-4bb1-aecc-33db976db8c7/console.log Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/_base already 42436:42436 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/_base to system_u:object_r:container_file_t:s0 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/c71d7fb936a828c57128ce72f168800f43e32781 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ephemeral_1_0706d66 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/ Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/locks already 42436:42436 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/locks to system_u:object_r:container_file_t:s0 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-c71d7fb936a828c57128ce72f168800f43e32781 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-ephemeral_1_0706d66 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Dec 5 04:49:01 localhost nova_compute_init[280444]: 
INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673 Dec 5 04:49:01 localhost nova_compute_init[280444]: INFO:nova_statedir:Nova statedir ownership complete Dec 5 04:49:01 localhost systemd[1]: libpod-cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50.scope: Deactivated successfully. 
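The nova_compute_init records above are the statedir ownership pass: walk /var/lib/nova, skip anything named in NOVA_STATEDIR_OWNERSHIP_SKIP (here /var/lib/nova/compute_id), chown mismatched entries to 42436:42436, and reset the SELinux context on directories. A rough sketch of that pattern, assuming the libselinux Python bindings (the real logic lives in /sbin/nova_statedir_ownership.py inside the container):

    import os
    import selinux  # libselinux bindings; an assumption of this sketch

    TARGET_UID = TARGET_GID = 42436  # target ownership from the log
    CONTEXT = "system_u:object_r:container_file_t:s0"
    SKIP = {"/var/lib/nova/compute_id"}  # NOVA_STATEDIR_OWNERSHIP_SKIP

    def apply_ownership(root="/var/lib/nova"):
        for dirpath, _dirnames, filenames in os.walk(root):
            paths = [dirpath] + [os.path.join(dirpath, f) for f in filenames]
            for path in paths:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # "Changing ownership of ... to 42436:42436"
                    os.lchown(path, TARGET_UID, TARGET_GID)
            # "Setting selinux context of ... to system_u:object_r:..."
            selinux.lsetfilecon(dirpath, CONTEXT)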
Dec 5 04:49:02 localhost podman[280456]: 2025-12-05 09:49:02.023058559 +0000 UTC m=+0.040962882 container died cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=nova_compute_init) Dec 5 04:49:02 localhost podman[280456]: 2025-12-05 09:49:02.0508186 +0000 UTC m=+0.068722943 container cleanup cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}) Dec 5 04:49:02 localhost systemd[1]: libpod-conmon-cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50.scope: Deactivated successfully. 
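The "Running privsep helper" record above forks the daemon whose startup is logged just below with capabilities CAP_DAC_OVERRIDE|CAP_NET_ADMIN. Contexts like vif_plug_ovs.privsep.vif_plug are declared with oslo.privsep along these lines (a sketch; the decorated function is illustrative, not the actual os-vif code):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Capability set matching what the spawned daemon reports below.
    vif_plug = priv_context.PrivContext(
        "vif_plug_ovs",
        cfg_section="vif_plug_ovs_privileged",
        pypath=__name__ + ".vif_plug",
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_DAC_OVERRIDE],
    )

    @vif_plug.entrypoint
    def set_device_mtu(device, mtu):
        """Illustrative privileged operation executed inside the daemon."""

On first use the context execs sudo/nova-rootwrap (the exact command logged above) and the unprivileged parent talks to the daemon over the privsep_sock_path unix socket.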
Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.113 280232 INFO oslo.privsep.daemon [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.020 280478 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.024 280478 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.026 280478 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.026 280478 INFO oslo.privsep.daemon [-] privsep daemon running as pid 280478#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.417 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.418 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2f95d81-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.418 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2f95d81-23, col_values=(('external_ids', {'iface-id': 'c2f95d81-2317-46b9-8146-596eac8f9acb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:e6:3a', 'vm-uuid': '96a47a1c-57c7-4bb1-aecc-33db976db8c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.419 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.420 280232 INFO os_vif [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23')#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.420 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.424 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Current state is 1, state in DB is 1. 
_init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.425 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.513 280232 DEBUG oslo_concurrency.lockutils [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.514 280232 DEBUG oslo_concurrency.lockutils [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.514 280232 DEBUG oslo_concurrency.lockutils [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.515 280232 DEBUG nova.compute.resource_tracker [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.515 280232 DEBUG oslo_concurrency.processutils [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:49:02 localhost systemd[1]: var-lib-containers-storage-overlay-2e23e5f0e8d03624c8af001fbbfc14f4c044253041c1b764fabd86b74ae8deb7-merged.mount: Deactivated successfully. Dec 5 04:49:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cea458b303cfc5ad837b7cdf3f69a1675bd08b136608e0d5d60023ffbf39bc50-userdata-shm.mount: Deactivated successfully. Dec 5 04:49:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:49:02 localhost nova_compute[280228]: 2025-12-05 09:49:02.884 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:02 localhost systemd[1]: tmp-crun.jRbqyl.mount: Deactivated successfully. 
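The AddBridgeCommand/AddPortCommand/DbSetCommand records above are ovsdbapp transactions against the local ovsdb-server at tcp:127.0.0.1:6640. A minimal sketch of the port transaction as logged (connection setup assumed from the endpoint in the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    external_ids = {
        "iface-id": "c2f95d81-2317-46b9-8146-596eac8f9acb",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:04:e6:3a",
        "vm-uuid": "96a47a1c-57c7-4bb1-aecc-33db976db8c7",
    }
    # One transaction, two commands, as in "txn n=1 command(idx=0/1)".
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapc2f95d81-23", may_exist=True))
        txn.add(api.db_set("Interface", "tapc2f95d81-23",
                           ("external_ids", external_ids)))

Because the port and its external_ids already exist, the commit reports "Transaction caused no change", exactly as logged.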
Dec 5 04:49:02 localhost podman[280520]: 2025-12-05 09:49:02.929090568 +0000 UTC m=+0.113389169 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:49:02 localhost systemd[1]: session-59.scope: Deactivated successfully. Dec 5 04:49:02 localhost systemd[1]: session-59.scope: Consumed 1min 28.723s CPU time. Dec 5 04:49:02 localhost systemd-logind[760]: Session 59 logged out. Waiting for processes to exit. Dec 5 04:49:02 localhost systemd-logind[760]: Removed session 59. 
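The ceph df audit launched above returns just below (rc 0 in 0.512s); the resource tracker shells out through oslo_concurrency.processutils. A sketch of the same call (the JSON keys read here are illustrative):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)
    # Cluster totals that feed the DISK_GB inventory reported further down.
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])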
Dec 5 04:49:02 localhost podman[280520]: 2025-12-05 09:49:02.948728184 +0000 UTC m=+0.133026794 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 5 04:49:02 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.027 280232 DEBUG oslo_concurrency.processutils [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.092 280232 DEBUG nova.virt.libvirt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.092 280232 DEBUG nova.virt.libvirt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.290 280232 WARNING nova.virt.libvirt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.291 280232 DEBUG nova.compute.resource_tracker [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12165MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.291 280232 DEBUG oslo_concurrency.lockutils [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.291 280232 DEBUG oslo_concurrency.lockutils [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.404 280232 DEBUG nova.compute.resource_tracker [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.404 280232 DEBUG nova.compute.resource_tracker [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.405 280232 DEBUG nova.compute.resource_tracker [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.420 280232 DEBUG nova.scheduler.client.report [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Refreshing inventories for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.470 280232 DEBUG nova.scheduler.client.report [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Updating ProviderTree inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.471 280232 DEBUG nova.compute.provider_tree [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.485 280232 DEBUG nova.scheduler.client.report [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Refreshing aggregate associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.507 280232 DEBUG nova.scheduler.client.report [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Refreshing trait associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, traits: 
COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 5 04:49:03 localhost nova_compute[280228]: 2025-12-05 09:49:03.550 280232 DEBUG oslo_concurrency.processutils [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:49:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:03.896 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:49:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:03.896 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:49:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:03.897 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:49:04 localhost nova_compute[280228]: 2025-12-05 09:49:04.013 280232 DEBUG oslo_concurrency.processutils [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:49:04 localhost nova_compute[280228]: 2025-12-05 09:49:04.018 280232 DEBUG nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Dec 5 04:49:04 localhost nova_compute[280228]: ] _kernel_supports_amd_sev 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Dec 5 04:49:04 localhost nova_compute[280228]: 2025-12-05 09:49:04.018 280232 INFO nova.virt.libvirt.host [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] kernel doesn't support AMD SEV#033[00m Dec 5 04:49:04 localhost nova_compute[280228]: 2025-12-05 09:49:04.020 280232 DEBUG nova.compute.provider_tree [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:49:04 localhost nova_compute[280228]: 2025-12-05 09:49:04.021 280232 DEBUG nova.virt.libvirt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 5 04:49:04 localhost nova_compute[280228]: 2025-12-05 09:49:04.040 280232 DEBUG nova.scheduler.client.report [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:49:04 localhost nova_compute[280228]: 2025-12-05 09:49:04.075 280232 DEBUG nova.compute.resource_tracker [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:49:04 localhost nova_compute[280228]: 2025-12-05 09:49:04.076 280232 DEBUG oslo_concurrency.lockutils [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:49:04 localhost nova_compute[280228]: 2025-12-05 09:49:04.076 280232 DEBUG nova.service [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Dec 5 04:49:04 localhost nova_compute[280228]: 2025-12-05 09:49:04.102 280232 DEBUG nova.service [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Dec 5 04:49:04 localhost nova_compute[280228]: 2025-12-05 09:49:04.103 280232 DEBUG nova.servicegroup.drivers.db [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] DB_Driver: join new ServiceGroup member np0005546419.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Dec 5 04:49:06 localhost nova_compute[280228]: 2025-12-05 09:49:06.537 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 
04:49:07 localhost nova_compute[280228]: 2025-12-05 09:49:07.883 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:49:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:49:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:49:09 localhost podman[280560]: 2025-12-05 09:49:09.231091655 +0000 UTC m=+0.119849604 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 04:49:09 localhost systemd[1]: tmp-crun.PQYLNW.mount: Deactivated successfully. 
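
The inventory payloads repeated above (VCPU total=8, allocation_ratio=16.0; MEMORY_MB total=15738, reserved=512; DISK_GB total=41, reserved=1) follow placement's capacity model: the schedulable capacity of a resource class is (total - reserved) * allocation_ratio. A minimal sketch of that arithmetic in Python, using the figures from this log (the helper name is illustrative):

    # Schedulable capacity per resource class, as placement computes it:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
    }

    def capacity(inv):
        return {rc: int((v["total"] - v["reserved"]) * v["allocation_ratio"])
                for rc, v in inv.items()}

    print(capacity(inventory))
    # {'VCPU': 128, 'MEMORY_MB': 15226, 'DISK_GB': 40}
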
Dec 5 04:49:09 localhost podman[280560]: 2025-12-05 09:49:09.250533224 +0000 UTC m=+0.139291153 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 04:49:09 localhost podman[280561]: 2025-12-05 09:49:09.25005155 +0000 UTC m=+0.135501517 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 04:49:09 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
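
Each health_status=healthy event above is emitted when systemd runs `podman healthcheck run <id>` for a container that defines a healthcheck; the stored result can also be read back from the container state. A small sketch, assuming `podman` is on PATH and that the `.State.Health.Status` inspect field is available (field name as in recent Podman releases):

    import subprocess

    def health_status(container: str) -> str:
        # Reads the last recorded healthcheck result for the container.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}",
             container],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print(health_status("podman_exporter"))  # e.g. "healthy"
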
Dec 5 04:49:09 localhost podman[280561]: 2025-12-05 09:49:09.335558832 +0000 UTC m=+0.221008799 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 5 04:49:09 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
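
The resource tracker above shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` to size its RBD-backed disk pool. A sketch of running the same command and pulling the cluster totals out of the JSON, assuming the `stats` block keys of current Ceph releases (`total_bytes`, `total_avail_bytes`):

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    stats = json.loads(subprocess.check_output(cmd))["stats"]

    GiB = 1024 ** 3
    print("total %.1f GiB, avail %.1f GiB" %
          (stats["total_bytes"] / GiB, stats["total_avail_bytes"] / GiB))
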
Dec 5 04:49:09 localhost podman[280562]: 2025-12-05 09:49:09.340118129 +0000 UTC m=+0.222578726 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute) Dec 5 04:49:09 localhost podman[280562]: 2025-12-05 09:49:09.424768905 +0000 UTC m=+0.307229572 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:49:09 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:49:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41905 DF PROTO=TCP SPT=53290 DPT=9102 SEQ=3969074191 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5B42470000000001030307) Dec 5 04:49:11 localhost nova_compute[280228]: 2025-12-05 09:49:11.578 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41906 DF PROTO=TCP SPT=53290 DPT=9102 SEQ=3969074191 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5B46460000000001030307) Dec 5 04:49:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4333 DF PROTO=TCP SPT=58496 DPT=9102 SEQ=2369035308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5B48450000000001030307) Dec 5 04:49:12 localhost nova_compute[280228]: 2025-12-05 09:49:12.925 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41907 DF PROTO=TCP SPT=53290 DPT=9102 SEQ=3969074191 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5B4E450000000001030307) Dec 5 04:49:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4394 DF PROTO=TCP SPT=35714 DPT=9102 SEQ=4110578144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5B52460000000001030307) Dec 5 04:49:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:49:16 localhost systemd[1]: tmp-crun.7R7AaI.mount: Deactivated successfully. 
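
The kernel `DROPPING:` records above are netfilter LOG output: a fixed prefix (set by whatever LOG rule the host configures) followed by KEY=value pairs plus bare flags such as DF and SYN. They parse generically; a small sketch (regexes illustrative, sample line abridged from this log):

    import re

    LINE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TTL=62 ID=41905 "
            "DF PROTO=TCP SPT=53290 DPT=9102 WINDOW=32640 SYN URGP=0")

    def parse_nflog(line: str) -> dict:
        fields = dict(re.findall(r"(\w+)=(\S*)", line))  # KEY=value pairs
        fields["flags"] = re.findall(r"(?<= )(DF|SYN|ACK|FIN|RST)(?= |$)", line)
        return fields

    f = parse_nflog(LINE)
    print(f["SRC"], "->", f["DST"], "dport", f["DPT"], f["flags"])
    # 192.168.122.10 -> 192.168.122.106 dport 9102 ['DF', 'SYN']
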
Dec 5 04:49:16 localhost podman[280616]: 2025-12-05 09:49:16.203196839 +0000 UTC m=+0.090670069 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:49:16 localhost podman[280616]: 2025-12-05 09:49:16.268515398 +0000 UTC m=+0.155988628 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller) Dec 5 04:49:16 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
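
The recurring `Acquiring lock ... / Lock ... acquired ... waited ... / Lock ... "released" ... held ...` triplets in this log are oslo.concurrency's standard tracing around a named semaphore. The producing pattern, sketched here with `lockutils.synchronized` (the lock name matches the resource-tracker records above; the decorated function is illustrative, not Nova's actual code):

    from oslo_concurrency import lockutils

    # Everything decorated with the same name serializes on one semaphore,
    # and each call emits the Acquiring/acquired/released DEBUG lines.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # mutate the shared resource view here

    update_available_resource()
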
Dec 5 04:49:16 localhost nova_compute[280228]: 2025-12-05 09:49:16.625 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:17 localhost nova_compute[280228]: 2025-12-05 09:49:17.927 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41908 DF PROTO=TCP SPT=53290 DPT=9102 SEQ=3969074191 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5B5E050000000001030307) Dec 5 04:49:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:49:19 localhost podman[280727]: 2025-12-05 09:49:19.225220348 +0000 UTC m=+0.113375977 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 04:49:19 localhost podman[280727]: 2025-12-05 09:49:19.235387206 +0000 UTC m=+0.123542855 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 
'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 04:49:19 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:49:19 localhost podman[239519]: time="2025-12-05T09:49:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:49:19 localhost podman[239519]: @ - - [05/Dec/2025:09:49:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1" Dec 5 04:49:19 localhost podman[239519]: @ - - [05/Dec/2025:09:49:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17222 "" "Go-http-client/1.1" Dec 5 04:49:21 localhost nova_compute[280228]: 2025-12-05 09:49:21.649 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:22 localhost nova_compute[280228]: 2025-12-05 09:49:22.953 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:26 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:26.413 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 04:49:26 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:26.414 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 04:49:26 localhost nova_compute[280228]: 2025-12-05 09:49:26.416 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41909 DF PROTO=TCP SPT=53290 DPT=9102 SEQ=3969074191 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5B7E450000000001030307) Dec 5 04:49:26 localhost nova_compute[280228]: 2025-12-05 09:49:26.684 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:27 localhost openstack_network_exporter[241668]: ERROR 09:49:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:49:27 localhost openstack_network_exporter[241668]: ERROR 09:49:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for 
ovn-northd Dec 5 04:49:27 localhost openstack_network_exporter[241668]: ERROR 09:49:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:49:27 localhost openstack_network_exporter[241668]: ERROR 09:49:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:49:27 localhost openstack_network_exporter[241668]: Dec 5 04:49:27 localhost openstack_network_exporter[241668]: ERROR 09:49:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:49:27 localhost openstack_network_exporter[241668]: Dec 5 04:49:27 localhost nova_compute[280228]: 2025-12-05 09:49:27.958 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:49:30 localhost podman[280751]: 2025-12-05 09:49:30.203610763 +0000 UTC m=+0.090401742 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6) Dec 5 04:49:30 localhost podman[280751]: 2025-12-05 09:49:30.243926694 +0000 UTC m=+0.130717653 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:49:30 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
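
The exporter errors above ("no control socket files found") mean its ovs-appctl-style calls found no `<daemon>.<pid>.ctl` unix socket to target in the run directories, which is expected on a node that runs ovn-controller but not ovn-northd. A quick check of which control sockets actually exist (paths are the conventional OVS/OVN run dirs; adjust per deployment):

    import glob

    # ovs-appctl / ovn-appctl locate daemons through pidfile-named unix
    # sockets; an empty list for a pattern explains the exporter's errors.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern))
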
Dec 5 04:49:31 localhost nova_compute[280228]: 2025-12-05 09:49:31.733 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:32 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:32.417 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 04:49:32 localhost nova_compute[280228]: 2025-12-05 09:49:32.986 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:49:33 localhost podman[280772]: 2025-12-05 09:49:33.186716242 +0000 UTC m=+0.074326084 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 5 04:49:33 localhost podman[280772]: 2025-12-05 09:49:33.225945891 +0000 UTC m=+0.113555723 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 04:49:33 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:49:36 localhost nova_compute[280228]: 2025-12-05 09:49:36.737 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:37 localhost nova_compute[280228]: 2025-12-05 09:49:37.989 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:49:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:49:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:49:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:49:40 localhost systemd[1]: tmp-crun.t87TTf.mount: Deactivated successfully. 
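
The `GET /v4.9.3/libpod/containers/json` lines earlier in this log are the Podman system service answering REST calls on /run/podman/podman.sock, the same socket the podman_exporter container mounts via CONTAINER_HOST. A minimal client sketch over that socket using only the standard library (socket path and API version taken from the log):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])
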
Dec 5 04:49:40 localhost podman[280791]: 2025-12-05 09:49:40.203141669 +0000 UTC m=+0.086472012 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:49:40 localhost podman[280791]: 2025-12-05 09:49:40.232968593 +0000 UTC m=+0.116298936 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent) Dec 5 04:49:40 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:49:40 localhost podman[280790]: 2025-12-05 09:49:40.248604197 +0000 UTC m=+0.134554899 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 04:49:40 localhost podman[280792]: 2025-12-05 09:49:40.307901675 +0000 UTC m=+0.186690120 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 5 04:49:40 localhost podman[280792]: 2025-12-05 09:49:40.316629419 +0000 UTC m=+0.195417924 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 5 04:49:40 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:49:40 localhost podman[280790]: 2025-12-05 09:49:40.335353956 +0000 UTC m=+0.221304698 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 04:49:40 localhost nova_compute[280228]: 2025-12-05 09:49:40.349 280232 DEBUG nova.compute.manager [None req-2ed83b5c-aecb-49f4-a23c-a12740f85393 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 04:49:40 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
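
The `Checking state _get_power_state` calls around the stop request below come down to a libvirt domain-state lookup; state 1 is VIR_DOMAIN_RUNNING, matching the `current VM power_state: 1` in the stop record further on. A sketch with the libvirt Python bindings (instance UUID taken from this log; the system URI is the usual one for this layout):

    import libvirt  # python3-libvirt bindings

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("96a47a1c-57c7-4bb1-aecc-33db976db8c7")
    state, *_ = dom.info()  # info() -> [state, maxMem, mem, vcpus, cpuTime]
    print("power_state:", state, "(1 == libvirt.VIR_DOMAIN_RUNNING)")
    conn.close()
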
Dec 5 04:49:40 localhost nova_compute[280228]: 2025-12-05 09:49:40.355 280232 INFO nova.compute.manager [None req-2ed83b5c-aecb-49f4-a23c-a12740f85393 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Retrieving diagnostics
Dec 5 04:49:41 localhost systemd[1]: tmp-crun.no1PBI.mount: Deactivated successfully.
Dec 5 04:49:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59514 DF PROTO=TCP SPT=48044 DPT=9102 SEQ=510076032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5BB7760000000001030307)
Dec 5 04:49:41 localhost nova_compute[280228]: 2025-12-05 09:49:41.771 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59515 DF PROTO=TCP SPT=48044 DPT=9102 SEQ=510076032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5BBB860000000001030307)
Dec 5 04:49:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41910 DF PROTO=TCP SPT=53290 DPT=9102 SEQ=3969074191 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5BBE450000000001030307)
Dec 5 04:49:43 localhost nova_compute[280228]: 2025-12-05 09:49:43.037 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59516 DF PROTO=TCP SPT=48044 DPT=9102 SEQ=510076032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5BC3850000000001030307)
Dec 5 04:49:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4334 DF PROTO=TCP SPT=58496 DPT=9102 SEQ=2369035308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5BC6450000000001030307)
Dec 5 04:49:45 localhost nova_compute[280228]: 2025-12-05 09:49:45.105 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:49:45 localhost nova_compute[280228]: 2025-12-05 09:49:45.124 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Triggering sync for uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 5 04:49:45 localhost nova_compute[280228]: 2025-12-05 09:49:45.125 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 04:49:45 localhost nova_compute[280228]: 2025-12-05 09:49:45.125 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 04:49:45 localhost nova_compute[280228]: 2025-12-05 09:49:45.125 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:49:45 localhost nova_compute[280228]: 2025-12-05 09:49:45.146 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 04:49:46 localhost nova_compute[280228]: 2025-12-05 09:49:46.819 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 04:49:47 localhost podman[280852]: 2025-12-05 09:49:47.201697375 +0000 UTC m=+0.087002398 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 5 04:49:47 localhost podman[280852]: 2025-12-05 09:49:47.273715128 +0000 UTC m=+0.159020111 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 04:49:47 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 04:49:47 localhost nova_compute[280228]: 2025-12-05 09:49:47.559 280232 DEBUG oslo_concurrency.lockutils [None req-e825f2a5-771d-4221-8d0c-89e8b364e187 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Acquiring lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 04:49:47 localhost nova_compute[280228]: 2025-12-05 09:49:47.559 280232 DEBUG oslo_concurrency.lockutils [None req-e825f2a5-771d-4221-8d0c-89e8b364e187 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" acquired by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 04:49:47 localhost nova_compute[280228]: 2025-12-05 09:49:47.560 280232 DEBUG nova.compute.manager [None req-e825f2a5-771d-4221-8d0c-89e8b364e187 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 5 04:49:47 localhost nova_compute[280228]: 2025-12-05 09:49:47.566 280232 DEBUG nova.compute.manager [None req-e825f2a5-771d-4221-8d0c-89e8b364e187 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 5 04:49:47 localhost nova_compute[280228]: 2025-12-05 09:49:47.570 280232 DEBUG nova.objects.instance [None req-e825f2a5-771d-4221-8d0c-89e8b364e187 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Lazy-loading 'flavor' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 5 04:49:47 localhost nova_compute[280228]: 2025-12-05 09:49:47.676 280232 DEBUG nova.virt.libvirt.driver [None req-e825f2a5-771d-4221-8d0c-89e8b364e187 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 5 04:49:48 localhost nova_compute[280228]: 2025-12-05 09:49:48.039 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59517 DF PROTO=TCP SPT=48044 DPT=9102 SEQ=510076032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5BD3450000000001030307)
Dec 5 04:49:49 localhost podman[239519]: time="2025-12-05T09:49:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 04:49:49 localhost podman[239519]: @ - - [05/Dec/2025:09:49:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1"
Dec 5 04:49:49 localhost podman[239519]: @ - - [05/Dec/2025:09:49:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17219 "" "Go-http-client/1.1"
Dec 5 04:49:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 04:49:50 localhost kernel: device tapc2f95d81-23 left promiscuous mode
Dec 5 04:49:50 localhost NetworkManager[5960]: [1764928190.1615] device (tapc2f95d81-23): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Dec 5 04:49:50 localhost podman[280878]: 2025-12-05 09:49:50.168078647 +0000 UTC m=+0.059670479 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 04:49:50 localhost ovn_controller[153000]: 2025-12-05T09:49:50Z|00051|binding|INFO|Releasing lport c2f95d81-2317-46b9-8146-596eac8f9acb from this chassis (sb_readonly=0)
Dec 5 04:49:50 localhost ovn_controller[153000]: 2025-12-05T09:49:50Z|00052|binding|INFO|Setting lport c2f95d81-2317-46b9-8146-596eac8f9acb down in Southbound
Dec 5 04:49:50 localhost ovn_controller[153000]: 2025-12-05T09:49:50Z|00053|binding|INFO|Removing iface tapc2f95d81-23 ovn-installed in OVS
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.186 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:50 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.190 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:e6:3a 192.168.0.214'], port_security=['fa:16:3e:04:e6:3a 192.168.0.214'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.214/24', 'neutron:device_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'np0005546419.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1af67ae0-d372-40b9-b93c-60c041b7465b 80b85888-ac92-454c-bb81-84292c7ac789', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=961a9491-8d79-4baf-950b-57a666c30c22, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=c2f95d81-2317-46b9-8146-596eac8f9acb) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.191 158820 INFO neutron.agent.ovn.metadata.agent [-] Port c2f95d81-2317-46b9-8146-596eac8f9acb in datapath 86f5c13f-3cf8-4808-86c3-060f6b38ab5b unbound from our chassis
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.192 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 86f5c13f-3cf8-4808-86c3-060f6b38ab5b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 04:49:50 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 3min 53.905s CPU time.
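The Acquiring/acquired/released triplets above are emitted by oslo.concurrency's lockutils whenever Nova serializes work on a single instance UUID. A minimal sketch of that pattern, assuming only the oslo.concurrency package (the lock name and function below are illustrative, not Nova's code):

    from oslo_concurrency import lockutils

    # Serialize state-changing work on one instance, the way
    # query_driver_power_state_and_sync is wrapped in the records above.
    @lockutils.synchronized('96a47a1c-57c7-4bb1-aecc-33db976db8c7')
    def sync_power_state():
        # lockutils logs "Acquiring lock ..." / "Lock ... acquired ::
        # waited ..." before this body runs, and "Lock ... released ::
        # held ..." when it returns.
        pass

    sync_power_state()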
Dec 5 04:49:50 localhost ovn_controller[153000]: 2025-12-05T09:49:50Z|00054|ovn_bfd|INFO|Disabled BFD on interface ovn-40c64e-0
Dec 5 04:49:50 localhost ovn_controller[153000]: 2025-12-05T09:49:50Z|00055|ovn_bfd|INFO|Disabled BFD on interface ovn-473cc8-0
Dec 5 04:49:50 localhost ovn_controller[153000]: 2025-12-05T09:49:50Z|00056|ovn_bfd|INFO|Disabled BFD on interface ovn-f5bb44-0
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.194 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:50 localhost ovn_controller[153000]: 2025-12-05T09:49:50Z|00057|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 04:49:50 localhost systemd-machined[83348]: Machine qemu-1-instance-00000002 terminated.
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.198 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[200bec85-151a-49cc-8d19-6c538c7fe70e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.198 158820 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b namespace which is not needed anymore
Dec 5 04:49:50 localhost podman[280878]: 2025-12-05 09:49:50.203301305 +0000 UTC m=+0.094893117 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 04:49:50 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
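The 'Matched UPDATE: PortBindingUpdatedEvent(...)' record above is ovsdbapp's event framework accepting a Port_Binding row change on behalf of the metadata agent. A rough sketch of how such an event is declared, assuming the ovsdbapp package; the class body is illustrative rather than the agent's actual implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Match 'update' events on the Port_Binding table, the same
            # events/table tuple printed in the "Matched UPDATE" record.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked once matches() accepts the row; the agent reacts
            # here, e.g. tearing down metadata for the unbound port.
            print('Port_Binding changed:', row.logical_port)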
Dec 5 04:49:50 localhost ovn_controller[153000]: 2025-12-05T09:49:50Z|00058|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.238 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.240 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:50 localhost systemd[1]: libpod-2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7.scope: Deactivated successfully.
Dec 5 04:49:50 localhost podman[280925]: 2025-12-05 09:49:50.35128798 +0000 UTC m=+0.054843044 container died 2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:14:25Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1)
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.373 280232 DEBUG nova.compute.manager [req-38a92059-f10f-4777-af3c-160a3fc8b996 req-ab8a107d-5aea-4e7e-a0fe-9eeb85918a2b c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Received event network-vif-unplugged-c2f95d81-2317-46b9-8146-596eac8f9acb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.374 280232 DEBUG oslo_concurrency.lockutils [req-38a92059-f10f-4777-af3c-160a3fc8b996 req-ab8a107d-5aea-4e7e-a0fe-9eeb85918a2b c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.374 280232 DEBUG oslo_concurrency.lockutils [req-38a92059-f10f-4777-af3c-160a3fc8b996 req-ab8a107d-5aea-4e7e-a0fe-9eeb85918a2b c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.375 280232 DEBUG oslo_concurrency.lockutils [req-38a92059-f10f-4777-af3c-160a3fc8b996 req-ab8a107d-5aea-4e7e-a0fe-9eeb85918a2b c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.375 280232 DEBUG nova.compute.manager [req-38a92059-f10f-4777-af3c-160a3fc8b996 req-ab8a107d-5aea-4e7e-a0fe-9eeb85918a2b c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] No waiting events found dispatching network-vif-unplugged-c2f95d81-2317-46b9-8146-596eac8f9acb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.375 280232 WARNING nova.compute.manager [req-38a92059-f10f-4777-af3c-160a3fc8b996 req-ab8a107d-5aea-4e7e-a0fe-9eeb85918a2b c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Received unexpected event network-vif-unplugged-c2f95d81-2317-46b9-8146-596eac8f9acb for instance with vm_state active and task_state powering-off.
Dec 5 04:49:50 localhost NetworkManager[5960]: [1764928190.3894] manager: (tapc2f95d81-23): new Tun device (/org/freedesktop/NetworkManager/Devices/15)
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.391 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.396 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:50 localhost podman[280925]: 2025-12-05 09:49:50.499748579 +0000 UTC m=+0.203303643 container cleanup 2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 5 04:49:50 localhost podman[280939]: 2025-12-05 09:49:50.516161977 +0000 UTC m=+0.161923419 container cleanup 2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 5 04:49:50 localhost systemd[1]: libpod-conmon-2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7.scope: Deactivated successfully.
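Lifecycle records like the container cleanup pair above are easiest to isolate from the journal by syslog identifier rather than by grepping the flat file. A small sketch using the python-systemd journal bindings (the match value is an assumption; adjust it to the records of interest):

    from systemd import journal

    # Follow only podman's records, mirroring the
    # "podman[...]: ... container cleanup ..." lines above.
    reader = journal.Reader()
    reader.add_match(SYSLOG_IDENTIFIER='podman')
    for entry in reader:
        print(entry['__REALTIME_TIMESTAMP'], entry.get('MESSAGE', ''))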
Dec 5 04:49:50 localhost podman[280966]: 2025-12-05 09:49:50.574585437 +0000 UTC m=+0.057535154 container remove 2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b, tcib_managed=true, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.579 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[36284f5e-67a9-4ba6-8b43-47102fdb61d6]: (4, ('Fri Dec 5 09:49:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b (2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7)\n2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7\nFri Dec 5 09:49:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b (2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7)\n2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.581 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[944b6b09-882c-4f90-a8d5-df6a02a90579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.582 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86f5c13f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.622 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:50 localhost kernel: device tap86f5c13f-30 left promiscuous mode
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.630 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.632 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.635 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[1c79932a-f590-4f40-8683-481f735ed718]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.654 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[3e9760fd-a832-4d22-bed9-d3c975b77b7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.656 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5fc210-7cca-45c7-93e8-6eb273b55810]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.667 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[067d2e64-1280-4f7b-8b65-eb1fabdfb7f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685355, 'reachable_time': 20191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280988, 'error': None, 'target': 'ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.675 158957 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 5 04:49:50 localhost ovn_metadata_agent[158815]: 2025-12-05 09:49:50.675 158957 DEBUG oslo.privsep.daemon [-] privsep: reply[bdbe1828-619c-45db-bec8-969b559de061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.697 280232 INFO nova.virt.libvirt.driver [None req-e825f2a5-771d-4221-8d0c-89e8b364e187 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Instance shutdown successfully after 3 seconds.
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.702 280232 INFO nova.virt.libvirt.driver [-] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Instance destroyed successfully.
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.703 280232 DEBUG nova.objects.instance [None req-e825f2a5-771d-4221-8d0c-89e8b364e187 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Lazy-loading 'numa_topology' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.716 280232 DEBUG nova.compute.manager [None req-e825f2a5-771d-4221-8d0c-89e8b364e187 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 5 04:49:50 localhost nova_compute[280228]: 2025-12-05 09:49:50.789 280232 DEBUG oslo_concurrency.lockutils [None req-e825f2a5-771d-4221-8d0c-89e8b364e187 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" "released" by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" :: held 3.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 04:49:51 localhost systemd[1]: var-lib-containers-storage-overlay-c1b4fea30ca10e6234bb45ca683bec863e17e57041e353e748a5745e23567836-merged.mount: Deactivated successfully.
Dec 5 04:49:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2292a1d1eb29b1298cc792b219585955010c7b81ff4c5c21fe71f893fe70cfb7-userdata-shm.mount: Deactivated successfully.
Dec 5 04:49:51 localhost systemd[1]: run-netns-ovnmeta\x2d86f5c13f\x2d3cf8\x2d4808\x2d86c3\x2d060f6b38ab5b.mount: Deactivated successfully.
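The 'Namespace ovnmeta-... deleted' record above is neutron's privsep daemon removing the metadata namespace once no VIF ports remain on the network. Underneath it is an ordinary network-namespace delete; a minimal stand-alone sketch with pyroute2 (the namespace name is reused from the log purely for illustration, and the call needs the same privileges as `ip netns delete`):

    from pyroute2 import netns

    NS = 'ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b'

    # Roughly what neutron.privileged.agent.linux.ip_lib.remove_netns
    # does: drop the namespace if it still exists.
    if NS in netns.listnetns():
        netns.remove(NS)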
Dec 5 04:49:51 localhost nova_compute[280228]: 2025-12-05 09:49:51.844 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.350 280232 DEBUG nova.compute.manager [None req-12e88e0b-1e33-49b8-9bbf-af5ec039f084 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.955 280232 DEBUG nova.compute.manager [req-9d333819-2dfa-41af-8cae-783f7c368343 req-626f2095-3ca2-4139-8e15-8632b325e953 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Received event network-vif-plugged-c2f95d81-2317-46b9-8146-596eac8f9acb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.956 280232 DEBUG oslo_concurrency.lockutils [req-9d333819-2dfa-41af-8cae-783f7c368343 req-626f2095-3ca2-4139-8e15-8632b325e953 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.956 280232 DEBUG oslo_concurrency.lockutils [req-9d333819-2dfa-41af-8cae-783f7c368343 req-626f2095-3ca2-4139-8e15-8632b325e953 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.956 280232 DEBUG oslo_concurrency.lockutils [req-9d333819-2dfa-41af-8cae-783f7c368343 req-626f2095-3ca2-4139-8e15-8632b325e953 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.957 280232 DEBUG nova.compute.manager [req-9d333819-2dfa-41af-8cae-783f7c368343 req-626f2095-3ca2-4139-8e15-8632b325e953 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] No waiting events found dispatching network-vif-plugged-c2f95d81-2317-46b9-8146-596eac8f9acb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.957 280232 WARNING nova.compute.manager [req-9d333819-2dfa-41af-8cae-783f7c368343 req-626f2095-3ca2-4139-8e15-8632b325e953 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Received unexpected event network-vif-plugged-c2f95d81-2317-46b9-8146-596eac8f9acb for instance with vm_state stopped and task_state None.
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server [None req-12e88e0b-1e33-49b8-9bbf-af5ec039f084 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Exception during message handling: nova.exception.InstanceInvalidState: Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 in power state shutdown. Cannot get_diagnostics while the instance is in this state.
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server     raise self.value
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server     raise self.value
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6739, in get_instance_diagnostics
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server     raise exception.InstanceInvalidState(
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server nova.exception.InstanceInvalidState: Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 in power state shutdown. Cannot get_diagnostics while the instance is in this state.
Dec 5 04:49:52 localhost nova_compute[280228]: 2025-12-05 09:49:52.966 280232 ERROR oslo_messaging.rpc.server
Dec 5 04:49:53 localhost nova_compute[280228]: 2025-12-05 09:49:53.070 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59518 DF PROTO=TCP SPT=48044 DPT=9102 SEQ=510076032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5BF4460000000001030307)
Dec 5 04:49:56 localhost nova_compute[280228]: 2025-12-05 09:49:56.889 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:49:57 localhost openstack_network_exporter[241668]: ERROR 09:49:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:49:57 localhost openstack_network_exporter[241668]: ERROR 09:49:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:49:57 localhost openstack_network_exporter[241668]: ERROR 09:49:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 04:49:57 localhost openstack_network_exporter[241668]: ERROR 09:49:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 04:49:57 localhost openstack_network_exporter[241668]:
Dec 5 04:49:57 localhost openstack_network_exporter[241668]: ERROR 09:49:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 04:49:57 localhost openstack_network_exporter[241668]:
Dec 5 04:49:58 localhost nova_compute[280228]: 2025-12-05 09:49:58.072 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:50:00 localhost nova_compute[280228]: 2025-12-05 09:50:00.548 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:50:00 localhost nova_compute[280228]: 2025-12-05 09:50:00.549 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:50:00 localhost nova_compute[280228]: 2025-12-05 09:50:00.549 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 5 04:50:00 localhost nova_compute[280228]: 2025-12-05 09:50:00.549 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 5 04:50:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 04:50:01 localhost systemd[1]: tmp-crun.iPApr9.mount: Deactivated successfully.
Dec 5 04:50:01 localhost podman[280990]: 2025-12-05 09:50:01.209482341 +0000 UTC m=+0.093858786 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 5 04:50:01 localhost podman[280990]: 2025-12-05 09:50:01.253732663 +0000 UTC m=+0.138109098 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, name=ubi9-minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 5 04:50:01 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.379 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.379 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.380 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.380 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.827 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.845 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.846 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.846 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.847 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.847 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.848 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.848 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.849 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.849 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.850 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.869 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.869 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.870 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.870 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.871 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 5 04:50:01 localhost nova_compute[280228]: 2025-12-05 09:50:01.921 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 04:50:02 localhost nova_compute[280228]: 2025-12-05 09:50:02.337 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 04:50:02 localhost nova_compute[280228]: 2025-12-05 09:50:02.398 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 5 04:50:02 localhost nova_compute[280228]: 2025-12-05 09:50:02.399 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:50:02 localhost nova_compute[280228]: 2025-12-05 09:50:02.607 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:50:02 localhost nova_compute[280228]: 2025-12-05 09:50:02.608 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12580MB free_disk=41.837059020996094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:50:02 localhost nova_compute[280228]: 2025-12-05 09:50:02.609 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:50:02 localhost nova_compute[280228]: 2025-12-05 09:50:02.610 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:50:02 localhost nova_compute[280228]: 2025-12-05 09:50:02.671 280232 DEBUG nova.compute.resource_tracker 
[None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:50:02 localhost nova_compute[280228]: 2025-12-05 09:50:02.671 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:50:02 localhost nova_compute[280228]: 2025-12-05 09:50:02.672 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:50:02 localhost nova_compute[280228]: 2025-12-05 09:50:02.706 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:50:03 localhost nova_compute[280228]: 2025-12-05 09:50:03.115 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:03 localhost nova_compute[280228]: 2025-12-05 09:50:03.152 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:50:03 localhost nova_compute[280228]: 2025-12-05 09:50:03.160 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:50:03 localhost nova_compute[280228]: 2025-12-05 09:50:03.206 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:50:03 localhost nova_compute[280228]: 2025-12-05 09:50:03.230 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:50:03 localhost nova_compute[280228]: 2025-12-05 09:50:03.230 280232 
DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:50:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:03.896 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:50:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:03.897 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:50:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:03.897 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:50:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:50:04 localhost podman[281054]: 2025-12-05 09:50:04.192046815 +0000 UTC m=+0.083746439 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 5 04:50:04 localhost podman[281054]: 2025-12-05 09:50:04.20903975 +0000 UTC m=+0.100739344 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, 
name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible) Dec 5 04:50:04 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:50:05 localhost nova_compute[280228]: 2025-12-05 09:50:05.410 280232 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 5 04:50:05 localhost nova_compute[280228]: 2025-12-05 09:50:05.410 280232 INFO nova.compute.manager [-] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] VM Stopped (Lifecycle Event)#033[00m Dec 5 04:50:05 localhost nova_compute[280228]: 2025-12-05 09:50:05.427 280232 DEBUG nova.compute.manager [None req-8d2bf035-2b57-444a-960a-5152b8bcb182 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 04:50:05 localhost nova_compute[280228]: 2025-12-05 09:50:05.434 280232 DEBUG nova.compute.manager [None req-8d2bf035-2b57-444a-960a-5152b8bcb182 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 5 04:50:06 localhost nova_compute[280228]: 2025-12-05 09:50:06.964 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:08 localhost nova_compute[280228]: 2025-12-05 09:50:08.118 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.506 280232 DEBUG nova.compute.manager [None req-b105b54a-684c-4298-afbf-5558d1e7e2a5 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 
96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server [None req-b105b54a-684c-4298-afbf-5558d1e7e2a5 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Exception during message handling: nova.exception.InstanceInvalidState: Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 in power state shutdown. Cannot get_diagnostics while the instance is in this state. Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server Traceback (most recent call last): Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server _emit_versioned_exception_notification( Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server self.force_reraise() Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server raise self.value Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw) Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server compute_utils.add_instance_fault_from_exc(context, Dec 5 04:50:10 localhost 
nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server self.force_reraise() Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server raise self.value Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6739, in get_instance_diagnostics Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server raise exception.InstanceInvalidState( Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server nova.exception.InstanceInvalidState: Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 in power state shutdown. Cannot get_diagnostics while the instance is in this state. Dec 5 04:50:10 localhost nova_compute[280228]: 2025-12-05 09:50:10.526 280232 ERROR oslo_messaging.rpc.server #033[00m Dec 5 04:50:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:50:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:50:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
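[note] The get_diagnostics traceback above is expected while the guest is powered off: nova-compute validates the instance power state before touching the hypervisor and raises InstanceInvalidState otherwise (nova/compute/manager.py:6739 in this build, per the traceback). A minimal sketch of that guard with illustrative names; the real check operates on a full Instance object:

    # Power-state guard behind the InstanceInvalidState error above.
    # Values follow the libvirt-style power states nova logs
    # ("current DB power_state: 4" == shutdown).
    RUNNING, SHUTDOWN = 1, 4

    class InstanceInvalidState(Exception):
        def __init__(self, uuid, state):
            super().__init__(
                f"Instance {uuid} in power state {state}. "
                "Cannot get_diagnostics while the instance is in this state.")

    def get_instance_diagnostics(instance):
        # A stopped domain has no CPU/disk/NIC counters to read, so the
        # RPC call is rejected instead of querying libvirt.
        if instance["power_state"] != RUNNING:
            raise InstanceInvalidState(instance["uuid"], "shutdown")
        ...  # real code collects stats from the running domain here

    try:
        get_instance_diagnostics(
            {"uuid": "96a47a1c-57c7-4bb1-aecc-33db976db8c7",
             "power_state": SHUTDOWN})
    except InstanceInvalidState as e:
        print(e)  # same message as logged by oslo_messaging.rpc.server above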
Dec 5 04:50:11 localhost podman[281073]: 2025-12-05 09:50:11.212509486 +0000 UTC m=+0.093085962 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute) Dec 5 04:50:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35992 DF PROTO=TCP SPT=41768 DPT=9102 SEQ=3325850458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5C2CA70000000001030307) Dec 5 04:50:11 localhost podman[281072]: 2025-12-05 09:50:11.180434184 +0000 UTC m=+0.071746086 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:50:11 localhost podman[281071]: 2025-12-05 09:50:11.235804372 +0000 UTC m=+0.125902626 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:50:11 localhost podman[281071]: 2025-12-05 09:50:11.248299441 +0000 UTC m=+0.138397675 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 04:50:11 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
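[note] The "Started /usr/bin/podman healthcheck run <id>" units followed by container health_status/exec_died events and "Deactivated successfully" are systemd-timer-driven runs of podman's built-in healthcheck; the probe command comes from each container's healthcheck config ('test': '/openstack/healthcheck ...'). The same probe can be driven by hand, with exit status 0 meaning healthy and non-zero unhealthy. A sketch, using a container ID taken from the log:

    import subprocess

    def is_healthy(container_id: str) -> bool:
        # "podman healthcheck run" executes the container's configured
        # health test and reports the result via its exit status.
        proc = subprocess.run(
            ["podman", "healthcheck", "run", container_id],
            capture_output=True, text=True)
        return proc.returncode == 0

    cid = "192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6"
    print("healthy" if is_healthy(cid) else "unhealthy")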
Dec 5 04:50:11 localhost podman[281073]: 2025-12-05 09:50:11.29674687 +0000 UTC m=+0.177323306 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 5 04:50:11 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
Dec 5 04:50:11 localhost podman[281072]: 2025-12-05 09:50:11.316937511 +0000 UTC m=+0.208249463 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 5 04:50:11 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:50:12 localhost nova_compute[280228]: 2025-12-05 09:50:12.012 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:12 localhost systemd[1]: tmp-crun.OsEFmV.mount: Deactivated successfully. 
Dec 5 04:50:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35993 DF PROTO=TCP SPT=41768 DPT=9102 SEQ=3325850458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5C30C60000000001030307) Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.942 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'shutdown', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'stopped', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.943 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.945 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of network.incoming.bytes.delta: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.945 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.946 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of disk.device.capacity: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.946 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.947 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of memory.usage: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.947 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.948 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of network.incoming.bytes: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.948 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.949 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of disk.device.write.bytes: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.949 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.950 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of disk.device.read.latency: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.950 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.951 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of disk.device.read.bytes: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.951 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.952 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of network.outgoing.bytes.delta: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.952 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.953 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of network.outgoing.packets.drop: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.953 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.954 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of network.incoming.packets: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.954 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.955 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of network.outgoing.bytes: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.955 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.956 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of disk.device.write.latency: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.956 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.957 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of disk.device.read.requests: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.957 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.958 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of network.incoming.packets.drop: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.958 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.959 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of disk.device.usage: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.959 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.960 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of network.outgoing.packets.error: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.960 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.960 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.961 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of disk.device.allocation: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.961 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.961 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.962 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of cpu: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.962 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.963 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of disk.device.write.requests: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.963 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.964 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of network.outgoing.packets: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.965 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.965 12 DEBUG ceilometer.compute.pollsters [-] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 was shut off while getting sample of network.incoming.packets.error: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 5 04:50:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:50:12.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:50:13 localhost nova_compute[280228]: 2025-12-05 09:50:13.156 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59519 DF PROTO=TCP SPT=48044 DPT=9102 SEQ=510076032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5C34450000000001030307) Dec 5 04:50:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35994 DF PROTO=TCP SPT=41768 DPT=9102 SEQ=3325850458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5C38C50000000001030307) Dec 5 04:50:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41911 DF PROTO=TCP SPT=53290 DPT=9102 SEQ=3969074191 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5C3C450000000001030307) Dec 5 04:50:16 localhost nova_compute[280228]: 2025-12-05 09:50:16.624 280232 DEBUG nova.objects.instance [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Lazy-loading 'flavor' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:50:16 localhost nova_compute[280228]: 2025-12-05 09:50:16.646 280232 DEBUG oslo_concurrency.lockutils [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:50:16 localhost nova_compute[280228]: 2025-12-05 09:50:16.647 280232 DEBUG oslo_concurrency.lockutils [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:50:16 localhost nova_compute[280228]: 2025-12-05 09:50:16.647 280232 DEBUG nova.network.neutron [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 
e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 5 04:50:16 localhost nova_compute[280228]: 2025-12-05 09:50:16.647 280232 DEBUG nova.objects.instance [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:50:17 localhost nova_compute[280228]: 2025-12-05 09:50:17.050 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.159 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:18 localhost podman[281132]: 2025-12-05 09:50:18.20305386 +0000 UTC m=+0.089415651 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 04:50:18 localhost podman[281132]: 2025-12-05 09:50:18.268801522 +0000 UTC m=+0.155163293 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3) Dec 5 04:50:18 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:50:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35995 DF PROTO=TCP SPT=41768 DPT=9102 SEQ=3325850458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5C48850000000001030307) Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.576 280232 DEBUG nova.network.neutron [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.604 280232 DEBUG oslo_concurrency.lockutils [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.632 280232 INFO nova.virt.libvirt.driver [-] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Instance destroyed successfully.#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.633 280232 DEBUG nova.objects.instance [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Lazy-loading 'numa_topology' on Instance uuid 
96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.651 280232 DEBUG nova.objects.instance [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Lazy-loading 'resources' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.667 280232 DEBUG nova.virt.libvirt.vif [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T08:35:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),hidden=False,host='np0005546419.localdomain',hostname='test',id=2,image_ref='e7469c27-9043-4bd0-b0a4-5b489dcf3ae2',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-05T08:35:39Z,launched_on='np0005546419.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='np0005546419.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=4,progress=0,project_id='e6ca8a92050741d3a93772e6c1b0d704',ramdisk_id='',reservation_id='r-99d0dddi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='e7469c27-9043-4bd0-b0a4-5b489dcf3ae2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=,task_state='powering-on',terminated_at=None,trusted_certs=,updated_at=2025-12-05T09:49:50Z,user_data=None,user_id='52d0a54dc45b4c4caaba721ba3202150',uuid=96a47a1c-57c7-4bb1-aecc-33db976db8c7,vcpu_model=,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": 
true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.668 280232 DEBUG nova.network.os_vif_util [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Converting VIF {"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.669 280232 DEBUG nova.network.os_vif_util [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.669 280232 DEBUG os_vif [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.672 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.673 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2f95d81-23, bridge=br-int, 
if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.676 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.679 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.681 280232 INFO os_vif [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23')#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.684 280232 DEBUG nova.virt.libvirt.host [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.685 280232 INFO nova.virt.libvirt.host [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] UEFI support detected#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.692 280232 DEBUG nova.virt.libvirt.driver [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Start _get_guest_xml network_info=[{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum=,container_format='bare',created_at=,direct_url=,disk_format='qcow2',id=e7469c27-9043-4bd0-b0a4-5b489dcf3ae2,min_disk=1,min_ram=0,name=,owner=,properties=ImageMetaProps,protected=,size=,status=,tags=,updated_at=,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_type': 'disk', 'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'guest_format': None, 'image_id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}], 'ephemerals': [{'encryption_secret_uuid': None, 'device_type': 'disk', 'size': 1, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'encryption_options': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.696 280232 WARNING nova.virt.libvirt.driver [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.699 280232 DEBUG nova.virt.libvirt.host [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Searching host: 'np0005546419.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.700 280232 DEBUG nova.virt.libvirt.host [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.702 280232 DEBUG nova.virt.libvirt.host [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Searching host: 'np0005546419.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.703 280232 DEBUG nova.virt.libvirt.host [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.703 280232 DEBUG nova.virt.libvirt.driver [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.704 280232 DEBUG nova.virt.hardware [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T08:34:33Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bb6181df-1ada-42c2-81f6-896f08302073',id=2,is_public=True,memory_mb=512,name='m1.small',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=,container_format='bare',created_at=,direct_url=,disk_format='qcow2',id=e7469c27-9043-4bd0-b0a4-5b489dcf3ae2,min_disk=1,min_ram=0,name=,owner=,properties=ImageMetaProps,protected=,size=,status=,tags=,updated_at=,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.705 280232 DEBUG nova.virt.hardware [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.705 280232 DEBUG nova.virt.hardware [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.705 280232 DEBUG nova.virt.hardware [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.706 280232 DEBUG nova.virt.hardware [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.706 280232 DEBUG nova.virt.hardware [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.706 280232 DEBUG nova.virt.hardware [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default 
default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.707 280232 DEBUG nova.virt.hardware [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.707 280232 DEBUG nova.virt.hardware [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.708 280232 DEBUG nova.virt.hardware [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.708 280232 DEBUG nova.virt.hardware [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.709 280232 DEBUG nova.objects.instance [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.726 280232 DEBUG nova.privsep.utils [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Dec 5 04:50:18 localhost nova_compute[280228]: 2025-12-05 09:50:18.727 280232 DEBUG oslo_concurrency.processutils [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.181 280232 DEBUG oslo_concurrency.processutils [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.183 280232 DEBUG oslo_concurrency.processutils [None 
req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.650 280232 DEBUG oslo_concurrency.processutils [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.652 280232 DEBUG nova.virt.libvirt.vif [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T08:35:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),hidden=False,host='np0005546419.localdomain',hostname='test',id=2,image_ref='e7469c27-9043-4bd0-b0a4-5b489dcf3ae2',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-05T08:35:39Z,launched_on='np0005546419.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='np0005546419.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=4,progress=0,project_id='e6ca8a92050741d3a93772e6c1b0d704',ramdisk_id='',reservation_id='r-99d0dddi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='e7469c27-9043-4bd0-b0a4-5b489dcf3ae2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=,task_state='powering-on',terminated_at=None,trusted_certs=,updated_at=2025-12-05T09:49:50Z,user_data=None,user_id='52d0a54dc45b4c4caaba721ba3202150',uuid=96a47a1c-57c7-4bb1-aecc-33db976db8c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.653 280232 DEBUG nova.network.os_vif_util [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Converting VIF {"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.654 280232 DEBUG nova.network.os_vif_util [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.659 280232 DEBUG nova.objects.instance [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Lazy-loading 'pci_devices' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.679 280232 DEBUG nova.virt.libvirt.driver [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] End _get_guest_xml xml= Dec 5 04:50:19 localhost nova_compute[280228]: 
[libvirt guest XML elided: the domain definition for instance-00000002 (uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7) was logged here, but its XML markup was stripped in capture. Recoverable fields: memory 524288 KiB, 1 vCPU; Nova metadata (instance name 'test', creation time 2025-12-05 09:50:18, flavor 512 MB RAM / 1 vCPU / root 1 GB / swap 0 / ephemeral 1 GB, owner user 'admin', project 'admin'); SMBIOS sysinfo (manufacturer RDO, product 'OpenStack Compute', version 27.5.2-0.20250829104910.6f8decf.el9, serial and uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7, family 'Virtual Machine'); OS type hvm; RNG backend /dev/urandom.] _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.680 280232 DEBUG nova.virt.libvirt.driver [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.680 280232 DEBUG nova.virt.libvirt.driver [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] skipping disk for
instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.682 280232 DEBUG nova.virt.libvirt.vif [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T08:35:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),hidden=False,host='np0005546419.localdomain',hostname='test',id=2,image_ref='e7469c27-9043-4bd0-b0a4-5b489dcf3ae2',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-05T08:35:39Z,launched_on='np0005546419.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='np0005546419.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=,power_state=4,progress=0,project_id='e6ca8a92050741d3a93772e6c1b0d704',ramdisk_id='',reservation_id='r-99d0dddi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='e7469c27-9043-4bd0-b0a4-5b489dcf3ae2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=,task_state='powering-on',terminated_at=None,trusted_certs=,updated_at=2025-12-05T09:49:50Z,user_data=None,user_id='52d0a54dc45b4c4caaba721ba3202150',uuid=96a47a1c-57c7-4bb1-aecc-33db976db8c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.682 280232 DEBUG nova.network.os_vif_util [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Converting VIF {"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.683 280232 DEBUG nova.network.os_vif_util [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.684 280232 DEBUG os_vif [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.685 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.685 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.686 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.689 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.689 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2f95d81-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.690 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2f95d81-23, col_values=(('external_ids', {'iface-id': 'c2f95d81-2317-46b9-8146-596eac8f9acb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:e6:3a', 'vm-uuid': '96a47a1c-57c7-4bb1-aecc-33db976db8c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.692 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.695 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.699 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.700 280232 INFO os_vif [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:e6:3a,bridge_name='br-int',has_traffic_filtering=True,id=c2f95d81-2317-46b9-8146-596eac8f9acb,network=Network(86f5c13f-3cf8-4808-86c3-060f6b38ab5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2f95d81-23')#033[00m Dec 5 04:50:19 localhost systemd[1]: Started libvirt secret daemon. Dec 5 04:50:19 localhost kernel: device tapc2f95d81-23 entered promiscuous mode Dec 5 04:50:19 localhost ovn_controller[153000]: 2025-12-05T09:50:19Z|00059|binding|INFO|Claiming lport c2f95d81-2317-46b9-8146-596eac8f9acb for this chassis. Dec 5 04:50:19 localhost ovn_controller[153000]: 2025-12-05T09:50:19Z|00060|binding|INFO|c2f95d81-2317-46b9-8146-596eac8f9acb: Claiming fa:16:3e:04:e6:3a 192.168.0.214 Dec 5 04:50:19 localhost NetworkManager[5960]: [1764928219.7959] manager: (tapc2f95d81-23): new Tun device (/org/freedesktop/NetworkManager/Devices/16) Dec 5 04:50:19 localhost systemd-udevd[281297]: Network interface NamePolicy= disabled on kernel command line. 
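The plug sequence just above -- AddBridgeCommand(br-int, may_exist=True), AddPortCommand(br-int, tapc2f95d81-23), then DbSetCommand writing iface-id/iface-status/attached-mac/vm-uuid into the Interface's external_ids -- is os-vif driving the local ovsdb-server through ovsdbapp's IDL transaction API (the do_commit frames in ovsdbapp/backend/ovs_idl/transaction.py). A minimal standalone sketch of that same two-command transaction, following ovsdbapp's documented usage; the tcp:127.0.0.1:6640 endpoint is an assumption, while the bridge/port names and external_ids values are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect the OVSDB IDL to the local switch database (endpoint assumed).
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction, two commands -- mirrors "txn n=1 command(idx=0/1)" above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapc2f95d81-23', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapc2f95d81-23',
            ('external_ids', {
                'iface-id': 'c2f95d81-2317-46b9-8146-596eac8f9acb',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:04:e6:3a',
                'vm-uuid': '96a47a1c-57c7-4bb1-aecc-33db976db8c7'})))

Once the external_ids land, ovn-controller matches the iface-id to the logical port and claims it for this chassis -- the binding|INFO "Claiming lport" lines above.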
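Further up (09:50:18.727 through 09:50:19.650), the driver also shelled out to ceph mon dump twice through oslo.concurrency's processutils, each call returning 0 in about half a second. A sketch of that same probe as the library exposes it; the argv is copied from the log, and the 'mons'/'name' keys reflect Ceph's JSON monmap layout:

    import json
    from oslo_concurrency import processutils

    # Same command line the driver logged; returns (stdout, stderr) and
    # raises ProcessExecutionError on a non-zero exit by default.
    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(out)
    print([mon['name'] for mon in mon_map['mons']])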
Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.793 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.804 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.811 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:e6:3a 192.168.0.214'], port_security=['fa:16:3e:04:e6:3a 192.168.0.214'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.214/24', 'neutron:device_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1af67ae0-d372-40b9-b93c-60c041b7465b 80b85888-ac92-454c-bb81-84292c7ac789', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=961a9491-8d79-4baf-950b-57a666c30c22, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=c2f95d81-2317-46b9-8146-596eac8f9acb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 04:50:19 localhost NetworkManager[5960]: [1764928219.8144] device (tapc2f95d81-23): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.814 158820 INFO neutron.agent.ovn.metadata.agent [-] Port c2f95d81-2317-46b9-8146-596eac8f9acb in datapath 86f5c13f-3cf8-4808-86c3-060f6b38ab5b bound to our chassis#033[00m Dec 5 04:50:19 localhost NetworkManager[5960]: [1764928219.8154] device (tapc2f95d81-23): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 5 04:50:19 localhost ovn_controller[153000]: 2025-12-05T09:50:19Z|00061|ovn_bfd|INFO|Enabled BFD on interface ovn-40c64e-0 Dec 5 04:50:19 localhost ovn_controller[153000]: 2025-12-05T09:50:19Z|00062|ovn_bfd|INFO|Enabled BFD on interface ovn-473cc8-0 Dec 5 04:50:19 localhost ovn_controller[153000]: 2025-12-05T09:50:19Z|00063|ovn_bfd|INFO|Enabled BFD on interface ovn-f5bb44-0 Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.817 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.818 158820 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 86f5c13f-3cf8-4808-86c3-060f6b38ab5b#033[00m Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.832 158926 DEBUG oslo.privsep.daemon [-] privsep: 
reply[2f8a3f58-369b-4ea5-8636-b7cd8f3ccdc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.833 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap86f5c13f-31 in ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.836 158926 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap86f5c13f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.836 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[fb2e0e69-27dd-42d6-beeb-737a349d9bd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.839 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3adf62-0085-4f63-9c84-36815ce5eb6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.851 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.854 158957 DEBUG oslo.privsep.daemon [-] privsep: reply[920529c4-87d8-4a63-a2f2-a95f75442fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.856 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.867 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost podman[239519]: time="2025-12-05T09:50:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.870 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost ovn_controller[153000]: 2025-12-05T09:50:19Z|00064|binding|INFO|Setting lport c2f95d81-2317-46b9-8146-596eac8f9acb ovn-installed in OVS Dec 5 04:50:19 localhost ovn_controller[153000]: 2025-12-05T09:50:19Z|00065|binding|INFO|Setting lport c2f95d81-2317-46b9-8146-596eac8f9acb up in Southbound Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.875 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf1ef41-fc1a-4705-b8e0-377c99e03af6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:19 localhost systemd-machined[83348]: New machine qemu-2-instance-00000002. Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.882 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost systemd[1]: Started Virtual Machine qemu-2-instance-00000002. 
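The ovn_metadata_agent entries above show provision_datapath wiring the metadata namespace for network 86f5c13f-3cf8-4808-86c3-060f6b38ab5b: a VETH pair whose inner end tap86f5c13f-31 lives in the ovnmeta- namespace while the outer end tap86f5c13f-30 stays in the root namespace for OVS (it is re-plugged into br-int a few entries below via the same ovsdbapp pattern sketched earlier). Neutron performs these steps behind oslo.privsep -- hence the privsep: reply[...] frames -- but a rough, self-contained equivalent with pyroute2, using only the names from the log and nothing from neutron's actual helpers, looks like:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b'
    netns.create(ns)  # the per-network ovnmeta-<uuid> namespace

    ipr = IPRoute()
    # VETH pair: -30 stays in the root namespace, -31 goes inside.
    ipr.link('add', ifname='tap86f5c13f-30', kind='veth',
             peer='tap86f5c13f-31')
    inner = ipr.link_lookup(ifname='tap86f5c13f-31')[0]
    ipr.link('set', index=inner, net_ns_fd=ns)  # move inner end into the ns
    outer = ipr.link_lookup(ifname='tap86f5c13f-30')[0]
    ipr.link('set', index=outer, state='up')
    ipr.close()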
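The podman[239519] entries around here (the "List containers" message, then GET /v4.9.3/libpod/containers/json and /containers/stats answered with 200) are a client polling podman's libpod REST API over its unix socket. A small sketch of issuing the same request from Python's standard library; the /run/podman/podman.sock path is an assumption (the rootful service default) and the query string is trimmed from the one logged:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket; the host name is ignored."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    print(resp.status, len(containers), 'containers')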
Dec 5 04:50:19 localhost podman[239519]: @ - - [05/Dec/2025:09:50:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146549 "" "Go-http-client/1.1" Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.902 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d2b20e-8937-4109-a04a-ee26c1c812bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.907 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0e7062-e7e9-4088-ba60-f53e32d3d576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:19 localhost systemd-udevd[281299]: Network interface NamePolicy= disabled on kernel command line. Dec 5 04:50:19 localhost NetworkManager[5960]: [1764928219.9153] manager: (tap86f5c13f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/17) Dec 5 04:50:19 localhost nova_compute[280228]: 2025-12-05 09:50:19.915 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:19 localhost podman[239519]: @ - - [05/Dec/2025:09:50:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16743 "" "Go-http-client/1.1" Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.951 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[36b8bf4d-1b40-4961-bd2b-3ab7ef53dfe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.955 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[ea14f56b-1a4a-417b-a962-21df872d5f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:19 localhost NetworkManager[5960]: [1764928219.9759] device (tap86f5c13f-30): carrier: link connected Dec 5 04:50:19 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap86f5c13f-31: link becomes ready Dec 5 04:50:19 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap86f5c13f-30: link becomes ready Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.980 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[d21bfd05-f41f-46d7-b5e5-0dc207d77bed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:19 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:19.997 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[016b185a-fad8-45f2-8bfe-49d0df412f52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86f5c13f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0c:1c:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 
'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1133507, 'reachable_time': 26146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281335, 'error': None, 'target': 
'ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:20.009 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2e8b22-f282-4393-af54-71d8686d7351]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:1cd5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1133507, 'tstamp': 1133507}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281336, 'error': None, 'target': 'ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:20.023 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[d5dba190-012a-46c1-bb7a-e2570d35fde3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86f5c13f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0c:1c:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 
'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1133507, 'reachable_time': 26146, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281337, 'error': None, 'target': 'ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:20.050 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[a5fd242a-b108-405b-96b0-c21665a1c7df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:20.098 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4f009e-f6b6-49b3-ac16-a56a3bbd2d73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:20.099 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86f5c13f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:20.099 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:20.100 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86f5c13f-30, may_exist=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.101 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:50:20 localhost kernel: device tap86f5c13f-30 entered promiscuous mode
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:20.104 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap86f5c13f-30, col_values=(('external_ids', {'iface-id': '6eec4798-2413-4eda-86b7-a390f3150ec8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 5 04:50:20 localhost ovn_controller[153000]: 2025-12-05T09:50:20Z|00066|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.105 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:20.106 158820 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.108 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:20.108 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[a132b92b-ebe9-4658-8832-b7bd978d9ce8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:20.110 158820 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg =
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: global
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: log /dev/log local0 debug
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: log-tag haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: user root
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: group root
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: maxconn 1024
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: pidfile /var/lib/neutron/external/pids/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.pid.haproxy
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: daemon
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]:
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: defaults
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: log global
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: mode http
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: option httplog
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: option dontlognull
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: option http-server-close
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: option forwardfor
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: retries 3
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: timeout http-request 30s
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: timeout connect 30s
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: timeout client 32s
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: timeout server 32s
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: timeout http-keep-alive 30s
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]:
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]:
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: listen listener
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: bind 169.254.169.254:80
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: server metadata /var/lib/neutron/metadata_proxy
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: http-request add-header X-OVN-Network-ID 86f5c13f-3cf8-4808-86c3-060f6b38ab5b
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 5 04:50:20 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:20.111 158820 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'env', 'PROCESS_TAG=haproxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/86f5c13f-3cf8-4808-86c3-060f6b38ab5b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.220 280232 DEBUG nova.virt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.221 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] VM Resumed (Lifecycle Event)#033[00m
Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.227 280232 DEBUG nova.compute.manager [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.229 280232 INFO nova.virt.libvirt.driver [-] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Instance rebooted successfully.#033[00m
Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.229 280232 DEBUG nova.compute.manager [None req-1ded1570-3964-4aa0-aaed-18ea66355f88 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.255 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.257 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Synchronizing instance power state after
lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.278 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.279 280232 DEBUG nova.virt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.279 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] VM Started (Lifecycle Event)#033[00m Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.308 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.311 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 5 04:50:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
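The "Synchronizing instance power state" records above compare three things: Nova's stored vm_state/task_state, the power_state column in its database (4), and what libvirt currently reports (1). The integer codes are the ones defined in nova.compute.power_state; the sketch below illustrates the comparison and the skip-on-pending-task behaviour seen in the log, but it is a simplified illustration, not Nova's actual _sync_instance_power_state.

    # Power-state codes as defined by nova.compute.power_state.
    POWER_STATES = {
        0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
        4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED',
    }

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # A pending task (here 'powering-on') wins: the hypervisor state is
        # transient, so the periodic sync skips instead of fighting the task.
        if task_state is not None:
            return 'skip'
        if db_power_state != vm_power_state:
            return 'update-db'
        return 'in-sync'

    # DB says SHUTDOWN (4), libvirt says RUNNING (1), task still pending -> skip,
    # matching "During sync_power_state the instance has a pending task".
    print(sync_power_state(4, 1, 'powering-on'))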
Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.497 280232 DEBUG nova.compute.manager [req-235badb3-f9e1-4252-928e-b897ae641fa5 req-1046a3e8-76a3-4f23-ba1e-2e8a29c536b2 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Received event network-vif-plugged-c2f95d81-2317-46b9-8146-596eac8f9acb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.497 280232 DEBUG oslo_concurrency.lockutils [req-235badb3-f9e1-4252-928e-b897ae641fa5 req-1046a3e8-76a3-4f23-ba1e-2e8a29c536b2 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.500 280232 DEBUG oslo_concurrency.lockutils [req-235badb3-f9e1-4252-928e-b897ae641fa5 req-1046a3e8-76a3-4f23-ba1e-2e8a29c536b2 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.501 280232 DEBUG oslo_concurrency.lockutils [req-235badb3-f9e1-4252-928e-b897ae641fa5 req-1046a3e8-76a3-4f23-ba1e-2e8a29c536b2 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.501 280232 DEBUG nova.compute.manager [req-235badb3-f9e1-4252-928e-b897ae641fa5 req-1046a3e8-76a3-4f23-ba1e-2e8a29c536b2 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] No waiting events found dispatching network-vif-plugged-c2f95d81-2317-46b9-8146-596eac8f9acb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.502 280232 WARNING nova.compute.manager [req-235badb3-f9e1-4252-928e-b897ae641fa5 req-1046a3e8-76a3-4f23-ba1e-2e8a29c536b2 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Received unexpected event network-vif-plugged-c2f95d81-2317-46b9-8146-596eac8f9acb for instance with vm_state active and task_state None.#033[00m Dec 5 04:50:20 localhost systemd[1]: tmp-crun.PSdPH5.mount: Deactivated successfully. 
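The Acquiring/acquired/released DEBUG triplet above is oslo.concurrency's lock instrumentation around the per-instance event queue (lock name "<uuid>-events"). A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed; the empty waiter table is what produces the "No waiting events found" outcome:

    from oslo_concurrency import lockutils

    # Waiters registered by wait_for_instance_event would live here, keyed by
    # instance UUID and event name; empty in this sketch.
    _events = {}

    def pop_instance_event(instance_uuid, event_name):
        with lockutils.lock(f'{instance_uuid}-events'):
            return _events.get(instance_uuid, {}).pop(event_name, None)

    pop_instance_event('96a47a1c-57c7-4bb1-aecc-33db976db8c7',
                       'network-vif-plugged-c2f95d81-2317-46b9-8146-596eac8f9acb')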
Dec 5 04:50:20 localhost podman[281425]: 2025-12-05 09:50:20.573091039 +0000 UTC m=+0.094399402 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 04:50:20 localhost podman[281425]: 2025-12-05 09:50:20.581021139 +0000 UTC m=+0.102329492 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 04:50:20 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
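The "GET /v4.9.3/libpod/..." access-log records earlier in this capture come from clients of the Podman REST API served over a Unix socket by the podman system service. A stdlib-only sketch of the same containers query; the socket path is the usual rootful default and is an assumption here:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    # Assumed default rootful socket; adjust for rootless setups.
    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print([c.get('Names') for c in containers])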
Dec 5 04:50:20 localhost podman[281436]: Dec 5 04:50:20 localhost podman[281436]: 2025-12-05 09:50:20.608962726 +0000 UTC m=+0.114831871 container create dc715794c4a485ab5d1cb6d29bde8c33a3cf12c9b75c24f2758af1922a808e2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true) Dec 5 04:50:20 localhost systemd[1]: Started libpod-conmon-dc715794c4a485ab5d1cb6d29bde8c33a3cf12c9b75c24f2758af1922a808e2e.scope. Dec 5 04:50:20 localhost systemd[1]: Started libcrun container. Dec 5 04:50:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9043ab55eb5130d27d6c6f25fac8f73a85f8b4aab799e4eb2cc1e903f7863f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 04:50:20 localhost podman[281436]: 2025-12-05 09:50:20.554243057 +0000 UTC m=+0.060112232 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 5 04:50:20 localhost podman[281436]: 2025-12-05 09:50:20.657994792 +0000 UTC m=+0.163863937 container init dc715794c4a485ab5d1cb6d29bde8c33a3cf12c9b75c24f2758af1922a808e2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 5 04:50:20 localhost podman[281436]: 2025-12-05 09:50:20.666655184 +0000 UTC m=+0.172524339 container start dc715794c4a485ab5d1cb6d29bde8c33a3cf12c9b75c24f2758af1922a808e2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:50:20 localhost neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281465]: [NOTICE] (281469) : New worker (281471) forked Dec 5 04:50:20 localhost neutron-haproxy-ovnmeta-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281465]: [NOTICE] (281469) : Loading success. 
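"Loading success" above means the haproxy rendered from the haproxy_cfg logged at 09:50:20.110 is now serving 169.254.169.254:80 inside the ovnmeta- namespace. The "Running command" record shows exactly how it was launched; a stripped-down reproduction of that call, with subprocess.run standing in for neutron's create_process helper:

    import subprocess

    network_id = '86f5c13f-3cf8-4808-86c3-060f6b38ab5b'
    # Same argv as the log: rootwrap escalates, ip netns exec enters the
    # metadata namespace, and PROCESS_TAG labels the haproxy for later cleanup.
    cmd = [
        'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
        'ip', 'netns', 'exec', f'ovnmeta-{network_id}',
        'env', f'PROCESS_TAG=haproxy-{network_id}',
        'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf',
    ]
    subprocess.run(cmd, check=True)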
Dec 5 04:50:20 localhost ovn_controller[153000]: 2025-12-05T09:50:20Z|00067|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:50:20 localhost nova_compute[280228]: 2025-12-05 09:50:20.820 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:20 localhost ovn_controller[153000]: 2025-12-05T09:50:20Z|00068|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:50:21 localhost nova_compute[280228]: 2025-12-05 09:50:21.601 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:21 localhost ovn_controller[153000]: 2025-12-05T09:50:21Z|00069|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:50:21 localhost nova_compute[280228]: 2025-12-05 09:50:21.705 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:21 localhost ovn_controller[153000]: 2025-12-05T09:50:21Z|00070|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 04:50:22 localhost snmpd[66746]: IfIndex of an interface changed. Such interfaces will appear multiple times in IF-MIB. Dec 5 04:50:22 localhost nova_compute[280228]: 2025-12-05 09:50:22.699 280232 DEBUG nova.compute.manager [req-3822a025-dab0-41b7-97e2-94c1989b93c4 req-fcdc1f8e-d4c7-4953-925c-7d0717326a83 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Received event network-vif-plugged-c2f95d81-2317-46b9-8146-596eac8f9acb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 5 04:50:22 localhost nova_compute[280228]: 2025-12-05 09:50:22.702 280232 DEBUG oslo_concurrency.lockutils [req-3822a025-dab0-41b7-97e2-94c1989b93c4 req-fcdc1f8e-d4c7-4953-925c-7d0717326a83 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:50:22 localhost nova_compute[280228]: 2025-12-05 09:50:22.703 280232 DEBUG oslo_concurrency.lockutils [req-3822a025-dab0-41b7-97e2-94c1989b93c4 req-fcdc1f8e-d4c7-4953-925c-7d0717326a83 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:50:22 localhost nova_compute[280228]: 2025-12-05 09:50:22.704 280232 DEBUG oslo_concurrency.lockutils [req-3822a025-dab0-41b7-97e2-94c1989b93c4 req-fcdc1f8e-d4c7-4953-925c-7d0717326a83 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:50:22 localhost nova_compute[280228]: 2025-12-05 09:50:22.705 280232 DEBUG 
nova.compute.manager [req-3822a025-dab0-41b7-97e2-94c1989b93c4 req-fcdc1f8e-d4c7-4953-925c-7d0717326a83 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] No waiting events found dispatching network-vif-plugged-c2f95d81-2317-46b9-8146-596eac8f9acb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 5 04:50:22 localhost nova_compute[280228]: 2025-12-05 09:50:22.705 280232 WARNING nova.compute.manager [req-3822a025-dab0-41b7-97e2-94c1989b93c4 req-fcdc1f8e-d4c7-4953-925c-7d0717326a83 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Received unexpected event network-vif-plugged-c2f95d81-2317-46b9-8146-596eac8f9acb for instance with vm_state active and task_state None.#033[00m Dec 5 04:50:23 localhost nova_compute[280228]: 2025-12-05 09:50:23.205 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:24 localhost nova_compute[280228]: 2025-12-05 09:50:24.693 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35996 DF PROTO=TCP SPT=41768 DPT=9102 SEQ=3325850458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5C68450000000001030307) Dec 5 04:50:27 localhost openstack_network_exporter[241668]: ERROR 09:50:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:50:27 localhost openstack_network_exporter[241668]: ERROR 09:50:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:50:27 localhost openstack_network_exporter[241668]: ERROR 09:50:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:50:27 localhost openstack_network_exporter[241668]: ERROR 09:50:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:50:27 localhost openstack_network_exporter[241668]: Dec 5 04:50:27 localhost openstack_network_exporter[241668]: ERROR 09:50:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:50:27 localhost openstack_network_exporter[241668]: Dec 5 04:50:28 localhost nova_compute[280228]: 2025-12-05 09:50:28.235 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:29 localhost nova_compute[280228]: 2025-12-05 09:50:29.694 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:31 localhost ovn_controller[153000]: 2025-12-05T09:50:31Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:e6:3a 192.168.0.214 Dec 5 04:50:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
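The kernel "DROPPING:" record at 04:50:26 is a netfilter LOG entry for a SYN to TCP/9102 dropped on br-ex. Its key=value payload is easy to machine-parse; a small sketch (flag-only tokens such as DF and SYN carry no value and are simply skipped by this pattern):

    import re

    line = ('DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b '
            'MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 '
            'DST=192.168.122.106 LEN=60 TTL=62 PROTO=TCP SPT=41768 DPT=9102 SYN')
    # Capture every KEY=value pair; empty values (OUT=) and bare flags drop out.
    fields = dict(re.findall(r'(\w+)=(\S+)', line))
    print(fields['SRC'], '->', fields['DST'], fields['PROTO'], 'dport', fields['DPT'])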
Dec 5 04:50:32 localhost podman[281481]: 2025-12-05 09:50:32.243049422 +0000 UTC m=+0.129578569 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, release=1755695350, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm) Dec 5 04:50:32 localhost podman[281481]: 2025-12-05 09:50:32.282338383 +0000 UTC m=+0.168867610 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 5 04:50:32 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
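The burst of requests below (instance-id, public-keys, ami-launch-index, and so on) is a guest walking the EC2-style metadata tree through the haproxy proxy at 169.254.169.254. An equivalent stdlib-only probe from inside the instance; the 404 branch matches the absent public-keys and user-data entries in the log:

    import urllib.error
    import urllib.request

    BASE = 'http://169.254.169.254/2009-04-04'
    for path in ('meta-data/instance-id', 'meta-data/public-keys',
                 'meta-data/local-ipv4', 'user-data'):
        try:
            with urllib.request.urlopen(f'{BASE}/{path}', timeout=5) as resp:
                print(path, resp.status, resp.read()[:60])
        except urllib.error.HTTPError as exc:  # e.g. 404 for absent user-data
            print(path, exc.code)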
Dec 5 04:50:33 localhost nova_compute[280228]: 2025-12-05 09:50:33.261 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:34 localhost nova_compute[280228]: 2025-12-05 09:50:34.712 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:50:35 localhost podman[281501]: 2025-12-05 09:50:35.183491787 +0000 UTC m=+0.070252910 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 5 04:50:35 localhost podman[281501]: 2025-12-05 09:50:35.198680828 +0000 UTC m=+0.085441951 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 5 04:50:35 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:50:37 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:37.318 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 5 04:50:37 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:37.320 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-id HTTP/1.0#015 Dec 5 04:50:37 localhost ovn_metadata_agent[158815]: Accept: */*#015 Dec 5 04:50:37 localhost ovn_metadata_agent[158815]: Connection: close#015 Dec 5 04:50:37 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015 Dec 5 04:50:37 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015 Dec 5 04:50:37 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015 Dec 5 04:50:37 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015 Dec 5 04:50:37 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 5 04:50:38 localhost nova_compute[280228]: 2025-12-05 09:50:38.241 280232 DEBUG nova.compute.manager [None req-697eea14-2e1f-4a37-822c-a0f5eed4ec2b 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 04:50:38 localhost nova_compute[280228]: 2025-12-05 09:50:38.248 280232 INFO nova.compute.manager [None req-697eea14-2e1f-4a37-822c-a0f5eed4ec2b 52d0a54dc45b4c4caaba721ba3202150 e6ca8a92050741d3a93772e6c1b0d704 - - default default] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Retrieving diagnostics#033[00m Dec 5 04:50:38 localhost nova_compute[280228]: 2025-12-05 09:50:38.294 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.462 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.462 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200 len: 146 time: 1.1422727#033[00m Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33470 [05/Dec/2025:09:50:37.318] listener listener/metadata 0/0/0/1144/1144 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Dec 5 04:50:38 localhost 
ovn_metadata_agent[158815]: 2025-12-05 09:50:38.480 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.481 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-keys HTTP/1.0#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33484 [05/Dec/2025:09:50:38.479] listener listener/metadata 0/0/0/28/28 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-keys HTTP/1.1" Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.508 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/public-keys HTTP/1.1" status: 404 len: 297 time: 0.0267398#033[00m Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.515 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.517 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-id HTTP/1.0#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.532 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33492 [05/Dec/2025:09:50:38.515] listener listener/metadata 0/0/0/18/18 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.534 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200 len: 146 time: 0.0170152#033[00m Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.536 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 5 04:50:38 localhost 
ovn_metadata_agent[158815]: 2025-12-05 09:50:38.537 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/ami-launch-index HTTP/1.0#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.556 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.557 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1" status: 200 len: 136 time: 0.0194404#033[00m Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33500 [05/Dec/2025:09:50:38.536] listener listener/metadata 0/0/0/20/20 200 120 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1" Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.560 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.561 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-type HTTP/1.0#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015 Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.575 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.575 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/instance-type HTTP/1.1" status: 200 len: 143 time: 0.0146105#033[00m Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33508 [05/Dec/2025:09:50:38.560] listener listener/metadata 0/0/0/15/15 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-type HTTP/1.1" Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.578 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 5 
04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.580 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/local-ipv4 HTTP/1.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.595 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.596 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1" status: 200 len: 149 time: 0.0159888#033[00m
Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33512 [05/Dec/2025:09:50:38.578] listener listener/metadata 0/0/0/17/17 200 133 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1"
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.603 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.604 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-ipv4 HTTP/1.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.620 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.620 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" status: 200 len: 150 time: 0.0166161#033[00m
Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33528 [05/Dec/2025:09:50:38.602] listener listener/metadata 0/0/0/18/18 200 134 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1"
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.627 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.628 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/hostname HTTP/1.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.645 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.645 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/hostname HTTP/1.1" status: 200 len: 139 time: 0.0171733#033[00m
Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33538 [05/Dec/2025:09:50:38.627] listener listener/metadata 0/0/0/18/18 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/hostname HTTP/1.1"
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.652 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.653 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/local-hostname HTTP/1.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.665 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33552 [05/Dec/2025:09:50:38.652] listener listener/metadata 0/0/0/13/13 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-hostname HTTP/1.1"
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.665 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/local-hostname HTTP/1.1" status: 200 len: 139 time: 0.0120084#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.672 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.673 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/user-data HTTP/1.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33562 [05/Dec/2025:09:50:38.672] listener listener/metadata 0/0/0/13/13 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/user-data HTTP/1.1"
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.685 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/user-data HTTP/1.1" status: 404 len: 297 time: 0.0124502#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.700 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.702 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping HTTP/1.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.720 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33578 [05/Dec/2025:09:50:38.700] listener listener/metadata 0/0/0/20/20 200 139 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1"
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.721 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1" status: 200 len: 155 time: 0.0191135#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.726 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.727 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.740 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33590 [05/Dec/2025:09:50:38.726] listener listener/metadata 0/0/0/14/14 200 122 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1"
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.740 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" status: 200 len: 138 time: 0.0135708#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.745 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.746 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.760 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.761 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1" status: 200 len: 143 time: 0.0146685#033[00m
Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33596 [05/Dec/2025:09:50:38.745] listener listener/metadata 0/0/0/15/15 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1"
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.766 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.767 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.781 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33604 [05/Dec/2025:09:50:38.766] listener listener/metadata 0/0/0/15/15 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1"
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.782 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" status: 200 len: 143 time: 0.0144117#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.789 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.791 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-hostname HTTP/1.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.806 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.806 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/public-hostname HTTP/1.1" status: 200 len: 139 time: 0.0154898#033[00m
Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33608 [05/Dec/2025:09:50:38.788] listener listener/metadata 0/0/0/17/17 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-hostname HTTP/1.1"
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.813 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.814 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Accept: */*#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Connection: close#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 192.168.0.214#015
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 86f5c13f-3cf8-4808-86c3-060f6b38ab5b __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.826 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 5 04:50:38 localhost haproxy-metadata-proxy-86f5c13f-3cf8-4808-86c3-060f6b38ab5b[281471]: 192.168.0.214:33620 [05/Dec/2025:09:50:38.812] listener listener/metadata 0/0/0/13/13 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1"
Dec 5 04:50:38 localhost ovn_metadata_agent[158815]: 2025-12-05 09:50:38.826 158921 INFO eventlet.wsgi.server [-] 192.168.0.214, "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" status: 200 len: 139 time: 0.0127945#033[00m
Dec 5 04:50:39 localhost nova_compute[280228]: 2025-12-05 09:50:39.757 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:50:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51098 DF PROTO=TCP SPT=34774 DPT=9102 SEQ=1315191375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5CA1D60000000001030307)
Dec 5 04:50:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 04:50:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:50:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 04:50:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51099 DF PROTO=TCP SPT=34774 DPT=9102 SEQ=1315191375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5CA5C50000000001030307)
Dec 5 04:50:42 localhost systemd[1]: tmp-crun.VKNNZW.mount: Deactivated successfully.
Dec 5 04:50:42 localhost podman[281522]: 2025-12-05 09:50:42.235069881 +0000 UTC m=+0.111852631 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 5 04:50:42 localhost podman[281521]: 2025-12-05 09:50:42.199489803 +0000 UTC m=+0.084612065 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 04:50:42 localhost podman[281521]: 2025-12-05 09:50:42.294689968 +0000 UTC m=+0.179812190 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 5 04:50:42 localhost podman[281523]: 2025-12-05 09:50:42.297636838 +0000 UTC m=+0.172491150 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 5 04:50:42 localhost podman[281522]: 2025-12-05 09:50:42.318810849 +0000 UTC m=+0.195593579 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 5 04:50:42 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 04:50:42 localhost podman[281523]: 2025-12-05 09:50:42.335761552 +0000 UTC m=+0.210615874 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 5 04:50:42 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 04:50:42 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 04:50:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35997 DF PROTO=TCP SPT=41768 DPT=9102 SEQ=3325850458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5CA8450000000001030307)
Dec 5 04:50:43 localhost nova_compute[280228]: 2025-12-05 09:50:43.333 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:50:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51100 DF PROTO=TCP SPT=34774 DPT=9102 SEQ=1315191375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5CADC50000000001030307)
Dec 5 04:50:44 localhost nova_compute[280228]: 2025-12-05 09:50:44.760 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:50:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59520 DF PROTO=TCP SPT=48044 DPT=9102 SEQ=510076032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5CB2450000000001030307)
Dec 5 04:50:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51101 DF PROTO=TCP SPT=34774 DPT=9102 SEQ=1315191375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5CBD850000000001030307)
Dec 5 04:50:48 localhost nova_compute[280228]: 2025-12-05 09:50:48.370 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:50:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 04:50:49 localhost systemd[1]: tmp-crun.5kQs3r.mount: Deactivated successfully.
Dec 5 04:50:49 localhost podman[281582]: 2025-12-05 09:50:49.221369595 +0000 UTC m=+0.101584620 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 5 04:50:49 localhost snmpd[66746]: empty variable list in _query
Dec 5 04:50:49 localhost snmpd[66746]: empty variable list in _query
Dec 5 04:50:49 localhost podman[281582]: 2025-12-05 09:50:49.254600392 +0000 UTC m=+0.134815347 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 5 04:50:49 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 04:50:49 localhost nova_compute[280228]: 2025-12-05 09:50:49.762 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:50:49 localhost podman[239519]: time="2025-12-05T09:50:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 04:50:49 localhost podman[239519]: @ - - [05/Dec/2025:09:50:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147736 "" "Go-http-client/1.1"
Dec 5 04:50:49 localhost ovn_controller[153000]: 2025-12-05T09:50:49Z|00071|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 5 04:50:49 localhost podman[239519]: @ - - [05/Dec/2025:09:50:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17228 "" "Go-http-client/1.1"
Dec 5 04:50:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 04:50:51 localhost podman[281605]: 2025-12-05 09:50:51.196600378 +0000 UTC m=+0.084328747 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 04:50:51 localhost podman[281605]: 2025-12-05 09:50:51.209698055 +0000 UTC m=+0.097426434 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 5 04:50:51 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 04:50:53 localhost nova_compute[280228]: 2025-12-05 09:50:53.406 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:50:54 localhost nova_compute[280228]: 2025-12-05 09:50:54.765 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:50:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51102 DF PROTO=TCP SPT=34774 DPT=9102 SEQ=1315191375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5CDE450000000001030307)
Dec 5 04:50:57 localhost openstack_network_exporter[241668]: ERROR 09:50:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 04:50:57 localhost openstack_network_exporter[241668]: ERROR 09:50:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:50:57 localhost openstack_network_exporter[241668]: ERROR 09:50:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:50:57 localhost openstack_network_exporter[241668]: ERROR 09:50:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 04:50:57 localhost openstack_network_exporter[241668]:
Dec 5 04:50:57 localhost openstack_network_exporter[241668]: ERROR 09:50:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 04:50:57 localhost openstack_network_exporter[241668]:
Dec 5 04:50:58 localhost nova_compute[280228]: 2025-12-05 09:50:58.448 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:50:59 localhost nova_compute[280228]: 2025-12-05 09:50:59.768 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:51:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.186 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.187 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:51:03 localhost podman[281628]: 2025-12-05 09:51:03.189738167 +0000 UTC m=+0.076631243 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41) Dec 5 04:51:03 localhost podman[281628]: 2025-12-05 09:51:03.207792475 +0000 UTC m=+0.094685531 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9) Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.208 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.209 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.209 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 04:51:03 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.489 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.490 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.491 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.491 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.495 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:51:03.898 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:51:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:51:03.899 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:51:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:51:03.899 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.905 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.923 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.924 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.925 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.925 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.926 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.926 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.926 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.927 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.927 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.928 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.952 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.952 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.953 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.953 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:51:03 localhost nova_compute[280228]: 2025-12-05 09:51:03.954 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:51:04 localhost nova_compute[280228]: 2025-12-05 09:51:04.382 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:51:04 localhost nova_compute[280228]: 2025-12-05 09:51:04.450 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:51:04 localhost nova_compute[280228]: 2025-12-05 09:51:04.451 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:51:04 localhost nova_compute[280228]: 2025-12-05 09:51:04.673 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:51:04 localhost nova_compute[280228]: 2025-12-05 09:51:04.674 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12308MB free_disk=41.83699417114258GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:51:04 localhost nova_compute[280228]: 2025-12-05 09:51:04.675 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] 
Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:51:04 localhost nova_compute[280228]: 2025-12-05 09:51:04.675 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:51:04 localhost nova_compute[280228]: 2025-12-05 09:51:04.736 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:51:04 localhost nova_compute[280228]: 2025-12-05 09:51:04.737 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:51:04 localhost nova_compute[280228]: 2025-12-05 09:51:04.737 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:51:04 localhost nova_compute[280228]: 2025-12-05 09:51:04.770 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:04 localhost nova_compute[280228]: 2025-12-05 09:51:04.790 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:51:05 localhost nova_compute[280228]: 2025-12-05 09:51:05.217 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:51:05 localhost nova_compute[280228]: 2025-12-05 09:51:05.224 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:51:05 localhost nova_compute[280228]: 2025-12-05 09:51:05.239 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 
1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:51:05 localhost nova_compute[280228]: 2025-12-05 09:51:05.259 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:51:05 localhost nova_compute[280228]: 2025-12-05 09:51:05.260 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:51:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:51:06 localhost podman[281693]: 2025-12-05 09:51:06.172793076 +0000 UTC m=+0.064734693 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true) Dec 5 04:51:06 localhost podman[281693]: 2025-12-05 09:51:06.212664415 +0000 UTC m=+0.104606002 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 04:51:06 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:51:08 localhost nova_compute[280228]: 2025-12-05 09:51:08.524 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:09 localhost nova_compute[280228]: 2025-12-05 09:51:09.795 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7585 DF PROTO=TCP SPT=41388 DPT=9102 SEQ=2943583921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5D17070000000001030307) Dec 5 04:51:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7586 DF PROTO=TCP SPT=41388 DPT=9102 SEQ=2943583921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5D1B060000000001030307) Dec 5 04:51:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51103 DF PROTO=TCP SPT=34774 DPT=9102 SEQ=1315191375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5D1E450000000001030307) Dec 5 04:51:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:51:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:51:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
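
[Annotation] The inventory dict that nova.scheduler.client.report logged at 09:51:05 above fixes what placement will actually schedule on this node: for each resource class the usable capacity is (total - reserved) * allocation_ratio. A minimal sketch with the values copied verbatim from that log entry; the helper name is made up for illustration:

    # Capacity math behind the logged inventory; dict values are taken
    # from the 09:51:05 entry above, schedulable() is a hypothetical helper.
    INVENTORY = {
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 41,    'reserved': 1,   'allocation_ratio': 1.0},
    }

    def schedulable(inv):
        """Capacity placement hands out: (total - reserved) * allocation_ratio."""
        return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
                for rc, v in inv.items()}

    # With the logged values: VCPU -> 128.0, MEMORY_MB -> 15226.0, DISK_GB -> 40.0
    print(schedulable(INVENTORY))

This is why a host with 8 physical vCPUs and 1 allocated still reports "free_vcpus=7" locally while placement can overcommit up to 128 VCPU allocations.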
Dec 5 04:51:13 localhost podman[281713]: 2025-12-05 09:51:13.312428301 +0000 UTC m=+0.191505344 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Dec 5 04:51:13 localhost podman[281713]: 2025-12-05 09:51:13.345728858 +0000 UTC m=+0.224805881 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:51:13 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:51:13 localhost podman[281712]: 2025-12-05 09:51:13.368960428 +0000 UTC m=+0.247578077 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 04:51:13 localhost podman[281712]: 2025-12-05 09:51:13.377647124 +0000 UTC m=+0.256264733 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:51:13 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
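
[Annotation] The disk totals in the resource view come from the "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" subprocess logged at 09:51:04, which nova runs through oslo_concurrency.processutils (the execute/CMD-returned pair above, 0.427s). A rough standalone equivalent, assuming the same client id and conf path exist on the host; the JSON keys below follow the standard ceph df schema but treat them as an assumption:

    import json
    import subprocess

    # Same command nova logged above; --format=json makes the output parseable.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)

    # Cluster-wide totals are in bytes under 'stats' in the usual
    # `ceph df` JSON schema; per-pool detail sits under 'pools'.
    total = stats["stats"]["total_bytes"]
    avail = stats["stats"]["total_avail_bytes"]
    print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")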
Dec 5 04:51:13 localhost podman[281714]: 2025-12-05 09:51:13.458628149 +0000 UTC m=+0.329568934 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125) Dec 5 04:51:13 localhost podman[281714]: 2025-12-05 09:51:13.470667396 +0000 UTC m=+0.341608201 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:51:13 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:51:13 localhost nova_compute[280228]: 2025-12-05 09:51:13.571 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7587 DF PROTO=TCP SPT=41388 DPT=9102 SEQ=2943583921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5D23050000000001030307) Dec 5 04:51:14 localhost nova_compute[280228]: 2025-12-05 09:51:14.842 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35998 DF PROTO=TCP SPT=41768 DPT=9102 SEQ=3325850458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5D26460000000001030307) Dec 5 04:51:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7588 DF PROTO=TCP SPT=41388 DPT=9102 SEQ=2943583921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5D32C50000000001030307) Dec 5 04:51:18 localhost nova_compute[280228]: 2025-12-05 09:51:18.617 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:19 localhost podman[239519]: time="2025-12-05T09:51:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:51:19 localhost podman[239519]: @ - - [05/Dec/2025:09:51:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147736 "" "Go-http-client/1.1" Dec 5 04:51:19 localhost nova_compute[280228]: 2025-12-05 09:51:19.882 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:19 localhost podman[239519]: @ - - [05/Dec/2025:09:51:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17231 "" "Go-http-client/1.1" Dec 5 04:51:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. 
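
[Annotation] Each "Started /usr/bin/podman healthcheck run <id>" / "container health_status ... health_status=healthy" / "exec_died" / "Deactivated successfully" group above is one healthcheck cycle: a transient systemd unit execs the container's configured test (here '/openstack/healthcheck', per the config_data) and the exit status becomes the health result. The same probe can be driven by hand; a sketch, with the container name taken from the log:

    import subprocess

    # `podman healthcheck run` executes the container's configured test
    # command and exits 0 when the check passes, non-zero otherwise.
    def is_healthy(container: str) -> bool:
        res = subprocess.run(["podman", "healthcheck", "run", container])
        return res.returncode == 0

    print("ovn_controller healthy:", is_healthy("ovn_controller"))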
Dec 5 04:51:20 localhost podman[281771]: 2025-12-05 09:51:20.177094348 +0000 UTC m=+0.065327087 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 04:51:20 localhost podman[281771]: 2025-12-05 09:51:20.218892805 +0000 UTC m=+0.107125535 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 04:51:20 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
Dec 5 04:51:21 localhost podman[281882]: 2025-12-05 09:51:21.839553589 +0000 UTC m=+0.117109211 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 04:51:21 localhost podman[281882]: 2025-12-05 09:51:21.850185404 +0000 UTC m=+0.127741076 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:51:21 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
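
[Annotation] The node_exporter container above publishes on host port 9100 ('ports': ['9100:9100'], host networking) with most collectors disabled and the systemd collector filtered to (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service. A quick stdlib scrape to see what actually remains exposed; endpoint taken from the logged config:

    from urllib.request import urlopen

    # The logged config maps 9100:9100 on the host network.
    with urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        text = resp.read().decode()

    # Given --collector.systemd plus the unit-include regex above, the
    # node_systemd_* series should cover only edpm_*/ovs*/virt*/rsyslog units.
    for line in text.splitlines():
        if line.startswith("node_systemd_unit_state"):
            print(line)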
Dec 5 04:51:23 localhost nova_compute[280228]: 2025-12-05 09:51:23.649 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:24 localhost nova_compute[280228]: 2025-12-05 09:51:24.919 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7589 DF PROTO=TCP SPT=41388 DPT=9102 SEQ=2943583921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5D52460000000001030307) Dec 5 04:51:27 localhost openstack_network_exporter[241668]: ERROR 09:51:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:51:27 localhost openstack_network_exporter[241668]: ERROR 09:51:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:51:27 localhost openstack_network_exporter[241668]: ERROR 09:51:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:51:27 localhost openstack_network_exporter[241668]: ERROR 09:51:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:51:27 localhost openstack_network_exporter[241668]: Dec 5 04:51:27 localhost openstack_network_exporter[241668]: ERROR 09:51:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:51:27 localhost openstack_network_exporter[241668]: Dec 5 04:51:28 localhost nova_compute[280228]: 2025-12-05 09:51:28.683 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:29 localhost nova_compute[280228]: 2025-12-05 09:51:29.959 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:33 localhost nova_compute[280228]: 2025-12-05 09:51:33.690 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:51:34 localhost podman[281961]: 2025-12-05 09:51:34.191945878 +0000 UTC m=+0.080523693 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, release=1755695350, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
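
[Annotation] The openstack_network_exporter ERRORs at 09:51:27 ("no control socket files found ...") mean its appctl wrapper looked for the daemons' *.ctl control sockets and found none for ovsdb-server/ovn-northd, which is expected on a compute node that runs ovn-controller but no ovn-northd. A sketch of the same existence check; the glob patterns are the conventional run-directory locations implied by the volume mounts above ('/var/run/openvswitch', '/var/lib/openvswitch/ovn' mounted as /run/ovn) and should be treated as assumptions:

    from glob import glob

    # Conventional <daemon>.<pid>.ctl control-socket locations; ovn-northd
    # simply is not running here, so its socket is absent (as logged).
    patterns = {
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "ovn-northd":   "/var/lib/openvswitch/ovn/ovn-northd.*.ctl",
    }
    for daemon, pat in patterns.items():
        hits = glob(pat)
        print(daemon, "->", hits if hits else "no control socket found")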
Dec 5 04:51:34 localhost podman[281961]: 2025-12-05 09:51:34.208831944 +0000 UTC m=+0.097409789 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6) Dec 5 04:51:34 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:51:35 localhost nova_compute[280228]: 2025-12-05 09:51:35.010 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
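
[Annotation] The recurring "[POLLIN] on fd 20 __log_wakeup .../ovs/poller.py:263" lines are python-ovs's Poller reporting, at DEBUG level, that the OVSDB connection's fd became readable; ovsdbapp sits blocked in poll() and this fires on every wakeup, so the lines are noise rather than a fault. The primitive itself is small; a self-contained sketch against the real ovs.poller API (python-ovs must be installed; the pipe stands in for the OVSDB socket):

    import os
    from ovs import poller  # python-ovs, the module the log cites

    r, w = os.pipe()
    p = poller.Poller()
    p.fd_wait(r, poller.POLLIN)   # wake when the fd is readable
    p.timer_wait(1000)            # ... or after 1000 ms
    os.write(w, b"x")             # make the fd readable
    p.block()                     # returns once POLLIN fires; with vlog at
                                  # DEBUG this is where __log_wakeup prints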
Dec 5 04:51:37 localhost podman[281981]: 2025-12-05 09:51:37.195904828 +0000 UTC m=+0.084612598 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:51:37 localhost podman[281981]: 2025-12-05 09:51:37.229648239 +0000 UTC m=+0.118355949 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:51:37 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:51:38 localhost nova_compute[280228]: 2025-12-05 09:51:38.723 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:40 localhost nova_compute[280228]: 2025-12-05 09:51:40.043 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55669 DF PROTO=TCP SPT=36354 DPT=9102 SEQ=4011532916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5D8C360000000001030307) Dec 5 04:51:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55670 DF PROTO=TCP SPT=36354 DPT=9102 SEQ=4011532916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5D90450000000001030307) Dec 5 04:51:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7590 DF PROTO=TCP SPT=41388 DPT=9102 SEQ=2943583921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5D92450000000001030307) Dec 5 04:51:43 localhost nova_compute[280228]: 2025-12-05 09:51:43.764 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:51:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:51:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:51:44 localhost systemd[1]: tmp-crun.1RdW0t.mount: Deactivated successfully. 
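
[Annotation] The kernel "DROPPING:" entries are netfilter LOG-rule output (the prefix is set by the rule) for SYN packets to port 9102 arriving on br-ex. Note that 09:51:41 and 09:51:42 carry the same SEQ=4011532916 with incrementing IP IDs: one connection attempt from 192.168.122.10 being retransmitted and dropped each time. The KEY=VALUE shape makes these lines easy to mine; a sketch over a trimmed copy of the 09:51:41 entry:

    import re

    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TTL=62 "
            "ID=55669 DF PROTO=TCP SPT=36354 DPT=9102 SEQ=4011532916 SYN")

    # netfilter LOG output is KEY=VALUE tokens; bare flags (DF, SYN)
    # carry no '='. OUT= legitimately has an empty value.
    body = line.split("DROPPING:", 1)[1]
    pairs = dict(re.findall(r"(\w+)=(\S*)", body))
    flags = [t for t in body.split() if "=" not in t]
    print(pairs["SRC"], "->", pairs["DST"], "dport", pairs["DPT"], flags)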
Dec 5 04:51:44 localhost podman[282000]: 2025-12-05 09:51:44.20276259 +0000 UTC m=+0.087480736 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:51:44 localhost podman[282000]: 2025-12-05 09:51:44.208537036 +0000 UTC m=+0.093255172 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 04:51:44 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
Dec 5 04:51:44 localhost podman[282001]: 2025-12-05 09:51:44.246377782 +0000 UTC m=+0.125588739 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 5 04:51:44 localhost podman[282001]: 2025-12-05 09:51:44.256496562 +0000 UTC m=+0.135707519 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:51:44 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:51:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55671 DF PROTO=TCP SPT=36354 DPT=9102 SEQ=4011532916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5D98450000000001030307) Dec 5 04:51:44 localhost podman[282007]: 2025-12-05 09:51:44.310318677 +0000 UTC m=+0.177522387 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:51:44 localhost podman[282007]: 2025-12-05 09:51:44.316840406 +0000 UTC m=+0.184044126 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:51:44 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:51:45 localhost nova_compute[280228]: 2025-12-05 09:51:45.079 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51104 DF PROTO=TCP SPT=34774 DPT=9102 SEQ=1315191375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5D9C460000000001030307) Dec 5 04:51:47 localhost sshd[282060]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:51:47 localhost systemd-logind[760]: New session 61 of user zuul. Dec 5 04:51:47 localhost systemd[1]: Started Session 61 of User zuul. 
Dec 5 04:51:47 localhost python3[282082]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 04:51:48 localhost subscription-manager[282083]: Unregistered machine with identity: dce74b25-fc83-49c9-a74a-3da4f3fcff46 Dec 5 04:51:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55672 DF PROTO=TCP SPT=36354 DPT=9102 SEQ=4011532916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5DA8050000000001030307) Dec 5 04:51:48 localhost nova_compute[280228]: 2025-12-05 09:51:48.767 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:49 localhost podman[239519]: time="2025-12-05T09:51:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:51:49 localhost podman[239519]: @ - - [05/Dec/2025:09:51:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147736 "" "Go-http-client/1.1" Dec 5 04:51:49 localhost podman[239519]: @ - - [05/Dec/2025:09:51:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17230 "" "Go-http-client/1.1" Dec 5 04:51:50 localhost nova_compute[280228]: 2025-12-05 09:51:50.115 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:51:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:51:51 localhost systemd[1]: tmp-crun.V5s87O.mount: Deactivated successfully. 
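
[Annotation] The '@ - - [05/Dec/2025:...] "GET /v4.9.3/libpod/containers/json?..."' access-log lines come from the podman system service behind /run/podman/podman.sock, the CONTAINER_HOST handed to the podman_exporter container above; the 09:51:19 and 09:51:49 hits show it polling every 30 s. The same endpoint is reachable with stdlib HTTP over the unix socket; a sketch, with the socket path from the logged config and the API version from the logged request line (the connection class is a local helper, not a podman API):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.socket_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")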
Dec 5 04:51:51 localhost podman[282085]: 2025-12-05 09:51:51.220444891 +0000 UTC m=+0.104763983 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller) Dec 5 04:51:51 localhost podman[282085]: 2025-12-05 09:51:51.252061217 +0000 UTC m=+0.136380339 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller) Dec 5 04:51:51 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
Dec 5 04:51:52 localhost podman[282110]: 2025-12-05 09:51:52.18727075 +0000 UTC m=+0.075341934 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 04:51:52 localhost podman[282110]: 2025-12-05 09:51:52.222655711 +0000 UTC m=+0.110726895 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 04:51:52 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
Dec 5 04:51:53 localhost nova_compute[280228]: 2025-12-05 09:51:53.769 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:51:55 localhost nova_compute[280228]: 2025-12-05 09:51:55.138 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:51:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55673 DF PROTO=TCP SPT=36354 DPT=9102 SEQ=4011532916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5DC8450000000001030307)
Dec 5 04:51:57 localhost openstack_network_exporter[241668]: ERROR 09:51:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:51:57 localhost openstack_network_exporter[241668]: ERROR 09:51:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:51:57 localhost openstack_network_exporter[241668]: ERROR 09:51:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 04:51:57 localhost openstack_network_exporter[241668]: ERROR 09:51:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 04:51:57 localhost openstack_network_exporter[241668]:
Dec 5 04:51:57 localhost openstack_network_exporter[241668]: ERROR 09:51:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 04:51:57 localhost openstack_network_exporter[241668]:
Dec 5 04:51:58 localhost nova_compute[280228]: 2025-12-05 09:51:58.773 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:52:00 localhost nova_compute[280228]: 2025-12-05 09:52:00.185 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:52:03 localhost nova_compute[280228]: 2025-12-05 09:52:03.775 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:52:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:52:03.899 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:52:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:52:03.900 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:52:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:52:03.901 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:52:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
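Note: the kernel DROPPING: entries have the shape of netfilter LOG output from a drop rule on br-ex, here rejecting TCP SYNs from 192.168.122.10 to port 9102. A minimal parser for this line format, with the OPT payload omitted for brevity; the field names are taken verbatim from the entry above:

    import re

    LINE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b "
            "MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 "
            "DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55673 DF "
            "PROTO=TCP SPT=36354 DPT=9102 SEQ=4011532916 ACK=0 WINDOW=32640 "
            "RES=0x00 SYN URGP=0")

    def parse_drop(line: str) -> dict:
        """Split a netfilter LOG line into its KEY=VALUE fields.
        Bare flags such as DF and SYN carry no '=' and are collected separately."""
        fields = dict(re.findall(r"(\w+)=(\S*)", line))
        flags = [tok for tok in line.split() if "=" not in tok and tok != "DROPPING:"]
        return {"fields": fields, "flags": flags}

    rec = parse_drop(LINE)
    print(rec["fields"]["SRC"], "->", rec["fields"]["DST"], "dport", rec["fields"]["DPT"])
    print("flags:", rec["flags"])   # -> ['DF', 'SYN']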
Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.229 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:05 localhost podman[282134]: 2025-12-05 09:52:05.234738332 +0000 UTC m=+0.125037682 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, version=9.6, name=ubi9-minimal, release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, distribution-scope=public) Dec 5 04:52:05 localhost podman[282134]: 2025-12-05 09:52:05.246558924 +0000 UTC m=+0.136858244 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64) Dec 5 04:52:05 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
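Note: throughout this log, non-printable bytes are rendered as journald-style #NNN octal escapes: #012 is a newline (visible in the ceph-osd rocksdb dumps below) and #033[00m is the ANSI reset sequence trailing the nova and neutron lines. A small decoder, followed by a strip of the colour sequences:

    import re

    def unescape_journal(text: str) -> str:
        """Replace journald-style #NNN octal escapes (#012 newline, #033 ESC)
        with the bytes they stand for."""
        return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), text)

    sample = "------- DUMPING STATS -------#012** DB Stats **#012Uptime(secs): 7200.1"
    print(unescape_journal(sample))      # prints the stats dump on three lines

    # Stripping ANSI colour sequences such as #033[00m after unescaping:
    ansi = re.compile(r"\x1b\[[0-9;]*m")
    print(ansi.sub("", unescape_journal("poller.py:263#033[00m")))  # -> poller.py:263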
Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.262 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.262 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.262 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.263 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 5 04:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 04:52:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 4786 writes, 21K keys, 4786 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4786 writes, 590 syncs, 8.11 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 35 writes, 93 keys, 35 commit groups, 1.0 writes per commit group, ingest: 0.20 MB, 0.00 MB/s#012Interval WAL: 35 writes, 17 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 04:52:05 localhost systemd-journald[47252]: Field hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Dec 5 04:52:05 localhost systemd-journald[47252]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 5 04:52:05 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 5 04:52:05 localhost rsyslogd[758]: imjournal: journal files changed, reloading...
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.517 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.517 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.518 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.518 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.906 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.926 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.927 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:52:05 localhost 
nova_compute[280228]: 2025-12-05 09:52:05.928 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.928 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.929 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.929 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.929 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.930 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.930 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.931 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.952 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.953 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.953 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.953 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:52:05 localhost nova_compute[280228]: 2025-12-05 09:52:05.954 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:52:06 localhost nova_compute[280228]: 2025-12-05 09:52:06.489 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:52:06 localhost nova_compute[280228]: 2025-12-05 09:52:06.547 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:52:06 localhost nova_compute[280228]: 2025-12-05 09:52:06.548 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:52:06 localhost nova_compute[280228]: 2025-12-05 09:52:06.740 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:52:06 localhost nova_compute[280228]: 2025-12-05 09:52:06.742 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12305MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:52:06 localhost nova_compute[280228]: 2025-12-05 09:52:06.742 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:52:06 localhost nova_compute[280228]: 2025-12-05 09:52:06.742 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:52:06 localhost nova_compute[280228]: 2025-12-05 09:52:06.797 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:52:06 localhost nova_compute[280228]: 2025-12-05 09:52:06.797 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:52:06 localhost nova_compute[280228]: 2025-12-05 09:52:06.798 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:52:06 localhost nova_compute[280228]: 2025-12-05 09:52:06.829 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:52:07 localhost nova_compute[280228]: 2025-12-05 09:52:07.333 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:52:07 localhost nova_compute[280228]: 2025-12-05 09:52:07.338 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:52:07 localhost nova_compute[280228]: 2025-12-05 09:52:07.355 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:52:07 localhost nova_compute[280228]: 2025-12-05 09:52:07.356 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:52:07 localhost nova_compute[280228]: 2025-12-05 09:52:07.356 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:52:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
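Note: in the inventory reported above for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, the capacity placement schedules against is (total - reserved) * allocation_ratio per resource class, which is why 8 physical vCPUs with allocation_ratio 16.0 can back 128 VCPU allocations even though only 7 are physically free. Worked out with the logged values:

    # Inventory as reported for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 above.
    inventory = {
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 41,    'reserved': 1,   'allocation_ratio': 1.0},
    }

    def schedulable(inv: dict) -> dict:
        """Effective capacity placement allocates against:
        (total - reserved) * allocation_ratio."""
        return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
                for rc, v in inv.items()}

    print(schedulable(inventory))
    # -> {'VCPU': 128.0, 'MEMORY_MB': 15226.0, 'DISK_GB': 40.0}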
Dec 5 04:52:08 localhost podman[282198]: 2025-12-05 09:52:08.16922447 +0000 UTC m=+0.057215540 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 5 04:52:08 localhost podman[282198]: 2025-12-05 09:52:08.209672806 +0000 UTC m=+0.097663816 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 5 04:52:08 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:52:08 localhost nova_compute[280228]: 2025-12-05 09:52:08.778 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 04:52:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.2 total, 600.0 interval#012Cumulative writes: 5944 writes, 25K keys, 5944 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5944 writes, 870 syncs, 6.83 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 101 writes, 339 keys, 101 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s#012Interval WAL: 101 writes, 38 syncs, 2.66 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 5 04:52:10 localhost nova_compute[280228]: 2025-12-05 09:52:10.271 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62407 DF PROTO=TCP SPT=53780 DPT=9102 SEQ=911450846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5E01670000000001030307) Dec 5 04:52:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62408 DF PROTO=TCP SPT=53780 DPT=9102 SEQ=911450846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5E05850000000001030307) Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.947 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.948 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 
04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.948 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.977 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55674 DF PROTO=TCP SPT=36354 DPT=9102 SEQ=4011532916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5E08450000000001030307) Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.978 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a7d29885-a638-4c47-b8f2-ad8b7e4b6848', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:52:12.948932', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '12eaac2c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.123235984, 'message_signature': 'dae80eeefef1256bdd73e604fc0e8ad8cc099dd7853f3690a32df396eecaa3a0'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:52:12.948932', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 
512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '12eac1ee-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.123235984, 'message_signature': '40711c17f9a7d1b5cc7042d2d5f1825ca49d9fb14549cc6ac6bd48cd28b73484'}]}, 'timestamp': '2025-12-05 09:52:12.978884', '_unique_id': 'f297fef937574598b41590a9fa13189d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.980 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.981 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.981 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.982 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.986 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fd23b1f1-8ca2-404e-8cff-ed9eb4d2c31c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:52:12.982150', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '12ebf03c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.15646289, 'message_signature': 'e403c751a90620d5fe088846d621fc8d75fd0778c910fbf15ddf6df3ef366c64'}]}, 'timestamp': '2025-12-05 09:52:12.986618', '_unique_id': 'ca584fc69e104f24b362528dda1ab981'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.987 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:52:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:12.988 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.000 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.001 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
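The two stacked tracebacks above are ordinary Python exception chaining: kombu catches the low-level ConnectionRefusedError and re-raises it as its own OperationalError via raise ConnectionError(str(exc)) from exc (visible at kombu/connection.py line 450 in the frames above), which is what prints the "The above exception was the direct cause of the following exception" separator. A minimal sketch reproducing the same two-part traceback shape; BrokerUnreachable is a made-up stand-in, not kombu's class:

    # Sketch of the `raise ... from exc` chaining seen in the log above.
    import socket

    class BrokerUnreachable(Exception):
        pass

    def connect(host, port):
        try:
            # Refused immediately if nothing listens on the port, which
            # surfaces as ConnectionRefusedError (errno 111), as in the log.
            socket.create_connection((host, port), timeout=2)
        except OSError as exc:
            # `from exc` records the original error as __cause__, so the
            # interpreter prints both tracebacks joined by "The above
            # exception was the direct cause of the following exception:".
            raise BrokerUnreachable(str(exc)) from exc

    connect("127.0.0.1", 5672)  # expected to raise when no AMQP broker runs

The Payload record that follows is the batch of samples that was dropped as a result.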
Payload={'message_id': '2bc65b02-2075-48ff-b509-b2fb968f1a5f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:52:12.988910', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '12ee247e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.163210986, 'message_signature': 'f899326789eca8486fef1d3c6f51abbe632fbfd98912006b85a2463671460d10'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:52:12.988910', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '12ee368a-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.163210986, 'message_signature': '1fc421a4efb99e02fa53de2a5cab86ed269118c1712e583ca176bb5308394dc5'}]}, 'timestamp': '2025-12-05 09:52:13.001490', '_unique_id': '177893a4021c427e8c344280a42ebf77'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.002 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.003 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.003 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.004 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
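Every polling cycle fails identically: the TCP connect is refused before any AMQP handshake starts, so this is a listener/broker availability problem rather than an authentication or credential one. A quick reachability probe, assuming the default AMQP port 5672 and a placeholder hostname (the configured transport URL is not shown in this log):

    # Hedged probe: checks only TCP reachability of an assumed broker
    # endpoint; host is a placeholder, not taken from the log.
    import socket

    def broker_reachable(host="controller.example", port=5672, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except ConnectionRefusedError:
            return False  # errno 111: nothing listening, matching the log

    print(broker_reachable())

The disk.device.write.requests payload that follows is dropped in exactly the same way.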
Payload={'message_id': 'dd2b246f-ffaa-4e3e-ba2f-409c741e2ed2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:52:13.003807', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '12eea28c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.123235984, 'message_signature': '032578347c3c65780635a662ad82fb4d1259c1911fa1ef251c7ba9622c702932'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:52:13.003807', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '12eeb42a-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.123235984, 'message_signature': 'e085b719a06feda9ea36d59909fce832a8bb3960940f4734e90a1117fa4ae90a'}]}, 'timestamp': '2025-12-05 09:52:13.004703', '_unique_id': 'bd80c8312ea24514971a0302cbbf7ca5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.005 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.006 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.007 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
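The Payload=... bodies in these records, such as the network.incoming.bytes one that follows, are Python dict reprs (single quotes, None) rather than JSON, so json.loads will reject them; ast.literal_eval handles them once the dict has been carved out of the log line. A sketch, with the literal abbreviated from the surrounding records:

    # Summarize one dropped notification payload. The literal below is
    # abbreviated from the log above; ast.literal_eval is used because
    # the payload is a Python repr, not JSON.
    import ast

    raw = ("{'event_type': 'telemetry.polling', 'priority': 'SAMPLE', "
           "'payload': {'samples': [{'counter_name': 'network.incoming.bytes', "
           "'counter_type': 'cumulative', 'counter_unit': 'B', "
           "'counter_volume': 6809, "
           "'resource_id': 'instance-00000002-...-tapc2f95d81-23'}]}}")

    msg = ast.literal_eval(raw)
    for s in msg["payload"]["samples"]:
        print(s["counter_name"], s["counter_volume"], s["counter_unit"])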
Payload={'message_id': 'fcac4248-cca9-4170-90d2-4c0717222fbc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:52:13.007065', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '12ef23e2-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.15646289, 'message_signature': '6d57b1d395673212644f1448ce451f6b15f3b4c9ddc2d9f638601db46040edb9'}]}, 'timestamp': '2025-12-05 09:52:13.007594', '_unique_id': '6dbbfa83d0424b5191d8bb8c73210c81'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.008 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.009 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.010 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.010 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
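disk.device.read.latency, like several counters here, is published as a cumulative value (counter_type 'cumulative', counter_unit 'ns' in the payload that follows), so a usable figure requires differencing two polls over elapsed time. A sketch; the first sample's numbers come from the records above, while the second poll's values are invented purely for illustration:

    # Derive a per-second rate from two cumulative samples.
    def rate(v1, t1, v2, t2):
        return (v2 - v1) / (t2 - t1)  # counter units per second

    # counter_volume and monotonic_time from consecutive polls
    # (second poll hypothetical):
    ns_per_s = rate(1657873269, 11448.12, 1659000000, 11748.12)
    print(f"vda read latency accrual: {ns_per_s:.0f} ns/s")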
Payload={'message_id': '51d27abc-eee5-42f2-afd3-0e9de9c343a0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:52:13.009972', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '12ef9732-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.123235984, 'message_signature': '472430795ca7e3d59bb5f58ba9c74f8e7725ea1ff6bd716859aad83961a24f37'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:52:13.009972', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '12efa81c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.123235984, 'message_signature': 'fada2a7566fbbc587c0b2a227f9dad75290bd9de7794e15d454a5fb75c2a099a'}]}, 'timestamp': '2025-12-05 09:52:13.010949', '_unique_id': 'e464580df8ea4b91a8c799231f10f12c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.011 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.013 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.013 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.013 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
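The repeating frames (kombu/connection.py line 433 into kombu/utils/functional.py retry_over_time) show that ensure_connection delegates to a generic timed retry helper; when retries are exhausted or not configured, the last error is re-raised and oslo.messaging logs the dropped notification seen here. A simplified sketch of that retry-over-time idea, not kombu's actual implementation:

    # Simplified retry-over-time loop (illustrative only; kombu's real
    # retry_over_time also supports callbacks, interval stepping and
    # max_retries).
    import time

    def retry_over_time(fun, catch, interval=1.0, timeout=5.0):
        deadline = time.monotonic() + timeout
        while True:
            try:
                return fun()
            except catch:
                if time.monotonic() >= deadline:
                    raise              # exhausted: caller sees the last error
                time.sleep(interval)   # back off, then try again

    # usage sketch: retry_over_time(lambda: broker_connect(), ConnectionRefusedError)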
Payload={'message_id': 'cabeccb4-eebd-44d6-877d-cc8d4abb2370', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:52:13.013353', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '12f0186a-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.123235984, 'message_signature': '0cfa3c6dea5bf24a6f125abaa3028974ba0dbd58f647f772ba3582e68c469a16'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:52:13.013353', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '12f029fe-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.123235984, 'message_signature': '4779bca082ef102d2b61d36c47a112139349783f4e24aa3d99bb51ae59e5fd2e'}]}, 'timestamp': '2025-12-05 09:52:13.014348', '_unique_id': '8cf83fdbdae34711ada7bf1182e1b414'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.015 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.016 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.016 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c76fa6c7-9c3c-4fd9-aac0-06ef020338c0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:52:13.016667', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '12f0998e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.15646289, 'message_signature': '2f96ff78ff3dd87779b6dd663e9c17c4dcb56c3283eb7211525a4b781ab59298'}]}, 'timestamp': '2025-12-05 09:52:13.017158', '_unique_id': 'f3419e34dce242b3b76db7f7cda69625'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.018 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.019 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.019 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ce016759-efd6-4423-b98a-333e9413d7e3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:52:13.019420', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '12f1040a-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.15646289, 'message_signature': '7c2d444348a11bdaf9ad9993eb24ca87cd8490c1160cd9a613e1bf8eb2bd7755'}]}, 'timestamp': '2025-12-05 09:52:13.019880', '_unique_id': '5011c0d34e2745f293efe6a5e91dd562'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.020 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.021 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.022 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.022 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a4f2d6cb-0bc8-4441-86cb-4d77878e9471', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:52:13.021999', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '12f16a9e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.163210986, 'message_signature': '2aeebfbb04a381780812723884acb10d8ac16e63d52cbba6984d3aed82def28e'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:52:13.021999', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '12f17b10-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.163210986, 'message_signature': '62f0a28dd044f2b7a45f07ce8ad3a4abbfeb571a88b879a0738ab517d2b78337'}]}, 'timestamp': '2025-12-05 09:52:13.022901', '_unique_id': '9666ce5bef8f4b349f0faf6cc846ab98'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.023 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.024 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.025 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.025 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3a7e05bf-9e9d-415e-9faa-cb7a18b0621f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:52:13.025071', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '12f1e172-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.123235984, 'message_signature': 'dd8b767885ff850f70d82903667f401ebb73f669a37afcf4173ac2b8cd23acf6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:52:13.025071', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '12f1f194-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.123235984, 'message_signature': 'a9a159eff5725a35e9a3001732ee5605e90ac76f07effd98aade636469b84dbe'}]}, 'timestamp': '2025-12-05 09:52:13.025930', '_unique_id': 'e6ac7f68ca2f4a1d8b9efe2b7842b28b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.026 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.028 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.028 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
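Every traceback in this stretch bottoms out in the same frame: amqp/transport.py calling self.sock.connect(sa) and receiving ECONNREFUSED, i.e. nothing is accepting TCP connections on the broker endpoint the agent is configured for. A minimal probe of that condition, as a sketch that assumes the default AMQP port 5672 on localhost (the agent's real transport_url is not visible in this excerpt):

import errno
import socket

# Assumed endpoint: host and port are placeholders, not values taken from
# this log; ceilometer resolves them from its configured transport_url.
HOST, PORT = "localhost", 5672

try:
    # The same operation the traceback ends in: a plain TCP connect.
    with socket.create_connection((HOST, PORT), timeout=5):
        print("broker port is accepting connections")
except OSError as exc:
    if exc.errno == errno.ECONNREFUSED:
        print(f"[Errno 111] Connection refused: no listener on {HOST}:{PORT}")
    else:
        raise

If the probe prints the refusal, the broker process (or its listener on that port) is down or unreachable, which is consistent with every error in this section; the dropped notification's payload follows.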
Payload={'message_id': '9b7a50a4-ec66-499f-b603-2ab87a3edb5a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:52:13.028149', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '12f25a3a-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.15646289, 'message_signature': '685efccd4e6ee0a25eba6afc2dc1e6d7a2614ef08b5cceb2c547a6cc3d9008c4'}]}, 'timestamp': '2025-12-05 09:52:13.028639', '_unique_id': '52eef167bea7444785e32e6abe3fdabb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.029 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.030 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.030 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
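The first half of each traceback pair is kombu's connection machinery: ensure_connection delegates to retry_over_time, and the _reraise_as_library_errors context manager converts the raw ConnectionRefusedError into kombu.exceptions.OperationalError, which is the type oslo.messaging ultimately logs. The same conversion can be observed in isolation; the AMQP URL below is a placeholder, since the real transport_url does not appear in this excerpt:

from kombu import Connection
from kombu.exceptions import OperationalError

# Placeholder URL; substitute the transport_url the agent actually uses.
URL = "amqp://guest:guest@localhost:5672//"

try:
    with Connection(URL, connect_timeout=5) as conn:
        # ensure_connection() is the entry point seen in the traceback
        # (impl_rabbit.py -> kombu/connection.py line 381).
        conn.ensure_connection(max_retries=1)
        print("connected to broker")
except OperationalError as exc:
    # The socket-level ConnectionRefusedError arrives wrapped, exactly as
    # in the log: "[Errno 111] Connection refused".
    print("broker unreachable:", exc)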
Payload={'message_id': 'fc54a5b5-d62a-47bb-bded-04ea44533a74', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:52:13.030830', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '12f2c18c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.15646289, 'message_signature': '7d862ef1c5eb78b1b2c23192b172f3aa0f7ab2ba5905fce1a01d8ccac3c9065e'}]}, 'timestamp': '2025-12-05 09:52:13.031317', '_unique_id': '0aab97375d1146619866f88a3f64f5a0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.032 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.033 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.033 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
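Each dropped notification embeds the full telemetry batch, so the samples are recoverable from the log text itself: every Payload dict carries payload['samples'], and each sample has counter_name, counter_type, counter_unit, counter_volume, resource_id, and a timestamp. A short sketch for unpacking one, using values copied from the network.outgoing.bytes batch that follows (the dict literal is trimmed to the fields actually used):

# 'payload' stands for one Payload dict parsed out of this log; only the
# fields the loop reads are reproduced here.
payload = {
    "payload": {
        "samples": [
            {
                "counter_name": "network.outgoing.bytes",
                "counter_type": "cumulative",
                "counter_unit": "B",
                "counter_volume": 9770,
                "resource_id": "instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23",
                "timestamp": "2025-12-05T09:52:13.033428",
            },
        ],
    },
}

for sample in payload["payload"]["samples"]:
    print(
        f"{sample['timestamp']} {sample['counter_name']} "
        f"{sample['counter_volume']} {sample['counter_unit']} "
        f"({sample['counter_type']}) on {sample['resource_id']}"
    )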
Payload={'message_id': '74197d67-e24e-4513-9104-b31d87fa8b99', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:52:13.033428', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '12f3271c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.15646289, 'message_signature': 'c1fb45c8009afcb165862e4ff2c67b56f51f5cb1550bc20deb118f97f8fd196a'}]}, 'timestamp': '2025-12-05 09:52:13.033885', '_unique_id': '4bbbe1a2fdc64be4b7f358b039b9bac1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.034 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.036 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.036 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.036 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
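The second half of each traceback pair is the oslo.messaging side: the notifier asks the transport for a pooled rabbit connection, and the pool's create() fails while building one. For reference, a minimal standalone notifier that exercises the same send_notification path; the transport URL is an assumption, while the publisher_id, driver, and topic mirror what the log shows ('ceilometer.polling', the 'messaging' notify driver, and the 'notifications' target it says it could not reach):

from oslo_config import cfg
import oslo_messaging

# Assumed broker URL; the real one comes from ceilometer's configuration.
TRANSPORT_URL = "rabbit://guest:guest@localhost:5672/"

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url=TRANSPORT_URL)
notifier = oslo_messaging.Notifier(
    transport,
    publisher_id="ceilometer.polling",
    driver="messaging",        # the notify driver whose frames appear above
    topics=["notifications"],  # the target named in the error messages
)

# sample() emits at the SAMPLE priority seen in the logged payloads.
notifier.sample({}, "telemetry.polling", {"samples": []})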
Payload={'message_id': '7aef0373-bf0d-49c0-989b-71bf3808990b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:52:13.036464', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '12f39d8c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.123235984, 'message_signature': 'b932a2031e83724d6b0c03df00a3df210bd1dddfcb051b1945a4f4e2140f6918'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:52:13.036464', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '12f3adea-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.123235984, 'message_signature': '6c4cec01da26a31ef7d4a88d5ee7aa5948d02585bbe89c33c0126d2a5ff20a5c'}]}, 'timestamp': '2025-12-05 09:52:13.037345', '_unique_id': '1bfbb9d952f046419834dc2f90707a0b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.039 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.039 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.039 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
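retry_over_time, which appears in every one of these tracebacks, is kombu's generic retry loop: call the function, sleep according to an interval schedule, and after max_retries let the final exception propagate to _reraise_as_library_errors. A self-contained approximation of that behaviour (illustrative only, not kombu's implementation; parameter names and defaults here are assumptions):

import time

def retry_over_time(fun, catch, max_retries=3,
                    interval_start=1.0, interval_step=1.0, interval_max=5.0):
    """Illustrative stand-in for kombu.utils.functional.retry_over_time."""
    interval = interval_start
    for retries in range(max_retries + 1):
        try:
            return fun()
        except catch:
            if retries == max_retries:
                raise  # give up; the caller sees the final error
            time.sleep(interval)
            interval = min(interval + interval_step, interval_max)

# A callable that always fails, like the connect() calls in this log.
def always_refused():
    raise ConnectionRefusedError(111, "Connection refused")

try:
    retry_over_time(always_refused, ConnectionRefusedError, max_retries=2)
except ConnectionRefusedError as exc:
    print("gave up after retries:", exc)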
Payload={'message_id': '0e6a2d70-8292-4769-9fa2-b7cce5b28ba3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:52:13.039097', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '12f4011e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.163210986, 'message_signature': '8f93ffeac941ff44d6c7325641a8657e95bbfe8d54c02f46587446abedd777e7'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:52:13.039097', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '12f40c2c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.163210986, 'message_signature': '6b1c2e8bbe8f8f03cb1bb502b2d649ba4d79983fcbc41b3855ef4b8fc036063c'}]}, 'timestamp': '2025-12-05 09:52:13.039645', '_unique_id': 'e491f9da457a4818a424cd922fa2546f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.040 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'abcdb0dd-7d94-427d-b023-20d0ed55bb59', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:52:13.041019', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '12f44c64-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.15646289, 'message_signature': '522a30f12a899d656e1a36e3c3f79d8592613c83a6c2761271b0cc2259ae1532'}]}, 'timestamp': '2025-12-05 09:52:13.041330', '_unique_id': '64bc7a24627b42f0a0fa95ad97ae3357'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.041 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.042 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.042 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8d326033-fbbb-404b-9706-54e32442d639', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:52:13.042689', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '12f48d8c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.15646289, 'message_signature': 'b7032ca278ecb3a1f9b293cce43bb4286eb585a0f3b0c63f773754ea3753e4bb'}]}, 'timestamp': '2025-12-05 09:52:13.042976', '_unique_id': '34a6809d84594c85abd19210179c7a4b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.043 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.044 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.044 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '91f991a1-262a-4a69-aeeb-3f446bd0f4a5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:52:13.044301', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '12f4cc98-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.15646289, 'message_signature': 'd5082bb5ab255d2fa918e2bc1f46337d7493be3ca5c60defa182df3522268ea1'}]}, 'timestamp': '2025-12-05 09:52:13.044589', '_unique_id': 'b2424ad2fc584e58914c003b1cdbbf78'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.045 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.067 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd472bbbb-e8c2-4f0a-b3c7-d7b35450c3df', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:52:13.045881', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '12f85a52-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.241608432, 'message_signature': '43eef1d0a364f57a23c9ae3102460025c55b5eae029b74de7e0caafb85a361d4'}]}, 'timestamp': '2025-12-05 09:52:13.067970', '_unique_id': 'df9d78566c534c75a4df6d32ace1ea3f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:52:13.069 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 
04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.069 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.070 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.070 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 11140000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7ff4b444-2999-4caf-8c6d-f2f34944ecfb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 11140000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:52:13.070704', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '12f8d7fc-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11448.241608432, 'message_signature': '586df8f33f533ff45c3d3ca365dac810d1d3aa6e7a5a8e4c81630edfc20de75a'}]}, 'timestamp': '2025-12-05 09:52:13.071171', '_unique_id': '0f8c6021f676428c95cb1315e8198cfc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:52:13.072 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:52:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:52:13.072 12 ERROR oslo_messaging.notify.messaging Dec 5 04:52:13 localhost nova_compute[280228]: 2025-12-05 09:52:13.781 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:52:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. 
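[note] Both oslo.messaging tracebacks above bottom out in the same place: amqp's transport.connect() gets ECONNREFUSED from the RabbitMQ endpoint, and kombu's _reraise_as_library_errors context manager rewraps it as kombu.exceptions.OperationalError, which is what oslo.messaging finally reports when it drops the notification. A minimal sketch reproducing that failure path against an unreachable broker (the URL and port here are assumptions, not taken from this host's transport_url):

    # Reproduces the chain logged above:
    # ensure_connection() -> retry_over_time() -> _establish_connection()
    # -> sock.connect() raising ECONNREFUSED -> rewrapped as OperationalError.
    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Hypothetical broker URL; nothing is listening on this port.
    url = "amqp://guest:guest@127.0.0.1:5672//"

    try:
        with Connection(url, connect_timeout=2) as conn:
            # max_retries=1 keeps the retry_over_time() loop short.
            conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        # kombu re-raises the underlying ConnectionRefusedError as a
        # library-level OperationalError, exactly as in the log.
        print("broker unreachable:", exc)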
Dec 5 04:52:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62409 DF PROTO=TCP SPT=53780 DPT=9102 SEQ=911450846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5E0D850000000001030307) Dec 5 04:52:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:52:14 localhost systemd[1]: tmp-crun.JcCd4v.mount: Deactivated successfully. Dec 5 04:52:14 localhost podman[282218]: 2025-12-05 09:52:14.410567721 +0000 UTC m=+0.093434387 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:52:14 localhost podman[282218]: 2025-12-05 09:52:14.490614047 +0000 UTC m=+0.173480913 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 5 04:52:14 localhost podman[282217]: 2025-12-05 09:52:14.504739498 +0000 UTC m=+0.189651097 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 04:52:14 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
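[note] The cpu sample in the failed payload above is cumulative guest CPU time in nanoseconds ('counter_type': 'cumulative', 'counter_unit': 'ns', counter_volume 11140000000 ≈ 11.14 s of CPU time), so a utilisation figure only falls out of the delta between two polls. A sketch of that arithmetic, assuming two consecutive samples and vcpus=1 from the flavor in resource_metadata (the sample values below are illustrative, not from this log):

    # CPU utilisation from two cumulative 'cpu' samples (counter_unit: ns).
    def cpu_util_percent(ns_prev, ns_curr, seconds_between_polls, vcpus=1):
        """Average guest CPU utilisation between two polls, in percent."""
        delta_ns = ns_curr - ns_prev
        wall_ns = seconds_between_polls * 1e9 * vcpus
        return 100.0 * delta_ns / wall_ns

    # e.g. +3 s of CPU time accumulated over a 300 s polling interval:
    print(cpu_util_percent(11_140_000_000, 14_140_000_000, 300))  # -> 1.0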
Dec 5 04:52:14 localhost podman[282240]: 2025-12-05 09:52:14.473115152 +0000 UTC m=+0.111069175 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Dec 5 04:52:14 localhost podman[282217]: 2025-12-05 09:52:14.539299145 +0000 UTC m=+0.224210723 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:52:14 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
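[note] The kernel "DROPPING:" lines are netfilter log output for TCP SYNs from 192.168.122.10 to port 9102 on br-ex that match no accept rule. The rule shape that emits a prefixed log message and then drops is roughly the following; the table and chain names follow the EDPM_INPUT convention visible in the ansible blockinfile further down, but the exact rule on this host is an assumption, not shown in this capture:

    # Hypothetical shape of the rule producing the "DROPPING: " messages.
    add rule inet filter EDPM_INPUT log prefix "DROPPING: " drop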
Dec 5 04:52:14 localhost podman[282240]: 2025-12-05 09:52:14.555995156 +0000 UTC m=+0.193949139 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true) Dec 5 04:52:14 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:52:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7591 DF PROTO=TCP SPT=41388 DPT=9102 SEQ=2943583921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5E10460000000001030307) Dec 5 04:52:15 localhost nova_compute[280228]: 2025-12-05 09:52:15.274 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:63:a4:4b MACDST=fa:16:3e:c7:7c:3e MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.106 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62410 DF PROTO=TCP SPT=53780 DPT=9102 SEQ=911450846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEC5E1D450000000001030307) Dec 5 04:52:18 localhost nova_compute[280228]: 2025-12-05 09:52:18.784 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:19 localhost systemd[1]: virtsecretd.service: Deactivated successfully. 
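[note] The OPT (...) field on those DROPPING lines is the raw TCP options block of the dropped SYN. Decoded, 020405500402080AEC5E0D850000000001030307 is MSS 1360 (02 04 05 50), SACK-permitted (04 02), a timestamp option with TSval 0xEC5E0D85 and TSecr 0 (08 0A ...), a NOP (01), and window scale 7 (03 03 07). A small parser for that hex string:

    # Decode the OPT(...) hex field from the kernel DROPPING lines above.
    OPTION_NAMES = {0: "EOL", 1: "NOP", 2: "MSS", 3: "WScale",
                    4: "SACK-OK", 8: "Timestamps"}

    def decode_tcp_options(hexstr):
        data = bytes.fromhex(hexstr)
        i, out = 0, []
        while i < len(data):
            kind = data[i]
            if kind in (0, 1):            # EOL / NOP are single-byte options
                out.append(OPTION_NAMES[kind])
                i += 1
                continue
            length = data[i + 1]          # every other option has a length byte
            payload = data[i + 2:i + length]
            out.append(f"{OPTION_NAMES.get(kind, kind)}={payload.hex() or '-'}")
            i += length
        return out

    print(decode_tcp_options("020405500402080AEC5E0D850000000001030307"))
    # ['MSS=0550', 'SACK-OK=-', 'Timestamps=ec5e0d8500000000', 'NOP', 'WScale=07']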
Dec 5 04:52:19 localhost podman[239519]: time="2025-12-05T09:52:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:52:19 localhost podman[239519]: @ - - [05/Dec/2025:09:52:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147736 "" "Go-http-client/1.1" Dec 5 04:52:19 localhost podman[239519]: @ - - [05/Dec/2025:09:52:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17235 "" "Go-http-client/1.1" Dec 5 04:52:20 localhost nova_compute[280228]: 2025-12-05 09:52:20.304 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:52:22 localhost podman[282277]: 2025-12-05 09:52:22.194678208 +0000 UTC m=+0.081590585 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller) Dec 5 04:52:22 localhost podman[282277]: 2025-12-05 09:52:22.2336766 +0000 UTC m=+0.120588927 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 04:52:22 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:52:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:52:22 localhost podman[282305]: 2025-12-05 09:52:22.353207753 +0000 UTC m=+0.068579436 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 04:52:22 localhost podman[282305]: 2025-12-05 09:52:22.38419926 +0000 UTC m=+0.099570963 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 04:52:22 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:52:22 localhost sshd[282342]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:52:22 localhost systemd[1]: Created slice User Slice of UID 1003. Dec 5 04:52:22 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Dec 5 04:52:22 localhost systemd-logind[760]: New session 62 of user tripleo-admin. Dec 5 04:52:22 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Dec 5 04:52:22 localhost systemd[1]: Starting User Manager for UID 1003... Dec 5 04:52:22 localhost systemd[282370]: Queued start job for default target Main User Target. Dec 5 04:52:22 localhost systemd[282370]: Created slice User Application Slice. Dec 5 04:52:22 localhost systemd[282370]: Started Mark boot as successful after the user session has run 2 minutes. Dec 5 04:52:22 localhost systemd[282370]: Started Daily Cleanup of User's Temporary Directories. Dec 5 04:52:22 localhost systemd[282370]: Reached target Paths. Dec 5 04:52:22 localhost systemd[282370]: Reached target Timers. Dec 5 04:52:22 localhost systemd[282370]: Starting D-Bus User Message Bus Socket... Dec 5 04:52:22 localhost systemd[282370]: Starting Create User's Volatile Files and Directories... Dec 5 04:52:22 localhost systemd[282370]: Listening on D-Bus User Message Bus Socket. Dec 5 04:52:22 localhost systemd[282370]: Reached target Sockets. Dec 5 04:52:22 localhost systemd[282370]: Finished Create User's Volatile Files and Directories. Dec 5 04:52:22 localhost systemd[282370]: Reached target Basic System. Dec 5 04:52:22 localhost systemd[282370]: Reached target Main User Target. Dec 5 04:52:22 localhost systemd[282370]: Startup finished in 132ms. Dec 5 04:52:22 localhost systemd[1]: Started User Manager for UID 1003. Dec 5 04:52:22 localhost systemd[1]: Started Session 62 of User tripleo-admin. 
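[note] The podman[239519] access-log lines above are the podman_exporter scraping the libpod REST API over the service socket (its config sets CONTAINER_HOST=unix:///run/podman/podman.sock). The same call can be issued with the standard library over a unix socket; a sketch, assuming the socket path from that config and root privileges to read it:

    # Issue the GET /v4.9.3/libpod/containers/json call seen in the log,
    # over the root podman service socket.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")   # host header only; unused for routing
            self.path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")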
Dec 5 04:52:23 localhost python3[282556]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)#012add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"#012# 100 ceph_dashboard (8443)#012add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"#012# 100 ceph_grafana (3100)#012add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"#012# 100 ceph_prometheus (9092)#012add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"#012# 100 ceph_rgw (8080)#012add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"#012# 110 ceph_mon (6789, 3300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"#012# 112 ceph_mds (6800-7300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"#012# 113 ceph_mgr (6800-7300, 8444)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"#012# 120 ceph_nfs (2049, 12049)#012add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"#012# 123 ceph_dashboard (9090, 9094, 9283)#012add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"#012 insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 5 04:52:23 localhost nova_compute[280228]: 2025-12-05 09:52:23.788 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:24 localhost python3[282718]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 5 04:52:25 localhost nova_compute[280228]: 2025-12-05 09:52:25.344 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:25 localhost systemd[1]: Stopping Netfilter Tables... Dec 5 04:52:25 localhost systemd[1]: nftables.service: Deactivated successfully. Dec 5 04:52:25 localhost systemd[1]: Stopped Netfilter Tables. Dec 5 04:52:25 localhost systemd[1]: Starting Netfilter Tables... Dec 5 04:52:25 localhost systemd[1]: Finished Netfilter Tables. 
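[note] The #012 sequences in the ansible-blockinfile record above are syslog-escaped newlines. Rendered, the block written into /etc/nftables/edpm-rules.nft (inserted before the "# Lock down INPUT chains" line, with {mark} expanded from marker_begin/marker_end) reads approximately:

    # BEGIN ceph firewall rules ANSIBLE MANAGED BLOCK
    # 100 ceph_alertmanager (9093)
    add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"
    # 100 ceph_dashboard (8443)
    add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"
    # 100 ceph_grafana (3100)
    add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"
    # 100 ceph_prometheus (9092)
    add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"
    # 100 ceph_rgw (8080)
    add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"
    # 110 ceph_mon (6789, 3300, 9100)
    add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"
    # 112 ceph_mds (6800-7300, 9100)
    add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"
    # 113 ceph_mgr (6800-7300, 8444)
    add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"
    # 120 ceph_nfs (2049, 12049)
    add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"
    # 123 ceph_dashboard (9090, 9094, 9283)
    add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"
    # END ceph firewall rules ANSIBLE MANAGED BLOCK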
Dec 5 04:52:27 localhost openstack_network_exporter[241668]: ERROR 09:52:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:52:27 localhost openstack_network_exporter[241668]: ERROR 09:52:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:52:27 localhost openstack_network_exporter[241668]: ERROR 09:52:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:52:27 localhost openstack_network_exporter[241668]: ERROR 09:52:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:52:27 localhost openstack_network_exporter[241668]: Dec 5 04:52:27 localhost openstack_network_exporter[241668]: ERROR 09:52:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:52:27 localhost openstack_network_exporter[241668]: Dec 5 04:52:28 localhost nova_compute[280228]: 2025-12-05 09:52:28.793 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:30 localhost nova_compute[280228]: 2025-12-05 09:52:30.346 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:33 localhost nova_compute[280228]: 2025-12-05 09:52:33.796 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:35 localhost nova_compute[280228]: 2025-12-05 09:52:35.377 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:52:35 localhost podman[282779]: 2025-12-05 09:52:35.657136393 +0000 UTC m=+0.082936335 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64) Dec 5 04:52:35 localhost podman[282779]: 2025-12-05 09:52:35.700829539 +0000 UTC m=+0.126629381 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, io.buildah.version=1.33.7, config_id=edpm, maintainer=Red Hat, Inc.) Dec 5 04:52:35 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:52:38 localhost nova_compute[280228]: 2025-12-05 09:52:38.800 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
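[note] The openstack_network_exporter errors above mean its appctl calls found no unixctl control sockets for ovn-northd or ovsdb-server; on a compute node that runs ovn-controller rather than ovn-northd (see the ovn_controller healthchecks), the northd one is expected to be absent. A trivial check for which control sockets actually exist; the glob paths follow the /run/openvswitch and /run/ovn mounts in the exporter's config_data and are assumptions:

    # List the ovs/ovn unixctl sockets present on the host; the exporter's
    # appctl calls fail when the matching *.ctl file is absent.
    import glob

    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "none found")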
Dec 5 04:52:39 localhost podman[282835]: 2025-12-05 09:52:39.199456829 +0000 UTC m=+0.082431851 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd) Dec 5 04:52:39 localhost podman[282835]: 2025-12-05 09:52:39.235675326 +0000 UTC m=+0.118650318 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, 
container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd) Dec 5 04:52:39 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:52:40 localhost nova_compute[280228]: 2025-12-05 09:52:40.411 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:43 localhost nova_compute[280228]: 2025-12-05 09:52:43.804 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:52:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:52:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:52:45 localhost podman[282890]: 2025-12-05 09:52:45.198988312 +0000 UTC m=+0.082616165 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:52:45 localhost podman[282890]: 2025-12-05 09:52:45.211762163 +0000 UTC m=+0.095390056 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:52:45 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
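[note] Each healthcheck burst in this log follows the same transient-unit pattern: systemd starts /usr/bin/podman healthcheck run <container-id>, podman emits a container health_status event (health_status=healthy) followed by exec_died, and the <id>.service unit deactivates. The same status can be queried on demand; container name taken from the log, and the inspect format string is a standard Go template stated here as an assumption:

    podman healthcheck run ceilometer_agent_compute
    podman inspect --format '{{.State.Health.Status}}' ceilometer_agent_compute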
Dec 5 04:52:45 localhost podman[282892]: 2025-12-05 09:52:45.266443704 +0000 UTC m=+0.143487186 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute) Dec 5 04:52:45 localhost podman[282892]: 2025-12-05 09:52:45.272718326 +0000 UTC m=+0.149761808 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 5 04:52:45 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:52:45 localhost systemd[1]: tmp-crun.CmS1OV.mount: Deactivated successfully. Dec 5 04:52:45 localhost podman[282891]: 2025-12-05 09:52:45.348511473 +0000 UTC m=+0.229366441 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 5 04:52:45 localhost podman[282891]: 2025-12-05 09:52:45.381673906 +0000 UTC m=+0.262528894 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125) Dec 5 04:52:45 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:52:45 localhost nova_compute[280228]: 2025-12-05 09:52:45.470 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:47 localhost podman[283027]: Dec 5 04:52:47 localhost podman[283027]: 2025-12-05 09:52:47.113637661 +0000 UTC m=+0.070661562 container create e5d4574beca576f21140ad7b6ca7189b40a48cecd16f06b224dd2c51a1606c9c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_gates, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:52:47 localhost systemd[1]: Started libpod-conmon-e5d4574beca576f21140ad7b6ca7189b40a48cecd16f06b224dd2c51a1606c9c.scope. Dec 5 04:52:47 localhost systemd[1]: Started libcrun container. 
Dec 5 04:52:47 localhost podman[283027]: 2025-12-05 09:52:47.077844106 +0000 UTC m=+0.034868057 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:52:47 localhost podman[283027]: 2025-12-05 09:52:47.193108429 +0000 UTC m=+0.150132330 container init e5d4574beca576f21140ad7b6ca7189b40a48cecd16f06b224dd2c51a1606c9c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_gates, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, name=rhceph, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True) Dec 5 04:52:47 localhost reverent_gates[283042]: 167 167 Dec 5 04:52:47 localhost systemd[1]: libpod-e5d4574beca576f21140ad7b6ca7189b40a48cecd16f06b224dd2c51a1606c9c.scope: Deactivated successfully. Dec 5 04:52:47 localhost podman[283027]: 2025-12-05 09:52:47.220040563 +0000 UTC m=+0.177064464 container start e5d4574beca576f21140ad7b6ca7189b40a48cecd16f06b224dd2c51a1606c9c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_gates, architecture=x86_64, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, name=rhceph, distribution-scope=public, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True) Dec 5 04:52:47 localhost podman[283027]: 2025-12-05 09:52:47.220671442 +0000 UTC m=+0.177695383 container attach e5d4574beca576f21140ad7b6ca7189b40a48cecd16f06b224dd2c51a1606c9c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_gates, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, version=7, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=1763362218, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 5 04:52:47 localhost podman[283027]: 2025-12-05 09:52:47.224182749 +0000 UTC m=+0.181206710 container died e5d4574beca576f21140ad7b6ca7189b40a48cecd16f06b224dd2c51a1606c9c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_gates, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, vcs-type=git) Dec 5 04:52:47 localhost podman[283047]: 2025-12-05 09:52:47.330815318 +0000 UTC m=+0.101213385 container remove e5d4574beca576f21140ad7b6ca7189b40a48cecd16f06b224dd2c51a1606c9c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_gates, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 5 04:52:47 localhost systemd[1]: 
libpod-conmon-e5d4574beca576f21140ad7b6ca7189b40a48cecd16f06b224dd2c51a1606c9c.scope: Deactivated successfully. Dec 5 04:52:47 localhost systemd[1]: Reloading. Dec 5 04:52:47 localhost systemd-rc-local-generator[283091]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:52:47 localhost systemd-sysv-generator[283094]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:52:47 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:47 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:47 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:47 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:52:47 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:47 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:47 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:47 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:47 localhost systemd[1]: var-lib-containers-storage-overlay-512d27f78870aabdd3b5d66ba567b6de984d10de815e20fa06b6ef09d16f9bf3-merged.mount: Deactivated successfully. Dec 5 04:52:47 localhost systemd[1]: tmp-crun.suURLn.mount: Deactivated successfully. Dec 5 04:52:47 localhost systemd[1]: Reloading. Dec 5 04:52:47 localhost systemd-sysv-generator[283133]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:52:47 localhost systemd-rc-local-generator[283129]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:52:48 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:48 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:48 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:48 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
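The reverent_gates container above is created, initialized, started, attached, dead and removed within roughly 0.2 s of wall-clock time, and its only output is "167 167". That matches the ceph uid:gid of 167:167 that ceph-mds reports below, so this looks like cephadm probing the rhceph image for the ceph user and group IDs before installing the mds unit; the systemd "Reloading." passes and generator warnings that follow are systemd re-reading unit files after the new service is written. A sketch for reconstructing such short container lifecycles from this journal (wall-clock timestamps are used because the m=+... offsets are per-podman-process monotonic times and cannot be compared across the different podman PIDs seen above):

    import re
    import sys
    from datetime import datetime

    # podman prints lifecycle events as, e.g.:
    #   "2025-12-05 09:52:47.113637661 +0000 UTC m=+0.070661562 container create <id> (...)"
    LIFECYCLE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+[\d.]+ "
        r"container (?P<event>create|init|start|attach|died|remove) (?P<cid>[0-9a-f]{64})"
    )

    def parse_ts(ts):
        # Trim nanoseconds to microseconds: strptime's %f takes at most six digits.
        return datetime.strptime(ts[:26], "%Y-%m-%d %H:%M:%S.%f")

    def lifecycles(lines):
        events = {}
        for line in lines:
            for m in LIFECYCLE.finditer(line):
                events.setdefault(m.group("cid"), []).append(
                    (parse_ts(m.group("ts")), m.group("event")))
        return events

    if __name__ == "__main__":
        for cid, evs in lifecycles(sys.stdin).items():
            evs.sort()
            span = (evs[-1][0] - evs[0][0]).total_seconds()
            print(cid[:12], " -> ".join(e for _, e in evs), f"({span:.3f}s)")

Run against this section, it would report e5d4574beca5 with create -> init -> start -> attach -> died -> remove in about 0.22 s.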
Dec 5 04:52:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:48 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:48 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:52:48 localhost systemd[1]: Starting Ceph mds.mds.np0005546419.rweotn for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b... Dec 5 04:52:48 localhost systemd[1]: session-61.scope: Deactivated successfully. Dec 5 04:52:48 localhost systemd-logind[760]: Session 61 logged out. Waiting for processes to exit. Dec 5 04:52:48 localhost systemd-logind[760]: Removed session 61. Dec 5 04:52:48 localhost podman[283197]: Dec 5 04:52:48 localhost podman[283197]: 2025-12-05 09:52:48.59381199 +0000 UTC m=+0.071287940 container create c015c4610c3f2055624efa509784067433be4bfda9dcb98f792ad98e6b135e3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mds-mds-np0005546419-rweotn, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 5 04:52:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e486f88bdfc8f6fc148e237dd653c44d4363819c469027a8d4ef9e882e15e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 04:52:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e486f88bdfc8f6fc148e237dd653c44d4363819c469027a8d4ef9e882e15e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 5 04:52:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e486f88bdfc8f6fc148e237dd653c44d4363819c469027a8d4ef9e882e15e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 04:52:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e486f88bdfc8f6fc148e237dd653c44d4363819c469027a8d4ef9e882e15e9/merged/var/lib/ceph/mds/ceph-mds.np0005546419.rweotn supports timestamps until 2038 (0x7fffffff) Dec 5 04:52:48 localhost podman[283197]: 2025-12-05 09:52:48.567888978 +0000 UTC m=+0.045364908 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:52:48 localhost podman[283197]: 2025-12-05 
09:52:48.6700592 +0000 UTC m=+0.147535130 container init c015c4610c3f2055624efa509784067433be4bfda9dcb98f792ad98e6b135e3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mds-mds-np0005546419-rweotn, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, ceph=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, release=1763362218) Dec 5 04:52:48 localhost podman[283197]: 2025-12-05 09:52:48.676477137 +0000 UTC m=+0.153953057 container start c015c4610c3f2055624efa509784067433be4bfda9dcb98f792ad98e6b135e3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mds-mds-np0005546419-rweotn, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, ceph=True, io.buildah.version=1.41.4, release=1763362218, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-type=git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7) Dec 5 04:52:48 localhost bash[283197]: c015c4610c3f2055624efa509784067433be4bfda9dcb98f792ad98e6b135e3f Dec 5 04:52:48 localhost systemd[1]: Started Ceph mds.mds.np0005546419.rweotn for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b. 
Dec 5 04:52:48 localhost ceph-mds[283215]: set uid:gid to 167:167 (ceph:ceph) Dec 5 04:52:48 localhost ceph-mds[283215]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2 Dec 5 04:52:48 localhost ceph-mds[283215]: main not setting numa affinity Dec 5 04:52:48 localhost ceph-mds[283215]: pidfile_write: ignore empty --pid-file Dec 5 04:52:48 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mds-mds-np0005546419-rweotn[283211]: starting mds.mds.np0005546419.rweotn at Dec 5 04:52:48 localhost ceph-mds[283215]: mds.mds.np0005546419.rweotn Updating MDS map to version 8 from mon.0 Dec 5 04:52:48 localhost nova_compute[280228]: 2025-12-05 09:52:48.807 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:48 localhost ceph-mds[283215]: mds.mds.np0005546419.rweotn Updating MDS map to version 9 from mon.0 Dec 5 04:52:48 localhost ceph-mds[283215]: mds.mds.np0005546419.rweotn Monitors have assigned me to become a standby. Dec 5 04:52:49 localhost podman[239519]: time="2025-12-05T09:52:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:52:49 localhost podman[239519]: @ - - [05/Dec/2025:09:52:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149884 "" "Go-http-client/1.1" Dec 5 04:52:49 localhost podman[239519]: @ - - [05/Dec/2025:09:52:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17712 "" "Go-http-client/1.1" Dec 5 04:52:50 localhost podman[283360]: 2025-12-05 09:52:50.23043572 +0000 UTC m=+0.082644156 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_CLEAN=True, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, version=7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Dec 5 04:52:50 localhost podman[283360]: 2025-12-05 09:52:50.330463568 +0000 UTC m=+0.182672044 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, com.redhat.component=rhceph-container, release=1763362218, architecture=x86_64, build-date=2025-11-26T19:44:28Z, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, RELEASE=main, 
io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main) Dec 5 04:52:50 localhost nova_compute[280228]: 2025-12-05 09:52:50.513 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:52:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:52:53 localhost podman[283483]: 2025-12-05 09:52:53.19901805 +0000 UTC m=+0.079116188 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:52:53 localhost systemd[1]: tmp-crun.EzqaGE.mount: Deactivated successfully. 
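A few entries above, the freshly installed ceph-mds (reef 18.2.1) registers with the monitors, follows the MDS map from version 8 to 9, and is assigned standby; for a filesystem that already has an active MDS, standby is the expected state for a newly added daemon. A hedged way to confirm daemon states from this host, reusing the client id and conf path that the ceph df calls later in this log use (the mdsmap field names are an assumption about the JSON shape of ceph fs status; adjust if the deployed release differs):

    import json
    import subprocess

    # Same CLI conventions as the "ceph df --format=json --id openstack
    # --conf /etc/ceph/ceph.conf" invocations seen in this journal.
    out = subprocess.run(
        ["ceph", "fs", "status", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    # Assumed field names: a top-level "mdsmap" list whose items carry
    # "name" and "state" (e.g. "active", "standby").
    for daemon in json.loads(out).get("mdsmap", []):
        print(daemon.get("name"), daemon.get("state"))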
Dec 5 04:52:53 localhost podman[283482]: 2025-12-05 09:52:53.28503926 +0000 UTC m=+0.164740846 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 04:52:53 localhost podman[283483]: 2025-12-05 09:52:53.304210566 +0000 UTC m=+0.184308694 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:52:53 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
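The node_exporter healthchecks above record the exporter's full command line, including the systemd collector filter --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service, so only matching units are exported as metrics. A quick check of which unit names from this journal the filter keeps (using fullmatch, on the assumption that node_exporter anchors its include pattern):

    import re

    # The exact unit-include pattern from the node_exporter command line above.
    UNIT_INCLUDE = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    # Unit names appearing elsewhere in this journal.
    for unit in ["virtqemud.service", "openvswitch.service",
                 "network.service", "insights-client-boot.service"]:
        print(unit, bool(UNIT_INCLUDE.fullmatch(unit)))

By this filter virtqemud.service and openvswitch.service are exported, while network.service and insights-client-boot.service are not.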
Dec 5 04:52:53 localhost podman[283482]: 2025-12-05 09:52:53.319882314 +0000 UTC m=+0.199583900 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 5 04:52:53 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:52:53 localhost nova_compute[280228]: 2025-12-05 09:52:53.810 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:55 localhost nova_compute[280228]: 2025-12-05 09:52:55.636 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:52:57 localhost openstack_network_exporter[241668]: ERROR 09:52:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:52:57 localhost openstack_network_exporter[241668]: ERROR 09:52:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:52:57 localhost openstack_network_exporter[241668]: ERROR 09:52:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:52:57 localhost openstack_network_exporter[241668]: ERROR 09:52:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:52:57 localhost openstack_network_exporter[241668]: Dec 5 04:52:57 localhost openstack_network_exporter[241668]: ERROR 09:52:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:52:57 localhost openstack_network_exporter[241668]: Dec 5 04:52:58 localhost nova_compute[280228]: 2025-12-05 09:52:58.813 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:00 localhost nova_compute[280228]: 2025-12-05 09:53:00.677 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:03 localhost nova_compute[280228]: 2025-12-05 09:53:03.818 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:53:03.900 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:53:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:53:03.902 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:53:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:53:03.903 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:53:05 localhost nova_compute[280228]: 2025-12-05 09:53:05.710 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:53:06 localhost podman[283532]: 2025-12-05 09:53:06.20745391 +0000 UTC m=+0.087914219 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9) Dec 5 04:53:06 localhost podman[283532]: 2025-12-05 09:53:06.223038275 +0000 UTC m=+0.103498574 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, config_id=edpm, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal) Dec 5 04:53:06 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
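The openstack_network_exporter ERRORs a little above ("no control socket files found for ovn-northd", "please specify an existing datapath") recur on every scrape. On an EDPM compute node only ovs-vswitchd, ovsdb-server and ovn-controller normally run locally, while ovn-northd lives on the control plane, so the northd and PMD probes plausibly have nothing to talk to here rather than indicating a fault. A minimal existence check for the control sockets, with paths taken from the mounts in the exporter's config_data above (the *.ctl naming is the usual OVS/OVN convention and an assumption here):

    import glob

    # /var/run/openvswitch is mounted at /run/openvswitch and
    # /var/lib/openvswitch/ovn at /run/ovn in the exporter container.
    PATTERNS = {
        "ovsdb-server": "/run/openvswitch/ovsdb-server.*.ctl",
        "ovs-vswitchd": "/run/openvswitch/ovs-vswitchd.*.ctl",
        "ovn-northd": "/run/ovn/ovn-northd.*.ctl",
    }

    for daemon, pattern in PATTERNS.items():
        hits = glob.glob(pattern)
        print(f"{daemon:13s}", hits[0] if hits else "no control socket found")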
Dec 5 04:53:06 localhost nova_compute[280228]: 2025-12-05 09:53:06.596 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:53:06 localhost nova_compute[280228]: 2025-12-05 09:53:06.597 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:53:06 localhost nova_compute[280228]: 2025-12-05 09:53:06.621 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:53:06 localhost nova_compute[280228]: 2025-12-05 09:53:06.621 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 04:53:06 localhost nova_compute[280228]: 2025-12-05 09:53:06.622 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 04:53:07 localhost nova_compute[280228]: 2025-12-05 09:53:07.497 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:53:07 localhost nova_compute[280228]: 2025-12-05 09:53:07.498 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:53:07 localhost nova_compute[280228]: 2025-12-05 09:53:07.498 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:53:07 localhost nova_compute[280228]: 2025-12-05 09:53:07.499 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.704 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.719 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.719 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.720 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.721 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.721 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.721 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.722 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.722 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.722 280232 DEBUG nova.compute.manager 
[None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.723 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.741 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.741 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.742 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.742 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.743 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:53:08 localhost nova_compute[280228]: 2025-12-05 09:53:08.821 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:09 localhost nova_compute[280228]: 2025-12-05 09:53:09.292 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:53:09 localhost nova_compute[280228]: 2025-12-05 09:53:09.354 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:53:09 localhost nova_compute[280228]: 2025-12-05 09:53:09.354 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path 
_get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 5 04:53:09 localhost nova_compute[280228]: 2025-12-05 09:53:09.584 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 5 04:53:09 localhost nova_compute[280228]: 2025-12-05 09:53:09.586 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12274MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 5 04:53:09 localhost nova_compute[280228]: 2025-12-05 09:53:09.587 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 04:53:09 localhost nova_compute[280228]: 2025-12-05 09:53:09.587 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 04:53:09 localhost nova_compute[280228]: 2025-12-05 09:53:09.662 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 5 04:53:09 localhost nova_compute[280228]: 2025-12-05 09:53:09.663 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 5 04:53:09 localhost nova_compute[280228]: 2025-12-05 09:53:09.663 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 5 04:53:09 localhost nova_compute[280228]: 2025-12-05 09:53:09.700 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 5 04:53:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:53:10 localhost nova_compute[280228]: 2025-12-05 09:53:10.105 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 04:53:10 localhost nova_compute[280228]: 2025-12-05 09:53:10.113 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 5 04:53:10 localhost nova_compute[280228]: 2025-12-05 09:53:10.155 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 5 04:53:10 localhost nova_compute[280228]: 2025-12-05 09:53:10.158 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 5 04:53:10 localhost nova_compute[280228]: 2025-12-05 09:53:10.158 280232 DEBUG oslo_concurrency.lockutils [None
req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:53:10 localhost podman[283596]: 2025-12-05 09:53:10.195735424 +0000 UTC m=+0.079078588 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:53:10 localhost podman[283596]: 2025-12-05 09:53:10.206696209 +0000 UTC m=+0.090039363 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:53:10 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:53:10 localhost nova_compute[280228]: 2025-12-05 09:53:10.765 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:13 localhost nova_compute[280228]: 2025-12-05 09:53:13.824 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:15 localhost nova_compute[280228]: 2025-12-05 09:53:15.802 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:53:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:53:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:53:16 localhost podman[283614]: 2025-12-05 09:53:16.231567519 +0000 UTC m=+0.110905201 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 04:53:16 localhost podman[283614]: 2025-12-05 09:53:16.238991715 +0000 UTC m=+0.118329297 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck 
podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 04:53:16 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 04:53:16 localhost podman[283616]: 2025-12-05 09:53:16.332876925 +0000 UTC m=+0.203994846 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125) Dec 5 04:53:16 localhost podman[283616]: 2025-12-05 09:53:16.340816028 +0000 UTC m=+0.211933929 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Dec 5 04:53:16 localhost podman[283615]: 2025-12-05 09:53:16.370071472 +0000 UTC m=+0.243017639 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:53:16 localhost podman[283615]: 2025-12-05 09:53:16.377495299 +0000 UTC m=+0.250441456 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:53:16 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:53:16 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:53:17 localhost podman[283749]: Dec 5 04:53:17 localhost podman[283749]: 2025-12-05 09:53:17.023876705 +0000 UTC m=+0.079645426 container create 768e3d721ecce2f6964dc70618f9e9c5209d9560cbaf035103dbf680ed65c4b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_mayer, GIT_BRANCH=main, vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, distribution-scope=public, GIT_CLEAN=True, version=7, vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.41.4, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7) Dec 5 04:53:17 localhost systemd[1]: Started libpod-conmon-768e3d721ecce2f6964dc70618f9e9c5209d9560cbaf035103dbf680ed65c4b3.scope. Dec 5 04:53:17 localhost systemd[1]: Started libcrun container. 
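The resource-tracker pass recorded above follows a fixed shape: serialize on the "compute_resources" lock, shell out to ceph df for pool capacity, reconcile inventory with placement, then release the lock (held 0.571s here). A minimal sketch of the two oslo primitives those log lines correspond to, assuming only the documented oslo.concurrency API; refresh_pool_stats is a hypothetical stand-in for the tracker body, not nova's actual method:

    import json

    from oslo_concurrency import lockutils, processutils


    @lockutils.synchronized('compute_resources')
    def refresh_pool_stats():
        # Hypothetical stand-in for ResourceTracker._update_available_resource:
        # everything here runs under the same "compute_resources" lock that the
        # log shows being acquired, held, and released around the update.
        out, _err = processutils.execute(
            'ceph', 'df', '--format=json',
            '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
        return json.loads(out)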
Dec 5 04:53:17 localhost podman[283749]: 2025-12-05 09:53:16.991971439 +0000 UTC m=+0.047740200 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:53:17 localhost podman[283749]: 2025-12-05 09:53:17.100952 +0000 UTC m=+0.156720771 container init 768e3d721ecce2f6964dc70618f9e9c5209d9560cbaf035103dbf680ed65c4b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_mayer, description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, RELEASE=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main) Dec 5 04:53:17 localhost podman[283749]: 2025-12-05 09:53:17.111576905 +0000 UTC m=+0.167345626 container start 768e3d721ecce2f6964dc70618f9e9c5209d9560cbaf035103dbf680ed65c4b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_mayer, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, name=rhceph, release=1763362218, ceph=True, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, build-date=2025-11-26T19:44:28Z, RELEASE=main) Dec 5 04:53:17 localhost podman[283749]: 2025-12-05 09:53:17.111819002 +0000 UTC m=+0.167587733 container attach 768e3d721ecce2f6964dc70618f9e9c5209d9560cbaf035103dbf680ed65c4b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_mayer, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, 
build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=) Dec 5 04:53:17 localhost sweet_mayer[283764]: 167 167 Dec 5 04:53:17 localhost systemd[1]: libpod-768e3d721ecce2f6964dc70618f9e9c5209d9560cbaf035103dbf680ed65c4b3.scope: Deactivated successfully. Dec 5 04:53:17 localhost podman[283749]: 2025-12-05 09:53:17.117681301 +0000 UTC m=+0.173450072 container died 768e3d721ecce2f6964dc70618f9e9c5209d9560cbaf035103dbf680ed65c4b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_mayer, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, version=7, io.buildah.version=1.41.4, ceph=True, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., release=1763362218, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 5 04:53:17 localhost systemd[1]: var-lib-containers-storage-overlay-944dddbd8e27ac50ae92d894048ceaaae9cd84c4c01a7093d01ab6f329554cce-merged.mount: Deactivated successfully. 
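Each "Started libpod-conmon-<id>.scope" / "Started libcrun container" pair above is podman delegating a container and its conmon monitor into transient systemd scopes, which is why container exits later surface as "...scope: Deactivated successfully." plus an overlay unmount. While a container is alive, those scopes can be inspected directly; a small sketch using systemctl's glob matching:

    import subprocess

    # List podman's transient per-container scopes: each running container
    # contributes a libpod-<id>.scope plus a libpod-conmon-<id>.scope.
    subprocess.run(
        ["systemctl", "list-units", "--type=scope", "libpod-*"],
        check=False)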
Dec 5 04:53:17 localhost podman[283769]: 2025-12-05 09:53:17.229401026 +0000 UTC m=+0.101331218 container remove 768e3d721ecce2f6964dc70618f9e9c5209d9560cbaf035103dbf680ed65c4b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_mayer, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:53:17 localhost systemd[1]: libpod-conmon-768e3d721ecce2f6964dc70618f9e9c5209d9560cbaf035103dbf680ed65c4b3.scope: Deactivated successfully. Dec 5 04:53:17 localhost podman[283791]: Dec 5 04:53:17 localhost podman[283791]: 2025-12-05 09:53:17.464985746 +0000 UTC m=+0.064372209 container create e0e90fd8bef7f14ba8e68434b97357ffd63ffc7e9c006cbe1c2181307c5b6f73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_feynman, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, name=rhceph, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:53:17 localhost podman[283791]: 2025-12-05 09:53:17.436031631 +0000 UTC m=+0.035418094 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:53:17 localhost systemd[1]: Started libpod-conmon-e0e90fd8bef7f14ba8e68434b97357ffd63ffc7e9c006cbe1c2181307c5b6f73.scope. Dec 5 04:53:17 localhost systemd[1]: Started libcrun container. 
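The create, pull, init, start, attach, died, remove cycle these rhceph containers go through is a one-shot probe pattern: start a throwaway container, read its stdout, discard it. The JSON printed just below looks like ceph-volume inventory output, so a sketch of the same pattern follows; the exact command line is an assumption, since the log records only the container lifecycle:

    import subprocess

    # Assumed probe: a throwaway rhceph container whose stdout is captured;
    # --rm reproduces the immediate died/remove entries seen in the log.
    out = subprocess.run(
        ["podman", "run", "--rm",
         "registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
         "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout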
Dec 5 04:53:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041760d2ad35dfd9d83b01ea19caeec7f32f0c8cb52819f9193ab9da497bcdba/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 04:53:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041760d2ad35dfd9d83b01ea19caeec7f32f0c8cb52819f9193ab9da497bcdba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 04:53:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041760d2ad35dfd9d83b01ea19caeec7f32f0c8cb52819f9193ab9da497bcdba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 5 04:53:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041760d2ad35dfd9d83b01ea19caeec7f32f0c8cb52819f9193ab9da497bcdba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 04:53:17 localhost podman[283791]: 2025-12-05 09:53:17.626591155 +0000 UTC m=+0.225977628 container init e0e90fd8bef7f14ba8e68434b97357ffd63ffc7e9c006cbe1c2181307c5b6f73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_feynman, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, RELEASE=main, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=) Dec 5 04:53:17 localhost podman[283791]: 2025-12-05 09:53:17.669831787 +0000 UTC m=+0.269218250 container start e0e90fd8bef7f14ba8e68434b97357ffd63ffc7e9c006cbe1c2181307c5b6f73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_feynman, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, release=1763362218, name=rhceph, architecture=x86_64, GIT_CLEAN=True, RELEASE=main, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:53:17 localhost podman[283791]: 2025-12-05 09:53:17.673497178 +0000 UTC m=+0.272883681 container attach e0e90fd8bef7f14ba8e68434b97357ffd63ffc7e9c006cbe1c2181307c5b6f73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_feynman, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, name=rhceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, ceph=True, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=)
Dec 5 04:53:18 localhost elated_feynman[283807]:
  [
    {
      "available": false,
      "ceph_device": false,
      "device_id": "QEMU_DVD-ROM_QM00001",
      "lsm_data": {},
      "lvs": [],
      "path": "/dev/sr0",
      "rejected_reasons": [
        "Has a FileSystem",
        "Insufficient space (<5GB)"
      ],
      "sys_api": {
        "actuators": null,
        "device_nodes": "sr0",
        "human_readable_size": "482.00 KB",
        "id_bus": "ata",
        "model": "QEMU DVD-ROM",
        "nr_requests": "2",
        "partitions": {},
        "path": "/dev/sr0",
        "removable": "1",
        "rev": "2.5+",
        "ro": "0",
        "rotational": "1",
        "sas_address": "",
        "sas_device_handle": "",
        "scheduler_mode": "mq-deadline",
        "sectors": 0,
        "sectorsize": "2048",
        "size": 493568.0,
        "support_discard": "0",
        "type": "disk",
        "vendor": "QEMU"
      }
    }
  ]
Dec 5 04:53:18 localhost systemd[1]: libpod-e0e90fd8bef7f14ba8e68434b97357ffd63ffc7e9c006cbe1c2181307c5b6f73.scope: Deactivated successfully.
Dec 5 04:53:18 localhost podman[283791]: 2025-12-05 09:53:18.679142724 +0000 UTC m=+1.278529237 container died e0e90fd8bef7f14ba8e68434b97357ffd63ffc7e9c006cbe1c2181307c5b6f73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_feynman, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , version=7, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=1763362218, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, RELEASE=main)
Dec 5 04:53:18 localhost systemd[1]: var-lib-containers-storage-overlay-041760d2ad35dfd9d83b01ea19caeec7f32f0c8cb52819f9193ab9da497bcdba-merged.mount: Deactivated successfully.
Dec 5 04:53:18 localhost podman[285705]: 2025-12-05 09:53:18.788805967 +0000 UTC m=+0.098398050 container remove e0e90fd8bef7f14ba8e68434b97357ffd63ffc7e9c006cbe1c2181307c5b6f73 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_feynman, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1763362218, distribution-scope=public, version=7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git)
Dec 5 04:53:18 localhost systemd[1]: libpod-conmon-e0e90fd8bef7f14ba8e68434b97357ffd63ffc7e9c006cbe1c2181307c5b6f73.scope: Deactivated successfully.
Dec 5 04:53:18 localhost nova_compute[280228]: 2025-12-05 09:53:18.827 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:19 localhost podman[239519]: time="2025-12-05T09:53:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:53:19 localhost podman[239519]: @ - - [05/Dec/2025:09:53:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149884 "" "Go-http-client/1.1" Dec 5 04:53:19 localhost podman[239519]: @ - - [05/Dec/2025:09:53:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17722 "" "Go-http-client/1.1" Dec 5 04:53:20 localhost nova_compute[280228]: 2025-12-05 09:53:20.834 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:23 localhost nova_compute[280228]: 2025-12-05 09:53:23.830 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:53:24 localhost systemd[1]: tmp-crun.GrM88b.mount: Deactivated successfully. Dec 5 04:53:24 localhost podman[285720]: 2025-12-05 09:53:24.21540428 +0000 UTC m=+0.097186667 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:53:24 localhost podman[285720]: 2025-12-05 09:53:24.259551143 +0000 UTC m=+0.141333530 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible) Dec 5 04:53:24 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:53:24 localhost podman[285721]: 2025-12-05 09:53:24.280394828 +0000 UTC m=+0.160546995 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 04:53:24 localhost podman[285721]: 2025-12-05 09:53:24.29678452 +0000 UTC m=+0.176936727 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', 
'--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:53:24 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:53:25 localhost systemd[1]: tmp-crun.LQm45P.mount: Deactivated successfully. Dec 5 04:53:25 localhost systemd[1]: session-62.scope: Deactivated successfully. Dec 5 04:53:25 localhost systemd[1]: session-62.scope: Consumed 1.314s CPU time. Dec 5 04:53:25 localhost systemd-logind[760]: Session 62 logged out. Waiting for processes to exit. Dec 5 04:53:25 localhost systemd-logind[760]: Removed session 62. Dec 5 04:53:25 localhost nova_compute[280228]: 2025-12-05 09:53:25.837 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:27 localhost openstack_network_exporter[241668]: ERROR 09:53:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:53:27 localhost openstack_network_exporter[241668]: ERROR 09:53:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:53:27 localhost openstack_network_exporter[241668]: ERROR 09:53:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:53:27 localhost openstack_network_exporter[241668]: ERROR 09:53:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:53:27 localhost openstack_network_exporter[241668]: Dec 5 04:53:27 localhost openstack_network_exporter[241668]: ERROR 09:53:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:53:27 localhost openstack_network_exporter[241668]: Dec 5 04:53:28 localhost nova_compute[280228]: 2025-12-05 09:53:28.832 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:30 localhost nova_compute[280228]: 2025-12-05 09:53:30.888 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:33 localhost nova_compute[280228]: 2025-12-05 09:53:33.834 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:35 localhost systemd[1]: Stopping User Manager for UID 1003... Dec 5 04:53:35 localhost systemd[282370]: Activating special unit Exit the Session... Dec 5 04:53:35 localhost systemd[282370]: Stopped target Main User Target. Dec 5 04:53:35 localhost systemd[282370]: Stopped target Basic System. Dec 5 04:53:35 localhost systemd[282370]: Stopped target Paths. Dec 5 04:53:35 localhost systemd[282370]: Stopped target Sockets. Dec 5 04:53:35 localhost systemd[282370]: Stopped target Timers. Dec 5 04:53:35 localhost systemd[282370]: Stopped Mark boot as successful after the user session has run 2 minutes. 
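node_exporter above runs with --collector.systemd and a unit-include regex covering the edpm_*, ovs*, openvswitch, virt* and rsyslog units, publishing on host port 9100 per its 'ports': ['9100:9100'] config. The openstack_network_exporter errors that follow are a separate issue: it finds no ovsdb-server or ovn-northd control sockets on this host, so its appctl calls fail. A quick probe of the node_exporter endpoint, assuming it is reachable on the host:

    import urllib.request

    # Fetch the exporter's metrics and keep the systemd unit-state series
    # produced by --collector.systemd with the unit-include filter above.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        body = resp.read().decode()
    systemd_metrics = [line for line in body.splitlines()
                       if line.startswith("node_systemd_unit_state")]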
Dec 5 04:53:35 localhost systemd[282370]: Stopped Daily Cleanup of User's Temporary Directories. Dec 5 04:53:35 localhost systemd[282370]: Closed D-Bus User Message Bus Socket. Dec 5 04:53:35 localhost systemd[282370]: Stopped Create User's Volatile Files and Directories. Dec 5 04:53:35 localhost systemd[282370]: Removed slice User Application Slice. Dec 5 04:53:35 localhost systemd[282370]: Reached target Shutdown. Dec 5 04:53:35 localhost systemd[282370]: Finished Exit the Session. Dec 5 04:53:35 localhost systemd[282370]: Reached target Exit the Session. Dec 5 04:53:35 localhost systemd[1]: user@1003.service: Deactivated successfully. Dec 5 04:53:35 localhost systemd[1]: Stopped User Manager for UID 1003. Dec 5 04:53:35 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Dec 5 04:53:35 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Dec 5 04:53:35 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Dec 5 04:53:35 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Dec 5 04:53:35 localhost systemd[1]: Removed slice User Slice of UID 1003. Dec 5 04:53:35 localhost systemd[1]: user-1003.slice: Consumed 1.684s CPU time. Dec 5 04:53:35 localhost nova_compute[280228]: 2025-12-05 09:53:35.941 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:53:37 localhost podman[285855]: 2025-12-05 09:53:37.20789486 +0000 UTC m=+0.090333721 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, io.buildah.version=1.33.7, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 5 04:53:37 localhost podman[285855]: 2025-12-05 09:53:37.251888079 +0000 UTC m=+0.134326920 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., vcs-type=git, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 5 04:53:37 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:53:38 localhost nova_compute[280228]: 2025-12-05 09:53:38.838 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:40 localhost nova_compute[280228]: 2025-12-05 09:53:40.982 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:53:41 localhost podman[285874]: 2025-12-05 09:53:41.181359259 +0000 UTC m=+0.066796274 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 5 04:53:41 localhost podman[285874]: 2025-12-05 09:53:41.19505297 +0000 UTC m=+0.080490015 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:53:41 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:53:43 localhost nova_compute[280228]: 2025-12-05 09:53:43.842 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:46 localhost nova_compute[280228]: 2025-12-05 09:53:46.005 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:53:46 localhost podman[285947]: 2025-12-05 09:53:46.434228193 +0000 UTC m=+0.087270229 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 04:53:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:53:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
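The entries above show the recurring health-check cycle on this node: systemd starts a transient `/usr/bin/podman healthcheck run <id>` unit, podman emits a `container health_status` event (here `health_status=healthy`), the exec session ends with `exec_died`, and the transient service deactivates. A minimal sketch for tallying those events per container from a syslog capture shaped like this one; the log path is an assumption, and the pattern follows the exact label layout visible above:

```python
import re
from collections import Counter

# Sketch only: count podman health-check events per container.
# Uses findall because this capture fuses several entries onto one line.
EVENT_RE = re.compile(
    r"container (health_status|exec_died) [0-9a-f]{64} "
    r"\(image=[^,]+, name=([^,)]+)"
)

counts = Counter()
with open("/var/log/messages") as log:   # path is an assumption
    for line in log:
        for event, name in EVENT_RE.findall(line):
            counts[(name, event)] += 1

for (name, event), n in sorted(counts.items()):
    print(f"{name:32s} {event:14s} {n}")
```

Comparing the `health_status` and `exec_died` counts per container is a quick way to spot checks that run but never report a status.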
Dec 5 04:53:46 localhost podman[285947]: 2025-12-05 09:53:46.468049867 +0000 UTC m=+0.121091883 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:53:46 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 04:53:46 localhost podman[285970]: 2025-12-05 09:53:46.561412557 +0000 UTC m=+0.090781984 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 5 04:53:46 localhost podman[285970]: 2025-12-05 09:53:46.566147359 +0000 UTC m=+0.095516766 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3) Dec 5 04:53:46 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:53:46 localhost podman[285971]: 2025-12-05 09:53:46.622947022 +0000 UTC m=+0.150175094 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Dec 5 04:53:46 localhost podman[285971]: 2025-12-05 09:53:46.634656724 +0000 UTC m=+0.161884796 container exec_died 
6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Dec 5 04:53:46 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:53:48 localhost nova_compute[280228]: 2025-12-05 09:53:48.845 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:49 localhost podman[239519]: time="2025-12-05T09:53:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:53:49 localhost podman[239519]: @ - - [05/Dec/2025:09:53:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149884 "" "Go-http-client/1.1" Dec 5 04:53:49 localhost podman[239519]: @ - - [05/Dec/2025:09:53:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17723 "" "Go-http-client/1.1" Dec 5 04:53:51 localhost nova_compute[280228]: 2025-12-05 09:53:51.047 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:53 localhost nova_compute[280228]: 2025-12-05 09:53:53.848 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:53:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:53:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
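The `GET /v4.9.3/libpod/containers/json` and `/containers/stats` requests logged by podman[239519] above are REST calls against the libpod API socket; the podman_exporter config earlier points `CONTAINER_HOST` at `unix:///run/podman/podman.sock`, and the `Go-http-client/1.1` user agent is consistent with a Go scraper such as prometheus-podman-exporter. A minimal sketch of issuing the same request from Python over that socket; the socket path comes from the config above, while the `Names`/`State` field names are assumptions about the response schema:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a unix socket, enough to talk to the libpod API."""
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

# Socket path taken from the podman_exporter CONTAINER_HOST setting above.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for c in json.loads(resp.read()):
    # "Names" and "State" are assumed field names; verify against your
    # podman version's API docs.
    print(c.get("Names"), c.get("State"))
```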
Dec 5 04:53:55 localhost podman[286006]: 2025-12-05 09:53:55.194040621 +0000 UTC m=+0.084600608 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 5 04:53:55 localhost podman[286007]: 2025-12-05 09:53:55.250040441 +0000 UTC m=+0.133538416 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:53:55 localhost podman[286006]: 2025-12-05 09:53:55.258276278 +0000 UTC m=+0.148836265 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:53:55 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:53:55 localhost podman[286007]: 2025-12-05 09:53:55.28567375 +0000 UTC m=+0.169171695 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 04:53:55 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
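Each `config_data={...}` label above is the declarative spec that edpm_ansible stamped on the container at creation time, so the running container should be reproducible from it. A rough sketch of rendering such a dict into a `podman run` command line, shown with a trimmed version of the node_exporter spec above; this is an illustration, not the actual edpm_ansible module logic, and note that with `net: host` the published-port mapping is effectively a no-op:

```python
import shlex

# Sketch only: translate an edpm-style config_data dict into a rough
# `podman run` equivalent for inspection.
def render_podman_run(name, cfg):
    argv = ["podman", "run", "--detach", "--name", name]
    if cfg.get("privileged"):
        argv.append("--privileged")
    if cfg.get("net"):
        argv.append(f"--network={cfg['net']}")
    if cfg.get("user"):
        argv += ["--user", cfg["user"]]
    if cfg.get("restart"):
        argv.append(f"--restart={cfg['restart']}")
    for port in cfg.get("ports", []):
        argv += ["--publish", port]      # ignored by podman under host networking
    for key, val in cfg.get("environment", {}).items():
        argv += ["--env", f"{key}={val}"]
    for vol in cfg.get("volumes", []):
        argv += ["--volume", vol]
    argv.append(cfg["image"])
    argv += cfg.get("command", [])
    return shlex.join(argv)

# Trimmed from the node_exporter config_data logged above.
cfg = {
    "image": "quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c",
    "restart": "always",
    "user": "root",
    "privileged": True,
    "ports": ["9100:9100"],
    "net": "host",
    "command": ["--web.disable-exporter-metrics", "--collector.systemd"],
    "volumes": ["/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw"],
}
print(render_podman_run("node_exporter", cfg))
```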
Dec 5 04:53:56 localhost nova_compute[280228]: 2025-12-05 09:53:56.099 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:53:57 localhost openstack_network_exporter[241668]: ERROR 09:53:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 04:53:57 localhost openstack_network_exporter[241668]: ERROR 09:53:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:53:57 localhost openstack_network_exporter[241668]: ERROR 09:53:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:53:57 localhost openstack_network_exporter[241668]: ERROR 09:53:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 04:53:57 localhost openstack_network_exporter[241668]:
Dec 5 04:53:57 localhost openstack_network_exporter[241668]: ERROR 09:53:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 04:53:57 localhost openstack_network_exporter[241668]:
Dec 5 04:53:58 localhost nova_compute[280228]: 2025-12-05 09:53:58.851 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:00 localhost nova_compute[280228]: 2025-12-05 09:54:00.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:54:00 localhost nova_compute[280228]: 2025-12-05 09:54:00.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 5 04:54:00 localhost nova_compute[280228]: 2025-12-05 09:54:00.526 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 5 04:54:00 localhost nova_compute[280228]: 2025-12-05 09:54:00.526 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:54:00 localhost nova_compute[280228]: 2025-12-05 09:54:00.527 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 5 04:54:00 localhost nova_compute[280228]: 2025-12-05 09:54:00.540 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:54:01 localhost nova_compute[280228]: 2025-12-05 09:54:01.137 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:01 localhost nova_compute[280228]: 2025-12-05
09:54:01.551 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:54:02 localhost nova_compute[280228]: 2025-12-05 09:54:02.503 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:54:03 localhost nova_compute[280228]: 2025-12-05 09:54:03.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:54:03 localhost nova_compute[280228]: 2025-12-05 09:54:03.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:54:03 localhost nova_compute[280228]: 2025-12-05 09:54:03.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:54:03 localhost nova_compute[280228]: 2025-12-05 09:54:03.528 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:54:03 localhost nova_compute[280228]: 2025-12-05 09:54:03.529 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:54:03 localhost nova_compute[280228]: 2025-12-05 09:54:03.529 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:54:03 localhost nova_compute[280228]: 2025-12-05 09:54:03.529 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:54:03 localhost nova_compute[280228]: 2025-12-05 09:54:03.530 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:54:03 localhost nova_compute[280228]: 2025-12-05 09:54:03.853 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:54:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:54:03.901 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:54:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:54:03.901 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:54:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:54:03.902 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:54:03 localhost nova_compute[280228]: 2025-12-05 09:54:03.989 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.061 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.062 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.299 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.301 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=12284MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.301 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.302 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.520 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.521 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.521 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.622 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing inventories for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.651 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating ProviderTree inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.652 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.672 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing aggregate associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.737 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing trait associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, traits: 
COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 5 04:54:04 localhost nova_compute[280228]: 2025-12-05 09:54:04.786 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:54:05 localhost nova_compute[280228]: 2025-12-05 09:54:05.302 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:54:05 localhost nova_compute[280228]: 2025-12-05 09:54:05.309 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:54:05 localhost nova_compute[280228]: 2025-12-05 09:54:05.331 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:54:05 localhost nova_compute[280228]: 2025-12-05 09:54:05.334 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for 
np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:54:05 localhost nova_compute[280228]: 2025-12-05 09:54:05.334 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:54:06 localhost nova_compute[280228]: 2025-12-05 09:54:06.175 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:54:07 localhost nova_compute[280228]: 2025-12-05 09:54:07.335 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:54:07 localhost nova_compute[280228]: 2025-12-05 09:54:07.336 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:54:07 localhost nova_compute[280228]: 2025-12-05 09:54:07.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:54:07 localhost nova_compute[280228]: 2025-12-05 09:54:07.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 04:54:07 localhost nova_compute[280228]: 2025-12-05 09:54:07.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 04:54:07 localhost nova_compute[280228]: 2025-12-05 09:54:07.596 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:54:07 localhost nova_compute[280228]: 2025-12-05 09:54:07.597 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:54:07 localhost nova_compute[280228]: 2025-12-05 09:54:07.597 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:54:07 localhost nova_compute[280228]: 2025-12-05 09:54:07.597 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 
96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:54:08 localhost nova_compute[280228]: 2025-12-05 09:54:08.012 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:54:08 localhost nova_compute[280228]: 2025-12-05 09:54:08.031 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:54:08 localhost nova_compute[280228]: 2025-12-05 09:54:08.032 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:54:08 localhost nova_compute[280228]: 2025-12-05 09:54:08.033 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:54:08 localhost nova_compute[280228]: 2025-12-05 09:54:08.033 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 04:54:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
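The resource-tracker cycle above reported inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 (VCPU total 8 at allocation_ratio 16.0; MEMORY_MB total 15738 with 512 reserved; DISK_GB total 41 with 1 reserved) and, earlier, the single instance's placement allocations (VCPU 1, MEMORY_MB 512, DISK_GB 2). Placement derives schedulable capacity from such inventory as (total - reserved) * allocation_ratio, which is why this 8-vCPU host can expose 128 schedulable VCPU while the raw hypervisor view above says free_vcpus=7. A worked version of that arithmetic, using the values from the log:

```python
# Values copied from the inventory and allocation lines logged above.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
}
allocated = {"VCPU": 1, "MEMORY_MB": 512, "DISK_GB": 2}  # instance 96a47a1c...

for rc, inv in inventory.items():
    # Placement's usable capacity per resource class.
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: capacity={capacity:g} used={allocated[rc]} "
          f"free={capacity - allocated[rc]:g}")
```

Running this prints VCPU capacity 128 (127 free), MEMORY_MB capacity 15226, and DISK_GB capacity 40, matching the inventory nova pushed to placement.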
Dec 5 04:54:08 localhost podman[286117]: 2025-12-05 09:54:08.3739539 +0000 UTC m=+0.258808203 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Dec 5 04:54:08 localhost podman[286117]: 2025-12-05 09:54:08.391494456 +0000 UTC m=+0.276348779 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.) Dec 5 04:54:08 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:54:08 localhost nova_compute[280228]: 2025-12-05 09:54:08.856 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:54:11 localhost nova_compute[280228]: 2025-12-05 09:54:11.222 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:54:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
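The `Updating instance_info_cache with network_info` entry a few lines back carries the full VIF description for instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7, including its fixed address 192.168.0.214 and floating address 192.168.122.20. A minimal sketch of walking that structure to list fixed-to-floating mappings; the dict below is a trimmed copy of the logged entry:

```python
# Trimmed from the network_info cached by nova in the log above.
network_info = [{
    "id": "c2f95d81-2317-46b9-8146-596eac8f9acb",
    "address": "fa:16:3e:04:e6:3a",
    "network": {
        "label": "private",
        "subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{
                "address": "192.168.0.214",
                "type": "fixed",
                "floating_ips": [{"address": "192.168.122.20",
                                  "type": "floating"}],
            }],
        }],
    },
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], "->", ", ".join(floats) or "-")
```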
Dec 5 04:54:11 localhost podman[286223]: 2025-12-05 09:54:11.39606878 +0000 UTC m=+0.074624940 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd) Dec 5 04:54:11 localhost podman[286223]: 2025-12-05 09:54:11.432906684 +0000 UTC m=+0.111462834 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 04:54:11 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:54:11 localhost podman[286270]: Dec 5 04:54:11 localhost podman[286270]: 2025-12-05 09:54:11.594648535 +0000 UTC m=+0.064874687 container create b8907491bb7b1418b47f4e398f53ed390b98d740f886f7bb76bf7d17fa426255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_kalam, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_BRANCH=main) Dec 5 04:54:11 localhost systemd[1]: Started libpod-conmon-b8907491bb7b1418b47f4e398f53ed390b98d740f886f7bb76bf7d17fa426255.scope. Dec 5 04:54:11 localhost systemd[1]: Started libcrun container. 
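The sequence that follows (image pull, init, start, attach, died, remove, all within roughly a tenth of a second) is a short-lived utility container: podman auto-named it `laughing_kalam`, it ran from the rhceph image and printed `167 167` before being removed. That output is plausibly a uid/gid probe of the kind cephadm-style tooling performs (167 is the conventional ceph user and group id on RHEL), though that is an inference, not something the log states. A sketch for reconstructing such lifecycles from the podman events; the log path is an assumption:

```python
import re
from datetime import datetime

# Sketch only: pair create/died events per container id and compute lifetime.
EVENT_RE = re.compile(
    r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC \S+ container "
    r"(create|init|start|attach|died|remove) ([0-9a-f]{64})"
)

events = {}
with open("/var/log/messages") as log:   # path is an assumption
    for line in log:
        for stamp, event, cid in EVENT_RE.findall(line):
            # fromisoformat handles at most 6 fractional digits on older
            # Pythons; podman prints more, so trim.
            head, frac = stamp.split(".")
            ts = datetime.fromisoformat(f"{head}.{frac[:6]}")
            events.setdefault(cid, {})[event] = ts

for cid, evs in events.items():
    if {"create", "died"} <= evs.keys():
        lifetime = (evs["died"] - evs["create"]).total_seconds()
        print(cid[:12], "lived", f"{lifetime:.3f}", "s")
```

For the b8907491bb7b container below, create at 09:54:11.594 and died at 09:54:11.691 give a lifetime of about 0.096 s.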
Dec 5 04:54:11 localhost podman[286270]: 2025-12-05 09:54:11.561902353 +0000 UTC m=+0.032128505 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:54:11 localhost podman[286270]: 2025-12-05 09:54:11.673107758 +0000 UTC m=+0.143333900 container init b8907491bb7b1418b47f4e398f53ed390b98d740f886f7bb76bf7d17fa426255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_kalam, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, version=7, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218)
Dec 5 04:54:11 localhost podman[286270]: 2025-12-05 09:54:11.686807499 +0000 UTC m=+0.157033641 container start b8907491bb7b1418b47f4e398f53ed390b98d740f886f7bb76bf7d17fa426255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_kalam, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public)
Dec 5 04:54:11 localhost podman[286270]: 2025-12-05 09:54:11.686998624 +0000 UTC m=+0.157224846 container attach b8907491bb7b1418b47f4e398f53ed390b98d740f886f7bb76bf7d17fa426255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_kalam, maintainer=Guillaume Abrioux , vcs-type=git, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.41.4, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container)
Dec 5 04:54:11 localhost laughing_kalam[286286]: 167 167
Dec 5 04:54:11 localhost systemd[1]: libpod-b8907491bb7b1418b47f4e398f53ed390b98d740f886f7bb76bf7d17fa426255.scope: Deactivated successfully.
Dec 5 04:54:11 localhost podman[286270]: 2025-12-05 09:54:11.691083367 +0000 UTC m=+0.161309509 container died b8907491bb7b1418b47f4e398f53ed390b98d740f886f7bb76bf7d17fa426255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_kalam, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.expose-services=, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, distribution-scope=public, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, release=1763362218, RELEASE=main)
Dec 5 04:54:11 localhost podman[286291]: 2025-12-05 09:54:11.792631193 +0000 UTC m=+0.092786444 container remove b8907491bb7b1418b47f4e398f53ed390b98d740f886f7bb76bf7d17fa426255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_kalam, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, distribution-scope=public, name=rhceph, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container)
Dec 5 04:54:11 localhost systemd[1]: libpod-conmon-b8907491bb7b1418b47f4e398f53ed390b98d740f886f7bb76bf7d17fa426255.scope: Deactivated successfully.
Dec 5 04:54:11 localhost systemd[1]: Reloading.
Dec 5 04:54:11 localhost systemd-rc-local-generator[286336]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:54:11 localhost systemd-sysv-generator[286339]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: tmp-crun.CLNSHL.mount: Deactivated successfully.
Dec 5 04:54:12 localhost systemd[1]: var-lib-containers-storage-overlay-ba75e083f7d412890916efd8287b5aec6c5f66572c3b39ead279c109eb24577d-merged.mount: Deactivated successfully.
Dec 5 04:54:12 localhost systemd[1]: Reloading.
Dec 5 04:54:12 localhost systemd-rc-local-generator[286376]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 5 04:54:12 localhost systemd-sysv-generator[286379]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
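Two warnings repeat on every daemon-reload above and are worth separating from real failures: Type=notify-reload is only understood by systemd 253 and later, so an older systemd ignores that line and falls back to the unit's default service type, and MemoryLimit= is the deprecated cgroup-v1 spelling that systemd maps onto MemoryMax= on cgroup v2 (the RHEL 9 default). A small sketch for confirming both points on a host like this one; the unit names are taken from the log, everything else is standard systemctl:

    import subprocess

    # The deprecated MemoryLimit= in insights-client-boot.service is translated
    # to MemoryMax= on cgroup v2, so the effective limit shows up under the new name.
    out = subprocess.check_output(
        ["systemctl", "show", "insights-client-boot.service", "--property=MemoryMax"],
        text=True)
    print(out.strip())

    # Type=notify-reload parses only on systemd >= 253; printing the running
    # version confirms whether the virt*d warnings are the expected fallback.
    print(subprocess.check_output(["systemctl", "--version"], text=True).splitlines()[0])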
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 5 04:54:12 localhost systemd[1]: Starting Ceph mgr.np0005546419.zhsnqq for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b...
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.947 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.949 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.978 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.980 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.982 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e8820aa2-43dc-4158-a976-3be7b2cdadf7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:54:12.949414', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a71812e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.123722261, 'message_signature': '581d46fcedda2577fba21e6441c456290c15281897bdfe6cbef946bed98506c6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:54:12.949414', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a719808-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.123722261, 'message_signature': 'bd4b8a40311aa2435107e850f985eb0bed278cfee3fc6c6e71f540414be99923'}]}, 'timestamp': '2025-12-05 09:54:12.980766', '_unique_id': '997b7a345e004e4f825ad099d6785aca'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.982 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
    yield
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
    return retry_over_time(
  File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
    return fun(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
    self._connection = self._establish_connection()
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
    conn = self.transport.establish_connection()
  File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
    conn.connect()
  File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
    self.transport.connect()
  File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
    self._connect(self.host, self.port, self.connect_timeout)
  File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
    self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
    self.transport._send_notification(target, ctxt, message,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
    self._driver.send_notification(target, ctxt, message, version,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
    return self._send(target, ctxt, message,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
    with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
    return rpc_common.ConnectionContext(self._connection_pool,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
    self.connection = connection_pool.get(retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
    return self.create(retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
    return self.connection_cls(self.conf, self.url, purpose, retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
    self.ensure_connection()
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
    self.connection.ensure_connection(
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
    self._ensure_connection(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
    return retry_over_time(
  File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
    raise ConnectionError(str(exc)) from exc
kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.984 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.989 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
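Every sample in this polling cycle dies the same way: the TCP connect to the AMQP broker is refused before any AMQP handshake happens, so oslo.messaging drops the notification instead of queueing it. A connectivity probe using the same kombu API that appears in the traceback; the broker URL below is a placeholder, the real transport_url lives in this node's ceilometer.conf:

    from kombu import Connection

    # Placeholder URL; substitute the transport_url from ceilometer.conf.
    BROKER_URL = "amqp://user:password@rabbitmq-host:5672//"

    conn = Connection(BROKER_URL, connect_timeout=5)
    try:
        # Walks the same path as the traceback above:
        # transport.establish_connection() -> socket connect().
        conn.connect()
        print("broker reachable")
    except Exception as exc:
        # Surfaces as ConnectionRefusedError or kombu.exceptions.OperationalError
        # depending on kombu version (assumption), so catch broadly here.
        print(f"broker unreachable: {exc!r}")
    finally:
        conn.release()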
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.991 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4b2e1560-69be-4cd0-906d-6d4b0c695d15', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:54:12.984716', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '5a72fc8e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.159063732, 'message_signature': '8fb89a4bd8e88024a6293145b5ac42f6b92cde78ca30c11dbc94f3e9ca6aa0a1'}]}, 'timestamp': '2025-12-05 09:54:12.990058', '_unique_id': 'd3d3c0b87f804222b62315525200fb8d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.991 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): [stack frames identical to the 09:54:12.982 traceback above] kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.992 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.992 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
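The same refused-connection traceback now repeats once per pollster (disk.device.write.requests, network.outgoing.packets.error, network.outgoing.packets.drop, and so on), so the useful signal is how many samples each counter lost, not the stack itself. A small tally over a saved copy of this log; the filename is hypothetical:

    import re
    from collections import Counter

    drops = Counter()
    counter_name = re.compile(r"'counter_name': '([\w.]+)'")

    # Hypothetical export of this journal, e.g. from `journalctl > compute.log`.
    with open("compute.log", encoding="utf-8") as fh:
        for line in fh:
            if "Could not send notification" in line:
                drops.update(counter_name.findall(line))

    for name, n in drops.most_common():
        print(f"{n:6d}  {name}")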
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.994 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '22e600ab-b818-4bc0-97e4-9aadef6ed592', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:54:12.992657', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '5a737a9c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.159063732, 'message_signature': '6ab7549d9e800f5699e8302c196d91142a0344212f63e66d1bc9a77c1d56f2cb'}]}, 'timestamp': '2025-12-05 09:54:12.993122', '_unique_id': '1faf93bdc71b412aaed862738a75488c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.994 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): [stack frames identical to the 09:54:12.982 traceback above] kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.995 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.995 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
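Note that the Payload={...} blobs in these records are Python dict reprs (single quotes, None), not JSON, so json.loads rejects them while ast.literal_eval parses them safely. A sketch of pulling the sample fields out of one record; the line_tail stub below stands in for the full dict body of a real line from this log:

    import ast

    # For a full record line, the dict body can be cut out first, e.g.:
    #   blob = record.split("Payload=", 1)[1].rsplit(": kombu.exceptions", 1)[0]
    line_tail = ("{'message_id': '8988fa73-9f3f-47fc-a31e-7a76401560a8', "
                 "'priority': 'SAMPLE', 'payload': {'samples': [{"
                 "'counter_name': 'network.incoming.packets.error', "
                 "'counter_volume': 0, "
                 "'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23'}]}}")

    payload = ast.literal_eval(line_tail)
    for sample in payload["payload"]["samples"]:
        print(sample["counter_name"], sample["counter_volume"], sample["resource_id"])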
Payload={'message_id': '8988fa73-9f3f-47fc-a31e-7a76401560a8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:54:12.995354', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '5a73e3ec-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.159063732, 'message_signature': '534f6c1cd0d874099ad78b52af48792c3f84833cf27d265982bf52e3aebc27a1'}]}, 'timestamp': '2025-12-05 09:54:12.995818', '_unique_id': 'e4e828ac36854861857067abd97a47b1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.996 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.997 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.998 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.998 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '35c04367-7ae8-42ed-b2bf-cbd68c897656', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:54:12.998045', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a744df0-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.123722261, 'message_signature': 'f48220891cb17a42ba55d46dd7eac9864022913b4ccae142d7ca320f7fab786b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:54:12.998045', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a745e1c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.123722261, 'message_signature': '3a5bf974523fcd0561f989d0dbd4adaf4a3aae21fcc1f12e61de648c0b7492a3'}]}, 'timestamp': '2025-12-05 09:54:12.998914', '_unique_id': 'c20a8e17516a47678b1bcddba20fcde6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:12.999 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.000 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.001 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '75ac57be-b012-48d5-ace7-d6cb5ffa2483', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:54:13.001084', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '5a74c4ec-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.159063732, 'message_signature': '9c2b2573173e81b6845e6c9e2d99290cc19247811d3ca29ee6362e9d1e640040'}]}, 'timestamp': '2025-12-05 09:54:13.001575', '_unique_id': 'f5761e8b9d8a4d86ae831e6e44c848c7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:54:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.003 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.004 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.005 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '34537804-a5ac-42fd-923e-170931026db6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:54:13.005028', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '5a75662c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.159063732, 'message_signature': '0d4c69d6e5313eebb6f46ed40f88aac0df4c9e31ec40689ac5d7c4082f7c2c97'}]}, 'timestamp': '2025-12-05 09:54:13.005847', '_unique_id': '7db675e890664cb8a3c8041d2e10cdd0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.007 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.008 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.008 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.009 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'cf47e713-6e3d-4b99-ad96-1a72aceaa6f9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:54:13.008906', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a75f768-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.123722261, 'message_signature': '0677d25e957edf405c774ea3011829330283a4413c8713ed44c15e7c590750ce'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:54:13.008906', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a760b54-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.123722261, 'message_signature': 'a7bb78fd2c814b87c90a09ebcb3a70fe3c6aa7b3a2f05569453781e771b6bfb5'}]}, 'timestamp': '2025-12-05 09:54:13.009960', '_unique_id': '587588540f934b2fbc8ca38b4741ee9c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.011 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.012 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.012 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'bbdaec4e-0652-4f36-b748-4f16f0eea2a0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:54:13.012779', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '5a7692e0-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.159063732, 'message_signature': '76734cf8101c1ebacbea7aa988639d3ac700990576c8aac973b808d24940ccde'}]}, 'timestamp': '2025-12-05 09:54:13.013598', '_unique_id': '3c73f3feee15409ea32e86cd0d708141'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.014 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.016 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.030 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.032 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'bb1ea835-8e21-492e-9e2d-da1928a8596b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:54:13.016736', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a7930c2-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.191122303, 'message_signature': '8144692b43d3b5c1003b9e0f90babc387bc81f88a13c05f2177db77eca2dfbbc'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:54:13.016736', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a794670-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.191122303, 'message_signature': '795776327f06a6a4ce9d3f76400166fc82b851c42a64c6ec334ff89603ab9947'}]}, 'timestamp': '2025-12-05 09:54:13.031112', '_unique_id': '28478e540ad74b36aadf943e437a563e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.033 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.034 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 04:54:13 localhost podman[286436]:
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.053 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.055 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'f05e1b7e-8149-4f21-b165-2ae5ce9fbdf0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:54:13.034304', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '5a7cca20-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.22770209, 'message_signature': '9e6ed2b450e2964c15c117e3e9a5a2d0404331a23a5bf77006fff7ae240af3c6'}]}, 'timestamp': '2025-12-05 09:54:13.054296', '_unique_id': '6df4dc40bd724d429dbb1786c29fb533'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.057 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.057 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.058 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:13 localhost podman[286436]: 2025-12-05 09:54:13.060670484 +0000 UTC m=+0.099698851 container create 1c2fb115c09c762b9a6e73a481e770fdbdf11967839ecde831ebdacc2e020a63 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, name=rhceph, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.059 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '3e5e57b9-fb07-4b4a-bed8-89e0ea81abae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:54:13.057690', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a7d6a8e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.191122303, 'message_signature': '0bb8ba923a0a940adba644758fe8dd468b31d1a10700257fa084b94e2cbc4d06'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:54:13.057690', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a7d80be-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.191122303, 'message_signature': 'aaea10eb512fd40fa72140180cda9e13bf68c9265e846df36f587fa9fb690ed5'}]}, 'timestamp': '2025-12-05 09:54:13.058812', '_unique_id': 'f76915178d824c5d8a2c22ccdf611be7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.061 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.061 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '9067e333-bff3-40ce-b32a-3ba0ffb36131', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:54:13.061639', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '5a7e0714-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.159063732, 'message_signature': '43388391da3a9be1c8b9c55591160b5de315acbd5e6667a1ff46cd7e0e73db84'}]}, 'timestamp': '2025-12-05 09:54:13.062393', '_unique_id': '988237744fd748a4b9ed6ad8065bbe52'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:54:13 localhost
ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.063 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.073 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.073 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 11770000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.076 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'add5e0e8-12f3-4126-b5ad-0b9dfd38c9ae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 11770000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:54:13.073567', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '5a7fdaee-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.22770209, 'message_signature': 'fd928fc42aacbbeaa41c495e86120335287822a1b6b04077082e4a0ce9a684f6'}]}, 'timestamp': '2025-12-05 09:54:13.074467', '_unique_id': 'df0d214f5bd240dd9046ad8e805a3621'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.078 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.078 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.079 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'f6fd5d29-1578-42c4-aa47-ce8a88dadceb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:54:13.078145', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '5a80889a-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.159063732, 'message_signature': 'bf47e8dfe2269ad9db155b1b6f24a06fa123993a6a8a0106e28cbd1f5d5037d3'}]}, 'timestamp': '2025-12-05 09:54:13.078689', '_unique_id': '3f3a65b983db4022abd88aba03d7da09'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.080 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.081 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.081 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512
_stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.083 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '34af9e7e-0d3b-4a86-b603-ad7ac2a8aba8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:54:13.081095', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a80fae6-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.123722261, 'message_signature': '114b1bdacc8201588a92c13016b01702c2716b528a1872f12a9a9167a6a6996e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:54:13.081095', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a810bbc-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.123722261, 'message_signature': '4011b75cde25cd37c31c4a9996bf68cd378d4bd616ab34fafc44af7e18c02639'}]}, 'timestamp': '2025-12-05 09:54:13.082005', '_unique_id': '1d3697bedcf24728b77f610fd79745d7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.084 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.084 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.085 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'd57ac163-1b94-4f91-89f9-6bec96b3c420', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:54:13.084337', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '5a81787c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.159063732, 'message_signature': '7fc99f1d628e2293a9c61f1f555dc23f8885599e71ddb3e589e348f8263895eb'}]}, 'timestamp': '2025-12-05 09:54:13.084818', '_unique_id': '9d46ec1358e044c1a25a156080b032e6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.086 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.087 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.087 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.087 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '9b59b524-0dfd-4fc2-bfd0-333259fa6e11', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:54:13.087170', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a81e7a8-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.123722261, 'message_signature': '4ca4a22c473cb22ee6e6de36cf3488490113de4b9814101030bda1ea80d1c7ce'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:54:13.087170', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a81f766-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.123722261, 'message_signature': '065b20a7b65a615ebcccd8272f0cead667330707668e177b0295894631c094bd'}]}, 'timestamp': '2025-12-05 09:54:13.088032', '_unique_id': '4eb86a416d5249caab6a609276d0940c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.088 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.090 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.090 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5cbb505c-04d7-4411-bbd9-ba34318ef1dc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:54:13.090242', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '5a826048-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.159063732, 'message_signature': '3b98d265aec016de499db2e4e6d3ac083341b6d32a419df5a4dce532025a0bfc'}]}, 'timestamp': '2025-12-05 09:54:13.090750', '_unique_id': '0d70c152225e40179bb646f9c29bd5ab'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.091 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.092 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.093 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.093 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.093 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1df9d2c7-e623-4b11-8146-9c6cac0e86c8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:54:13.093165', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a82d2a8-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.191122303, 'message_signature': '15d3c3ea39514f3fac865759251460e3f214ca7a76f18d13a19cad9906f2cfff'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:54:13.093165', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a82e34c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.191122303, 'message_signature': 'c07176aaca78c3a27b7aa330916a95a6d1aff3496cea1867b89cb058756b968b'}]}, 'timestamp': '2025-12-05 09:54:13.094072', '_unique_id': '9d7f09e01c2241bd8204af3e23670245'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.095 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.096 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.096 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.096 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f1730e59-5ccd-4b42-b0f9-65dfdb6f8487', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:54:13.096437', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5a83519c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.123722261, 'message_signature': '0d415c470ebed4d7a09f5b649f58c620bcbebb71926a7fe2c983e85e17c79f1c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:54:13.096437', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5a83620e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11568.123722261, 'message_signature': 'c1a2b7f221a864b7f78072c1f7aacbb17d04d23a6a34402d9ef3ebe764a37a09'}]}, 'timestamp': '2025-12-05 09:54:13.097356', '_unique_id': '22a5b990926a46ab80b23ce2275b83e6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.098 12 ERROR oslo_messaging.notify.messaging Dec 5 04:54:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:54:13.099 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:54:13 localhost systemd[1]: tmp-crun.cWXDey.mount: Deactivated successfully. 
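Every chained traceback in the block above bottoms out in the same frame, self.sock.connect(sa) in amqp/transport.py line 184, raising ConnectionRefusedError: [Errno 111]: the TCP connect to the RabbitMQ broker is actively refused before any AMQP handshake, so each notification batch the polling agent emits fails at connection setup while the interleaved DEBUG/INFO lines show polling itself continuing normally. A minimal sketch of the same TCP-level check, run outside the agent, looks like this; the broker hostname below is an assumption (the agent's transport_url is not shown in this excerpt and would come from ceilometer.conf), and 5672 is only RabbitMQ's default AMQP port:

    # Minimal sketch, not from the log: repeat the bare TCP connect that
    # amqp/transport.py performs in _connect(), to confirm the [Errno 111]
    # independently of the agent.
    import socket

    BROKER_HOST = "controller-0.internalapi.localdomain"  # hypothetical; take from transport_url
    BROKER_PORT = 5672                                    # RabbitMQ default AMQP port

    try:
        with socket.create_connection((BROKER_HOST, BROKER_PORT), timeout=5):
            print("TCP connect OK: something is listening on the broker port")
    except ConnectionRefusedError:
        # Same errno 111 as the tracebacks above: the host answered, but no
        # process is accepting connections on that port.
        print("Connection refused: broker down or not bound to this address")
    except OSError as exc:
        # A timeout or 'no route to host' here would point at routing or
        # firewalling rather than a stopped broker.
        print(f"Connect failed for a different reason: {exc}")

Running the same probe through kombu itself, e.g. Connection(transport_url).ensure_connection(max_retries=1), re-raises the refusal as the kombu.exceptions.OperationalError recorded above (that wrapping is visible in the _reraise_as_library_errors frame), which is why the identical error repeats for every sample batch until the broker is reachable again.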
Dec 5 04:54:13 localhost podman[286436]: 2025-12-05 09:54:13.017949373 +0000 UTC m=+0.056977750 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:54:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8cf671f6826c491ace88f8f54052f26bfcb8675cfec957000b3902bc31e8c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 5 04:54:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8cf671f6826c491ace88f8f54052f26bfcb8675cfec957000b3902bc31e8c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 5 04:54:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8cf671f6826c491ace88f8f54052f26bfcb8675cfec957000b3902bc31e8c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 5 04:54:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8cf671f6826c491ace88f8f54052f26bfcb8675cfec957000b3902bc31e8c9/merged/var/lib/ceph/mgr/ceph-np0005546419.zhsnqq supports timestamps until 2038 (0x7fffffff)
Dec 5 04:54:13 localhost podman[286436]: 2025-12-05 09:54:13.128579931 +0000 UTC m=+0.167608298 container init 1c2fb115c09c762b9a6e73a481e770fdbdf11967839ecde831ebdacc2e020a63 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, vcs-type=git, name=rhceph, GIT_BRANCH=main, io.openshift.expose-services=)
Dec 5 04:54:13 localhost podman[286436]: 2025-12-05 09:54:13.137115117 +0000 UTC m=+0.176143484 container start 1c2fb115c09c762b9a6e73a481e770fdbdf11967839ecde831ebdacc2e020a63 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 5 04:54:13 localhost bash[286436]: 1c2fb115c09c762b9a6e73a481e770fdbdf11967839ecde831ebdacc2e020a63
Dec 5 04:54:13 localhost systemd[1]: Started Ceph mgr.np0005546419.zhsnqq for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b.
Dec 5 04:54:13 localhost ceph-mgr[286454]: set uid:gid to 167:167 (ceph:ceph)
Dec 5 04:54:13 localhost ceph-mgr[286454]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Dec 5 04:54:13 localhost ceph-mgr[286454]: pidfile_write: ignore empty --pid-file
Dec 5 04:54:13 localhost ceph-mgr[286454]: mgr[py] Loading python module 'alerts'
Dec 5 04:54:13 localhost ceph-mgr[286454]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 5 04:54:13 localhost ceph-mgr[286454]: mgr[py] Loading python module 'balancer'
Dec 5 04:54:13 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:13.324+0000 7f7d54ee8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 5 04:54:13 localhost ceph-mgr[286454]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 5 04:54:13 localhost ceph-mgr[286454]: mgr[py] Loading python module 'cephadm'
Dec 5 04:54:13 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:13.393+0000 7f7d54ee8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 5 04:54:13 localhost nova_compute[280228]: 2025-12-05 09:54:13.860 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:13 localhost ceph-mgr[286454]: mgr[py] Loading python module 'crash'
Dec 5 04:54:14 localhost ceph-mgr[286454]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 5 04:54:14 localhost ceph-mgr[286454]: mgr[py] Loading python module 'dashboard'
Dec 5 04:54:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:14.050+0000 7f7d54ee8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 5 04:54:14 localhost ceph-mgr[286454]: mgr[py] Loading python module 'devicehealth'
Dec 5 04:54:14 localhost ceph-mgr[286454]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 5 04:54:14 localhost ceph-mgr[286454]: mgr[py] Loading python module 'diskprediction_local'
Dec 5 04:54:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:14.598+0000 7f7d54ee8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 5 04:54:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 5 04:54:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 5 04:54:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: from numpy import show_config as show_numpy_config
Dec 5 04:54:14 localhost ceph-mgr[286454]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 5 04:54:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:14.746+0000 7f7d54ee8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 5 04:54:14 localhost ceph-mgr[286454]: mgr[py] Loading python module 'influx'
Dec 5 04:54:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:14.806+0000 7f7d54ee8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 5 04:54:14 localhost ceph-mgr[286454]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 5 04:54:14 localhost ceph-mgr[286454]: mgr[py] Loading python module 'insights'
Dec 5 04:54:14 localhost ceph-mgr[286454]: mgr[py] Loading python module 'iostat'
Dec 5 04:54:14 localhost ceph-mgr[286454]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 5 04:54:14 localhost ceph-mgr[286454]: mgr[py] Loading python module 'k8sevents'
Dec 5 04:54:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:14.921+0000 7f7d54ee8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 5 04:54:15 localhost ceph-mgr[286454]: mgr[py] Loading python module 'localpool'
Dec 5 04:54:15 localhost ceph-mgr[286454]: mgr[py] Loading python module 'mds_autoscaler'
Dec 5 04:54:15 localhost ceph-mgr[286454]: mgr[py] Loading python module 'mirroring'
Dec 5 04:54:15 localhost ceph-mgr[286454]: mgr[py] Loading python module 'nfs'
Dec 5 04:54:15 localhost ceph-mgr[286454]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 5 04:54:15 localhost ceph-mgr[286454]: mgr[py] Loading python module 'orchestrator'
Dec 5 04:54:15 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:15.689+0000 7f7d54ee8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 5 04:54:15 localhost ceph-mgr[286454]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 5 04:54:15 localhost ceph-mgr[286454]: mgr[py] Loading python module 'osd_perf_query'
Dec 5 04:54:15 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:15.836+0000 7f7d54ee8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 5 04:54:15 localhost ceph-mgr[286454]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 5 04:54:15 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:15.900+0000 7f7d54ee8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 5 04:54:15 localhost ceph-mgr[286454]: mgr[py] Loading python module 'osd_support'
Dec 5 04:54:15 localhost ceph-mgr[286454]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 5 04:54:15 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:15.956+0000 7f7d54ee8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 5 04:54:15 localhost ceph-mgr[286454]: mgr[py] Loading python module 'pg_autoscaler'
Dec 5 04:54:16 localhost ceph-mgr[286454]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 5 04:54:16 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:16.023+0000 7f7d54ee8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 5 04:54:16 localhost ceph-mgr[286454]: mgr[py] Loading python module 'progress'
Dec 5 04:54:16 localhost ceph-mgr[286454]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 5 04:54:16 localhost ceph-mgr[286454]: mgr[py] Loading python module 'prometheus'
Dec 5 04:54:16 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:16.083+0000 7f7d54ee8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 5 04:54:16 localhost nova_compute[280228]: 2025-12-05 09:54:16.262 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:16 localhost ceph-mgr[286454]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 5 04:54:16 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:16.390+0000 7f7d54ee8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 5 04:54:16 localhost ceph-mgr[286454]: mgr[py] Loading python module 'rbd_support'
Dec 5 04:54:16 localhost ceph-mgr[286454]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 5 04:54:16 localhost ceph-mgr[286454]: mgr[py] Loading python module 'restful'
Dec 5 04:54:16 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:16.481+0000 7f7d54ee8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 5 04:54:16 localhost ceph-mgr[286454]: mgr[py] Loading python module 'rgw'
Dec 5 04:54:16 localhost ceph-mgr[286454]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 5 04:54:16 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:16.817+0000 7f7d54ee8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 5 04:54:16 localhost ceph-mgr[286454]: mgr[py] Loading python module 'rook'
Dec 5 04:54:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 04:54:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:54:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
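Each "Module X has missing NOTIFY_TYPES member" line above is ceph-mgr warning that a Python module never declared which cluster notifications it consumes. A sketch of the declaration the reef-era mgr asks for, assuming the in-process mgr_module API (this import only resolves inside a running ceph-mgr):

    from mgr_module import MgrModule, NotifyType  # only importable inside ceph-mgr

    class Module(MgrModule):
        # Declaring the notifications this module handles is what silences the
        # "missing NOTIFY_TYPES member" warning logged above.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            # Called by the mgr for each subscribed notification type.
            if notify_type == NotifyType.mon_map:
                self.log.debug("mon map changed: %s", notify_id)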
Dec 5 04:54:17 localhost podman[286484]: 2025-12-05 09:54:17.209192107 +0000 UTC m=+0.090405463 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Module rook has missing NOTIFY_TYPES member Dec 5 04:54:17 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:17.245+0000 7f7d54ee8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Loading python module 'selftest' Dec 5 04:54:17 localhost podman[286486]: 2025-12-05 09:54:17.273443304 +0000 UTC m=+0.147341041 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 5 04:54:17 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:17.306+0000 7f7d54ee8140 -1 
mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Loading python module 'snap_schedule' Dec 5 04:54:17 localhost podman[286486]: 2025-12-05 09:54:17.30869077 +0000 UTC m=+0.182588477 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 5 04:54:17 localhost systemd[1]: tmp-crun.QyKtVO.mount: Deactivated successfully. Dec 5 04:54:17 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
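The health_status=healthy / exec_died pairs above are podman's transient healthcheck runs, launched by the "/usr/bin/podman healthcheck run <id>" systemd units seen a few lines earlier. The same check can be invoked by hand; a sketch using a container name from this log (exit status 0 means the check passed):

    import subprocess

    # One-shot invocation of the container's configured healthcheck, mirroring
    # the transient "podman healthcheck run" units in the log.
    res = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True,
    )
    print("healthy" if res.returncode == 0 else f"unhealthy: {res.stdout or res.stderr}")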
Dec 5 04:54:17 localhost podman[286485]: 2025-12-05 09:54:17.333040341 +0000 UTC m=+0.210318649 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Dec 5 04:54:17 localhost podman[286484]: 2025-12-05 09:54:17.341907237 +0000 UTC m=+0.223120663 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 04:54:17 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
Dec 5 04:54:17 localhost podman[286485]: 2025-12-05 09:54:17.365578936 +0000 UTC m=+0.242857204 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true) Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Loading python module 'stats' Dec 5 04:54:17 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Loading python module 'status' Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Module status has missing NOTIFY_TYPES member Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Loading python module 'telegraf' Dec 5 04:54:17 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:17.499+0000 7f7d54ee8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Loading python module 'telemetry' Dec 5 04:54:17 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:17.557+0000 7f7d54ee8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Loading python module 'test_orchestrator' Dec 5 04:54:17 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:17.690+0000 7f7d54ee8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Dec 5 04:54:17 localhost ceph-mgr[286454]: mgr[py] Loading python module 'volumes' Dec 5 04:54:17 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:17.840+0000 7f7d54ee8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Dec 5 04:54:18 localhost ceph-mgr[286454]: mgr[py] Module volumes has missing NOTIFY_TYPES member Dec 5 04:54:18 localhost ceph-mgr[286454]: mgr[py] Loading python module 'zabbix' Dec 5 04:54:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:18.038+0000 7f7d54ee8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Dec 5 04:54:18 localhost ceph-mgr[286454]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Dec 5 04:54:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:54:18.101+0000 7f7d54ee8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Dec 5 04:54:18 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x557f9a90b1e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0 Dec 5 04:54:18 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.103:6800/1502489571 Dec 5 04:54:18 localhost nova_compute[280228]: 2025-12-05 09:54:18.863 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:54:19 localhost podman[239519]: time="2025-12-05T09:54:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:54:19 localhost podman[239519]: @ - - [05/Dec/2025:09:54:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152020 "" "Go-http-client/1.1" Dec 5 04:54:19 localhost podman[239519]: @ - - [05/Dec/2025:09:54:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18208 "" "Go-http-client/1.1" Dec 5 04:54:20 localhost podman[286667]: 2025-12-05 09:54:20.169964826 +0000 UTC m=+0.241697641 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, release=1763362218, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public) Dec 5 04:54:20 localhost podman[286667]: 2025-12-05 09:54:20.274237823 +0000 UTC m=+0.345970698 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, ceph=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, release=1763362218, distribution-scope=public, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 5 04:54:21 localhost nova_compute[280228]: 2025-12-05 09:54:21.319 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:54:23 localhost nova_compute[280228]: 2025-12-05 09:54:23.864 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:54:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:54:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:54:25 localhost systemd[1]: tmp-crun.NcB38v.mount: Deactivated successfully. 
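The "GET /v4.9.3/libpod/containers/json" entries above show the podman API service answering REST queries over its unix socket; the podman_exporter config earlier in the log points CONTAINER_HOST at unix:///run/podman/podman.sock. A stdlib-only sketch of the same query, assuming that socket path:

    import json
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/podman/podman.sock")
    # HTTP/1.0 keeps the reply un-chunked, so the body can be split off naively.
    s.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n\r\n")
    raw = b""
    while chunk := s.recv(65536):
        raw += chunk
    containers = json.loads(raw.split(b"\r\n\r\n", 1)[1])
    print(len(containers), "containers")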
Dec 5 04:54:25 localhost podman[287305]: 2025-12-05 09:54:25.461378245 +0000 UTC m=+0.102191096 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 5 04:54:25 localhost podman[287305]: 2025-12-05 09:54:25.498958503 +0000 UTC m=+0.139771304 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible) Dec 5 04:54:25 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
Dec 5 04:54:25 localhost podman[287306]: 2025-12-05 09:54:25.562158978 +0000 UTC m=+0.202474754 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:54:25 localhost podman[287306]: 2025-12-05 09:54:25.574537589 +0000 UTC m=+0.214853395 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:54:25 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:54:26 localhost nova_compute[280228]: 2025-12-05 09:54:26.354 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:54:26 localhost systemd[1]: tmp-crun.YTgxVz.mount: Deactivated successfully. 
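node_exporter above listens on host port 9100 (the 'ports': ['9100:9100'] mapping in its config_data), with most collectors disabled and a systemd unit-include pattern limited to the edpm/ovs/virt/rsyslog services. A quick scrape to confirm the endpoint answers, assuming local access to that port:

    import urllib.request

    # Port 9100 comes from the container's 'ports' mapping above.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines()[:5]:
            print(line)  # first few lines of Prometheus exposition text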
Dec 5 04:54:27 localhost openstack_network_exporter[241668]: ERROR 09:54:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 04:54:27 localhost openstack_network_exporter[241668]: ERROR 09:54:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:54:27 localhost openstack_network_exporter[241668]: ERROR 09:54:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:54:27 localhost openstack_network_exporter[241668]: ERROR 09:54:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 04:54:27 localhost openstack_network_exporter[241668]:
Dec 5 04:54:27 localhost openstack_network_exporter[241668]: ERROR 09:54:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 04:54:27 localhost openstack_network_exporter[241668]:
Dec 5 04:54:28 localhost nova_compute[280228]: 2025-12-05 09:54:28.869 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:31 localhost nova_compute[280228]: 2025-12-05 09:54:31.390 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:33 localhost nova_compute[280228]: 2025-12-05 09:54:33.974 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:35 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x557f9a90b1e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Dec 5 04:54:36 localhost nova_compute[280228]: 2025-12-05 09:54:36.451 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:38 localhost nova_compute[280228]: 2025-12-05 09:54:38.977 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 04:54:39 localhost systemd[1]: tmp-crun.dkXcZy.mount: Deactivated successfully.
Dec 5 04:54:39 localhost podman[287550]: 2025-12-05 09:54:39.21352094 +0000 UTC m=+0.098567877 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, version=9.6, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 04:54:39 localhost podman[287550]: 2025-12-05 09:54:39.228927093 +0000 UTC m=+0.113974000 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vendor=Red Hat, Inc., version=9.6, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec 5 04:54:39 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 04:54:41 localhost podman[287647]:
Dec 5 04:54:41 localhost podman[287647]: 2025-12-05 09:54:41.385789901 +0000 UTC m=+0.077590698 container create fa73f3078870f4a6fb0e24933cb3a769cd2c2923cdbfad98386679c070aa3d55 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_lovelace, maintainer=Guillaume Abrioux , ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.expose-services=, release=1763362218, build-date=2025-11-26T19:44:28Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 5 04:54:41 localhost systemd[1]: Started libpod-conmon-fa73f3078870f4a6fb0e24933cb3a769cd2c2923cdbfad98386679c070aa3d55.scope.
Dec 5 04:54:41 localhost systemd[1]: Started libcrun container.
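The openstack_network_exporter errors above mean no ovsdb-server or ovn-northd control sockets were found to answer appctl calls, and the dpif-netdev queries fail because no userspace datapath exists on this node. A sketch of the underlying socket probe, assuming the stock Open vSwitch runtime directory:

    import glob
    import subprocess

    # Each OVS daemon listens on /var/run/openvswitch/<daemon>.<pid>.ctl; the
    # exporter's "no control socket files found" means a glob like this is empty.
    socks = glob.glob("/var/run/openvswitch/ovsdb-server.*.ctl")
    if not socks:
        print("no ovsdb-server control socket found")
    else:
        out = subprocess.run(["ovs-appctl", "-t", socks[0], "version"],
                             capture_output=True, text=True)
        print(out.stdout.strip() or out.stderr.strip())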
Dec 5 04:54:41 localhost podman[287647]: 2025-12-05 09:54:41.352895574 +0000 UTC m=+0.044696421 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:54:41 localhost nova_compute[280228]: 2025-12-05 09:54:41.495 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:41 localhost podman[287647]: 2025-12-05 09:54:41.516873213 +0000 UTC m=+0.208674010 container init fa73f3078870f4a6fb0e24933cb3a769cd2c2923cdbfad98386679c070aa3d55 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_lovelace, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, release=1763362218, version=7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , name=rhceph, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 04:54:41 localhost agitated_lovelace[287662]: 167 167
Dec 5 04:54:41 localhost podman[287647]: 2025-12-05 09:54:41.546945294 +0000 UTC m=+0.238746091 container start fa73f3078870f4a6fb0e24933cb3a769cd2c2923cdbfad98386679c070aa3d55 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_lovelace, com.redhat.component=rhceph-container, version=7, GIT_BRANCH=main, vcs-type=git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 5 04:54:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:54:41 localhost systemd[1]: libpod-fa73f3078870f4a6fb0e24933cb3a769cd2c2923cdbfad98386679c070aa3d55.scope: Deactivated successfully.
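The short-lived "agitated_lovelace" container above exists only to print "167 167", the ceph uid:gid the mgr also reports ("set uid:gid to 167:167"). A comparable one-shot probe; the stat invocation is illustrative, not necessarily the exact command cephadm runs:

    import subprocess

    # Prints the uid/gid that owns ceph state inside the image, e.g. "167 167".
    res = subprocess.run(
        ["podman", "run", "--rm", "registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True,
    )
    print(res.stdout.strip())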
Dec 5 04:54:41 localhost podman[287647]: 2025-12-05 09:54:41.549609194 +0000 UTC m=+0.241410031 container attach fa73f3078870f4a6fb0e24933cb3a769cd2c2923cdbfad98386679c070aa3d55 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_lovelace, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph, RELEASE=main, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.) Dec 5 04:54:41 localhost podman[287647]: 2025-12-05 09:54:41.551684036 +0000 UTC m=+0.243484893 container died fa73f3078870f4a6fb0e24933cb3a769cd2c2923cdbfad98386679c070aa3d55 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_lovelace, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, RELEASE=main, architecture=x86_64, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:54:41 localhost podman[287668]: 2025-12-05 09:54:41.667657254 +0000 UTC m=+0.108444352 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:54:41 localhost podman[287667]: 2025-12-05 09:54:41.693032616 +0000 UTC m=+0.141453574 container remove fa73f3078870f4a6fb0e24933cb3a769cd2c2923cdbfad98386679c070aa3d55 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_lovelace, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, version=7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, release=1763362218) Dec 5 04:54:41 localhost systemd[1]: libpod-conmon-fa73f3078870f4a6fb0e24933cb3a769cd2c2923cdbfad98386679c070aa3d55.scope: Deactivated successfully. 
Dec 5 04:54:41 localhost podman[287668]: 2025-12-05 09:54:41.708580982 +0000 UTC m=+0.149368080 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:54:41 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. 
Dec 5 04:54:41 localhost podman[287703]: Dec 5 04:54:41 localhost podman[287703]: 2025-12-05 09:54:41.852500578 +0000 UTC m=+0.116856095 container create 143b893ce70cb80e9547f6047dd0001eb6913f4d97f8637163367c76cbd8d934 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_brattain, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, name=rhceph, distribution-scope=public, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 5 04:54:41 localhost podman[287703]: 2025-12-05 09:54:41.782926392 +0000 UTC m=+0.047281929 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:54:41 localhost systemd[1]: Started libpod-conmon-143b893ce70cb80e9547f6047dd0001eb6913f4d97f8637163367c76cbd8d934.scope. Dec 5 04:54:41 localhost systemd[1]: Started libcrun container. Dec 5 04:54:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02683bef9418e6eabc59e5d3232528fe39e6c417a6bcaa0f8216b65ef0bcd194/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Dec 5 04:54:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02683bef9418e6eabc59e5d3232528fe39e6c417a6bcaa0f8216b65ef0bcd194/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Dec 5 04:54:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02683bef9418e6eabc59e5d3232528fe39e6c417a6bcaa0f8216b65ef0bcd194/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 04:54:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02683bef9418e6eabc59e5d3232528fe39e6c417a6bcaa0f8216b65ef0bcd194/merged/var/lib/ceph/mon/ceph-np0005546419 supports timestamps until 2038 (0x7fffffff) Dec 5 04:54:41 localhost podman[287703]: 2025-12-05 09:54:41.949039624 +0000 UTC m=+0.213395161 container init 143b893ce70cb80e9547f6047dd0001eb6913f4d97f8637163367c76cbd8d934 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_brattain, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, build-date=2025-11-26T19:44:28Z, 
release=1763362218, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc.) Dec 5 04:54:41 localhost podman[287703]: 2025-12-05 09:54:41.959995223 +0000 UTC m=+0.224350730 container start 143b893ce70cb80e9547f6047dd0001eb6913f4d97f8637163367c76cbd8d934 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_brattain, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, RELEASE=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:54:41 localhost podman[287703]: 2025-12-05 09:54:41.960786246 +0000 UTC m=+0.225141793 container attach 143b893ce70cb80e9547f6047dd0001eb6913f4d97f8637163367c76cbd8d934 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_brattain, io.buildah.version=1.41.4, GIT_BRANCH=main, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.) Dec 5 04:54:42 localhost systemd[1]: libpod-143b893ce70cb80e9547f6047dd0001eb6913f4d97f8637163367c76cbd8d934.scope: Deactivated successfully. 
Dec 5 04:54:42 localhost podman[287703]: 2025-12-05 09:54:42.05393651 +0000 UTC m=+0.318292027 container died 143b893ce70cb80e9547f6047dd0001eb6913f4d97f8637163367c76cbd8d934 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_brattain, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_CLEAN=True, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph) Dec 5 04:54:42 localhost podman[287744]: 2025-12-05 09:54:42.151933739 +0000 UTC m=+0.085473164 container remove 143b893ce70cb80e9547f6047dd0001eb6913f4d97f8637163367c76cbd8d934 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_brattain, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, release=1763362218, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., name=rhceph, ceph=True, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.41.4) Dec 5 04:54:42 localhost systemd[1]: libpod-conmon-143b893ce70cb80e9547f6047dd0001eb6913f4d97f8637163367c76cbd8d934.scope: Deactivated successfully. Dec 5 04:54:42 localhost systemd[1]: Reloading. Dec 5 04:54:42 localhost systemd-rc-local-generator[287784]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:54:42 localhost systemd-sysv-generator[287787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
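The randomly named rhceph container above (frosty_brattain) runs through create, init, start, attach, died and remove within about a second; that whole sequence is the usual trace of cephadm launching a one-shot helper container, not a crash. A sketch, reading a messages file like this one on stdin, that reconstructs per-container lifecycles from the 64-hex IDs in podman's event records (the event names are the ones visible in this log):

    #!/usr/bin/env python3
    # Sketch: reconstruct podman container lifecycles from syslog lines such
    # as "... container create 143b893ce70c... (image=..., name=...)".
    import re
    import sys
    from collections import defaultdict

    EVENT = re.compile(
        r"container (?P<event>create|init|start|attach|health_status|"
        r"exec_died|died|remove) (?P<cid>[0-9a-f]{64})")

    lifecycles = defaultdict(list)
    for line in sys.stdin:
        # finditer, because lines here can hold several fused records
        for m in EVENT.finditer(line):
            lifecycles[m.group("cid")[:12]].append(m.group("event"))

    for cid, events in lifecycles.items():
        print(cid, "->", " -> ".join(events))

Fed the records above, it prints, e.g., 143b893ce70c -> create -> init -> start -> attach -> died -> remove.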
Dec 5 04:54:42 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x557f9a90af20 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0 Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: var-lib-containers-storage-overlay-1c2b444a3d7958e615e66925796206184341685072d30c36ba49b55cdc2720cf-merged.mount: Deactivated successfully. Dec 5 04:54:42 localhost systemd[1]: Reloading. Dec 5 04:54:42 localhost systemd-sysv-generator[287829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:54:42 localhost systemd-rc-local-generator[287824]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:54:43 localhost systemd[1]: Starting Ceph mon.np0005546419 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b... 
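Every systemd "Reloading." pass re-runs the unit generators, which is why the same warnings repeat verbatim within one second: the rc.local and SysV "network" notices come from systemd-rc-local-generator and systemd-sysv-generator, and "Failed to parse service type, ignoring: notify-reload" means the libvirt units declare Type=notify-reload, a service type this systemd predates (it was introduced in systemd 253), so the directive is ignored rather than fatal. A sketch that collapses these recurring complaints into one counted line each, assuming one syslog record per input line:

    #!/usr/bin/env python3
    # Sketch: count systemd's recurring unit-file warnings so each distinct
    # complaint is reported once with its repeat count.
    import re
    import sys
    from collections import Counter

    WARN = re.compile(r"systemd\[1\]: (/usr/lib/systemd/system/\S+:\d+: .*)")

    counts = Counter(m.group(1) for line in sys.stdin
                     if (m := WARN.search(line)))
    for msg, n in counts.most_common():
        print(f"{n:4d}x {msg}")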
Dec 5 04:54:43 localhost podman[287890]: Dec 5 04:54:43 localhost podman[287890]: 2025-12-05 09:54:43.440502946 +0000 UTC m=+0.083031861 container create 0cfef878df0af3f9a68a1c52bbf36c638e2425750f43b5356e84010e9861b8b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546419, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.component=rhceph-container, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7) Dec 5 04:54:43 localhost podman[287890]: 2025-12-05 09:54:43.406757864 +0000 UTC m=+0.049286819 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:54:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87d87e094c1b02bd4c71e82556df90df7d735cee042e49444e5ca003ddaf8f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 5 04:54:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87d87e094c1b02bd4c71e82556df90df7d735cee042e49444e5ca003ddaf8f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 04:54:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87d87e094c1b02bd4c71e82556df90df7d735cee042e49444e5ca003ddaf8f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 04:54:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87d87e094c1b02bd4c71e82556df90df7d735cee042e49444e5ca003ddaf8f0/merged/var/lib/ceph/mon/ceph-np0005546419 supports timestamps until 2038 (0x7fffffff) Dec 5 04:54:43 localhost podman[287890]: 2025-12-05 09:54:43.515634109 +0000 UTC m=+0.158163024 container init 0cfef878df0af3f9a68a1c52bbf36c638e2425750f43b5356e84010e9861b8b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546419, RELEASE=main, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 5 04:54:43 localhost podman[287890]: 2025-12-05 09:54:43.524698551 +0000 UTC m=+0.167227466 container start 0cfef878df0af3f9a68a1c52bbf36c638e2425750f43b5356e84010e9861b8b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546419, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, build-date=2025-11-26T19:44:28Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218) Dec 5 04:54:43 localhost bash[287890]: 0cfef878df0af3f9a68a1c52bbf36c638e2425750f43b5356e84010e9861b8b0 Dec 5 04:54:43 localhost systemd[1]: Started Ceph mon.np0005546419 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b. 
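What follows is ceph-mon's startup banner for mon.np0005546419 and RocksDB's full options dump for the mon store under /var/lib/ceph/mon/ceph-np0005546419/store.db. The dump is verbose but completely regular; a sketch that folds the "rocksdb: Options.<key>: <value>" records into a dict for inspection or diffing, assuming one record per line as journalctl would give you:

    #!/usr/bin/env python3
    # Sketch: collect the "rocksdb: Options.<key>: <value>" pairs that
    # ceph-mon logs at startup into a dict.
    import re
    import sys

    OPT = re.compile(r"rocksdb: Options\.([A-Za-z0-9_.\[\]]+?)\s*:\s+(.+?)\s*$")

    options = {}
    for line in sys.stdin:
        m = OPT.search(line)
        if m:
            options[m.group(1)] = m.group(2)

    print(len(options), "options parsed")
    print("write_buffer_size =", options.get("write_buffer_size"))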
Dec 5 04:54:43 localhost ceph-mon[287909]: set uid:gid to 167:167 (ceph:ceph) Dec 5 04:54:43 localhost ceph-mon[287909]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Dec 5 04:54:43 localhost ceph-mon[287909]: pidfile_write: ignore empty --pid-file Dec 5 04:54:43 localhost ceph-mon[287909]: load: jerasure load: lrc Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: RocksDB version: 7.9.2 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Git sha 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: DB SUMMARY Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: DB Session ID: N486XW7RA1QDBNFF474Z Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: CURRENT file: CURRENT Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: IDENTITY file: IDENTITY Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005546419/store.db dir, Total Num: 0, files: Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005546419/store.db: 000004.log size: 886 ; Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.error_if_exists: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.create_if_missing: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.paranoid_checks: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.env: 0x562e998b29e0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.fs: PosixFileSystem Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.info_log: 0x562e9a126d20 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_file_opening_threads: 16 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.statistics: (nil) Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.use_fsync: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_log_file_size: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.log_file_time_to_roll: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.keep_log_file_num: 1000 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.recycle_log_file_num: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.allow_fallocate: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.allow_mmap_reads: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.allow_mmap_writes: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.use_direct_reads: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.create_missing_column_families: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.db_log_dir: Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.wal_dir: Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.table_cache_numshardbits: 6 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 5 
04:54:43 localhost ceph-mon[287909]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.advise_random_on_open: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.db_write_buffer_size: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.write_buffer_manager: 0x562e9a137540 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.random_access_max_buffer_size: 1048576 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.use_adaptive_mutex: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.rate_limiter: (nil) Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.wal_recovery_mode: 2 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.enable_thread_tracking: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.enable_pipelined_write: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.unordered_write: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.row_cache: None Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.wal_filter: None Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.allow_ingest_behind: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.two_write_queues: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.manual_wal_flush: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.wal_compression: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.atomic_flush: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.persist_stats_to_disk: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.log_readahead_size: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.best_efforts_recovery: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.allow_data_in_errors: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.db_host_id: __hostname__ Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.enforce_single_del_contracts: true Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_background_jobs: 2 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: 
Options.max_background_compactions: -1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_subcompactions: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.delayed_write_rate : 16777216 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_total_wal_size: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.stats_dump_period_sec: 600 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.stats_persist_period_sec: 600 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_open_files: -1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bytes_per_sync: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_readahead_size: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_background_flushes: -1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Compression algorithms supported: Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: #011kZSTD supported: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: #011kXpressCompression supported: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: #011kBZip2Compression supported: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: #011kLZ4Compression supported: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: #011kZlibCompression supported: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: #011kSnappyCompression supported: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: DMutex implementation: pthread_mutex_t Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005546419/store.db/MANIFEST-000005 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.merge_operator: Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_filter: None Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_filter_factory: None Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.sst_partitioner_factory: None Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562e9a126980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 
pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562e9a123350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.write_buffer_size: 33554432 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_write_buffer_number: 2 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compression: NoCompression Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bottommost_compression: Disabled Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.prefix_extractor: nullptr Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.num_levels: 7 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compression_opts.window_bits: -14 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compression_opts.level: 32767 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compression_opts.strategy: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 5 04:54:43 
localhost ceph-mon[287909]: rocksdb: Options.compression_opts.enabled: false Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.target_file_size_base: 67108864 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.target_file_size_multiplier: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_bytes_for_level_base: 268435456 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.arena_block_size: 1048576 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.disable_auto_compactions: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: 
Options.table_properties_collectors: Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.inplace_update_support: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.memtable_huge_page_size: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.bloom_locality: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.max_successive_merges: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.paranoid_file_checks: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.force_consistency_checks: 1 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.report_bg_io_stats: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.ttl: 2592000 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.enable_blob_files: false Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.min_blob_size: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.blob_file_size: 268435456 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.blob_compression_type: NoCompression Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.enable_blob_garbage_collection: false Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.blob_file_starting_level: 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005546419/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 08658599-4974-4077-8148-d94e3d2f3159 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928483581770, "job": 1, "event": "recovery_started", "wal_files": [4]} Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928483584318, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 2012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 898, "index_size": 31, "index_partitions": 0, 
"top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 776, "raw_average_value_size": 155, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928483, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "08658599-4974-4077-8148-d94e3d2f3159", "db_session_id": "N486XW7RA1QDBNFF474Z", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928483584515, "job": 1, "event": "recovery_finished"} Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562e9a14ae00 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: DB pointer 0x562e9a240000 Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 04:54:43 localhost ceph-mon[287909]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.96 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Sum 1/0 1.96 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562e9a123350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546419 does not exist in monmap, will attempt to join an existing cluster Dec 5 04:54:43 localhost ceph-mon[287909]: using public_addr v2:172.18.0.106:0/0 -> [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] Dec 5 04:54:43 localhost ceph-mon[287909]: starting mon.np0005546419 rank -1 at public addrs [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] at bind addrs [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005546419 fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546419@-1(???) 
e0 preinit fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546419@-1(synchronizing) e5 sync_obtain_latest_monmap Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546419@-1(synchronizing) e5 sync_obtain_latest_monmap obtained monmap e5 Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546419@-1(synchronizing).mds e16 new map Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546419@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01116#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-05T08:10:30.749420+0000#012modified#0112025-12-05T09:53:37.952087+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01184#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26492}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26492 members: 26492#012[mds.mds.np0005546420.eqhasr{0:26492} state up:active seq 16 addr [v2:172.18.0.107:6808/530338393,v1:172.18.0.107:6809/530338393] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005546419.rweotn{-1:16917} state up:standby seq 1 addr [v2:172.18.0.106:6808/2431590011,v1:172.18.0.106:6809/2431590011] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005546421.tuudjq{-1:26486} state up:standby seq 1 addr [v2:172.18.0.108:6808/812129975,v1:172.18.0.108:6809/812129975] compat {c=[1],r=[1],i=[17ff]}] Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: Added label mgr to host np0005546419.localdomain Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: Added label mgr to host np0005546420.localdomain Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost 
ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: Added label mgr to host np0005546421.localdomain Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Dec 5 04:54:43 localhost ceph-mon[287909]: Saving service mgr spec with placement label:mgr Dec 5 04:54:43 localhost ceph-mon[287909]: Deploying daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546419@-1(synchronizing).osd e84 crush map has features 3314933000852226048, adjusting msgr requires Dec 5 04:54:43 localhost ceph-mon[287909]: Deploying daemon mgr.np0005546420.aoeylc on np0005546420.localdomain Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546419@-1(synchronizing).osd e84 crush map has features 288514051259236352, adjusting msgr requires Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546419@-1(synchronizing).osd e84 crush map has features 288514051259236352, adjusting msgr requires Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546419@-1(synchronizing).osd e84 crush map has features 288514051259236352, adjusting msgr requires Dec 5 04:54:43 localhost 
ceph-mon[287909]: Added label mon to host np0005546415.localdomain Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: Added label _admin to host np0005546415.localdomain Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Dec 5 04:54:43 localhost ceph-mon[287909]: Deploying daemon mgr.np0005546421.sukfea on np0005546421.localdomain Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: Added label mon to host np0005546416.localdomain Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: Added label _admin to host np0005546416.localdomain Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: Added label mon to host np0005546418.localdomain Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' Dec 5 
04:54:43 localhost ceph-mon[287909]: Added label _admin to host np0005546418.localdomain
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: Added label mon to host np0005546419.localdomain
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:54:43 localhost ceph-mon[287909]: Added label _admin to host np0005546419.localdomain
Dec 5 04:54:43 localhost ceph-mon[287909]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:54:43 localhost ceph-mon[287909]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:54:43 localhost ceph-mon[287909]: Added label mon to host np0005546420.localdomain
Dec 5 04:54:43 localhost ceph-mon[287909]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: Added label _admin to host np0005546420.localdomain
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:54:43 localhost ceph-mon[287909]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:54:43 localhost ceph-mon[287909]: Added label mon to host np0005546421.localdomain
Dec 5 04:54:43 localhost ceph-mon[287909]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:54:43 localhost ceph-mon[287909]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: Added label _admin to host np0005546421.localdomain
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:54:43 localhost ceph-mon[287909]: Saving service mon spec with placement label:mon
Dec 5 04:54:43 localhost ceph-mon[287909]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:54:43 localhost ceph-mon[287909]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:54:43 localhost ceph-mon[287909]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:54:43 localhost ceph-mon[287909]: Deploying daemon mon.np0005546421 on np0005546421.localdomain
Dec 5 04:54:43 localhost ceph-mon[287909]: Deploying daemon mon.np0005546420 on np0005546420.localdomain
Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546415 calling monitor election
Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546416 calling monitor election
Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546418 calling monitor election
Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546421 calling monitor election
Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546415 is new leader, mons np0005546415,np0005546418,np0005546416,np0005546421 in quorum (ranks 0,1,2,3)
Dec 5 04:54:43 localhost ceph-mon[287909]: overall HEALTH_OK
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:43 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:54:43 localhost ceph-mon[287909]: Deploying daemon mon.np0005546419 on np0005546419.localdomain
Dec 5 04:54:43 localhost ceph-mon[287909]: mon.np0005546419@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3
Dec 5 04:54:43 localhost nova_compute[280228]: 2025-12-05 09:54:43.980 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:46 localhost nova_compute[280228]: 2025-12-05 09:54:46.523 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:47 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x557f9a90b600 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Dec 5 04:54:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 04:54:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:54:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 04:54:47 localhost systemd[1]: tmp-crun.3FrZzy.mount: Deactivated successfully.
Dec 5 04:54:48 localhost podman[287968]: 2025-12-05 09:54:48.023742225 +0000 UTC m=+0.088701711 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 5 04:54:48 localhost podman[287968]: 2025-12-05 09:54:48.032817998 +0000 UTC m=+0.097777524 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 04:54:48 localhost podman[287966]: 2025-12-05 09:54:47.992600091 +0000 UTC m=+0.068066452 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 04:54:48 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
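The block above shows cephadm's label-driven placement converging: each "Added label mon/_admin" entry is followed by config/keyring pushes and "Deploying daemon mon.*" entries, and every new mon triggers a fresh election. A minimal sketch for pulling the election outcomes out of a dump like this one (the file name and helper are hypothetical, not part of any tool in the log; the message format is ceph-mon's own "is new leader" line):

```python
import re

# Matches: mon.X is new leader, mons A,B,... in quorum (ranks 0,1,...)
QUORUM_RE = re.compile(
    r"mon\.(\S+) is new leader, mons (\S+) in quorum \(ranks ([\d,]+)\)")

with open("messages.log") as fh:          # hypothetical saved journal
    for line in fh:
        m = QUORUM_RE.search(line)
        if m:
            leader, members, ranks = m.group(1), m.group(2).split(","), m.group(3)
            print(f"{leader} leads {len(members)} mons (ranks {ranks})")
```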
Dec 5 04:54:48 localhost podman[287966]: 2025-12-05 09:54:48.07522346 +0000 UTC m=+0.150689781 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 04:54:48 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 04:54:48 localhost podman[287967]: 2025-12-05 09:54:48.132522148 +0000 UTC m=+0.201483534 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 5 04:54:48 localhost podman[287967]: 2025-12-05 09:54:48.165675502 +0000 UTC m=+0.234636918 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 5 04:54:48 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 04:54:48 localhost systemd[1]: tmp-crun.78RelX.mount: Deactivated successfully.
Dec 5 04:54:49 localhost nova_compute[280228]: 2025-12-05 09:54:49.022 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:49 localhost podman[288133]: 2025-12-05 09:54:49.087453888 +0000 UTC m=+0.091428263 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:54:49 localhost podman[288133]: 2025-12-05 09:54:49.168719576 +0000 UTC m=+0.172693891 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_BRANCH=main)
Dec 5 04:54:49 localhost ceph-mon[287909]: mon.np0005546419@-1(probing) e6 my rank is now 5 (was -1)
Dec 5 04:54:49 localhost ceph-mon[287909]: log_channel(cluster) log [INF] : mon.np0005546419 calling monitor election
Dec 5 04:54:49 localhost ceph-mon[287909]: paxos.5).electionLogic(0) init, first boot, initializing epoch at 1
Dec 5 04:54:49 localhost ceph-mon[287909]: mon.np0005546419@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:54:49 localhost podman[239519]: time="2025-12-05T09:54:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 04:54:49 localhost podman[239519]: @ - - [05/Dec/2025:09:54:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1"
Dec 5 04:54:49 localhost podman[239519]: @ - - [05/Dec/2025:09:54:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18689 "" "Go-http-client/1.1"
Dec 5 04:54:51 localhost nova_compute[280228]: 2025-12-05 09:54:51.563 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:52 localhost ceph-mds[283215]: mds.beacon.mds.np0005546419.rweotn missed beacon ack from the monitors
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546419@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546419@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546419@5(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546419@5(peon) e6 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546419@5(peon) e6 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546415 calling monitor election
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546418 calling monitor election
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546416 calling monitor election
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546421 calling monitor election
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546420 calling monitor election
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546415 is new leader, mons np0005546415,np0005546418,np0005546416 in quorum (ranks 0,1,2)
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546418 calling monitor election
Dec 5 04:54:52 localhost ceph-mon[287909]: overall HEALTH_OK
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546415 calling monitor election
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546416 calling monitor election
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546415 is new leader, mons np0005546415,np0005546418,np0005546416,np0005546421,np0005546420 in quorum (ranks 0,1,2,3,4)
Dec 5 04:54:52 localhost ceph-mon[287909]: overall HEALTH_OK
Dec 5 04:54:52 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:52 localhost ceph-mon[287909]: mon.np0005546419@5(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:54:52 localhost ceph-mon[287909]: mgrc update_daemon_metadata mon.np0005546419 metadata {addrs=[v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005546419.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005546419.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Dec 5 04:54:54 localhost nova_compute[280228]: 2025-12-05 09:54:54.067 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:54 localhost ceph-mon[287909]: mon.np0005546418 calling monitor election
Dec 5 04:54:54 localhost ceph-mon[287909]: mon.np0005546420 calling monitor election
Dec 5 04:54:54 localhost ceph-mon[287909]: mon.np0005546416 calling monitor election
Dec 5 04:54:54 localhost ceph-mon[287909]: mon.np0005546421 calling monitor election
Dec 5 04:54:54 localhost ceph-mon[287909]: mon.np0005546419 calling monitor election
Dec 5 04:54:54 localhost ceph-mon[287909]: mon.np0005546418 calling monitor election
Dec 5 04:54:54 localhost ceph-mon[287909]: mon.np0005546416 calling monitor election
Dec 5 04:54:54 localhost ceph-mon[287909]: mon.np0005546421 calling monitor election
Dec 5 04:54:54 localhost ceph-mon[287909]: Health check failed: 2/6 mons down, quorum np0005546415,np0005546418,np0005546416,np0005546421 (MON_DOWN)
Dec 5 04:54:54 localhost ceph-mon[287909]: overall HEALTH_OK
Dec 5 04:54:54 localhost ceph-mon[287909]: mon.np0005546415 calling monitor election
Dec 5 04:54:54 localhost ceph-mon[287909]: mon.np0005546415 is new leader, mons np0005546415,np0005546418,np0005546416,np0005546421,np0005546420,np0005546419 in quorum (ranks 0,1,2,3,4,5)
Dec 5 04:54:54 localhost ceph-mon[287909]: Health check cleared: MON_DOWN (was: 2/6 mons down, quorum np0005546415,np0005546418,np0005546416,np0005546421)
Dec 5 04:54:54 localhost ceph-mon[287909]: Cluster is now healthy
Dec 5 04:54:54 localhost ceph-mon[287909]: overall HEALTH_OK
Dec 5 04:54:54 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:54 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:54 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:54 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:54 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:54 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:54 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:55 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:54:55 localhost ceph-mon[287909]: Updating np0005546415.localdomain:/etc/ceph/ceph.conf
Dec 5 04:54:55 localhost ceph-mon[287909]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf
Dec 5 04:54:55 localhost ceph-mon[287909]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:54:55 localhost ceph-mon[287909]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:54:55 localhost ceph-mon[287909]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:54:55 localhost ceph-mon[287909]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:54:55 localhost ceph-mon[287909]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:54:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 04:54:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 04:54:56 localhost systemd[1]: tmp-crun.oABQzk.mount: Deactivated successfully.
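At this point the sixth mon has joined and the transient MON_DOWN has cleared. A quick way to confirm the final membership from any host that received the admin keyring above; a sketch only, assuming the default /etc/ceph locations that the "Updating ..." entries populate:

```python
import json
import subprocess

# Query the quorum state through the ceph CLI, as any labeled host can.
out = subprocess.run(["ceph", "quorum_status", "--format", "json"],
                     check=True, capture_output=True, text=True).stdout
status = json.loads(out)
print("leader: ", status.get("quorum_leader_name"))
print("members:", ",".join(status.get("quorum_names", [])))
```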
Dec 5 04:54:56 localhost podman[288663]: 2025-12-05 09:54:56.145293547 +0000 UTC m=+0.203414832 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 04:54:56 localhost podman[288664]: 2025-12-05 09:54:56.097541064 +0000 UTC m=+0.153869865 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 04:54:56 localhost ceph-mon[287909]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:54:56 localhost ceph-mon[287909]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:54:56 localhost ceph-mon[287909]: Updating np0005546415.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:54:56 localhost ceph-mon[287909]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:54:56 localhost ceph-mon[287909]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:56 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:54:56 localhost podman[288663]: 2025-12-05 09:54:56.224903924 +0000 UTC m=+0.283025239 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 04:54:56 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
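The "Updating <host>:/etc/ceph/ceph.conf" and "Updating <host>:/var/lib/ceph/<fsid>/config/ceph.conf" pairs above push the same rendered config to two locations per host. A small sketch to verify both copies match on one host (the fsid is taken from the paths in the log; the check itself is ad hoc, not part of cephadm):

```python
import hashlib

FSID = "79feddb1-4bfc-557f-83b9-0d57c9f66c1b"  # fsid from the log paths
PATHS = ["/etc/ceph/ceph.conf",
         f"/var/lib/ceph/{FSID}/config/ceph.conf"]

def sha256(path):
    # Hash the whole file; the configs are small.
    with open(path, "rb") as fh:
        return hashlib.sha256(fh.read()).hexdigest()

print("in sync:", len({sha256(p) for p in PATHS}) == 1)
```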
Dec 5 04:54:56 localhost podman[288664]: 2025-12-05 09:54:56.277850102 +0000 UTC m=+0.334178933 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 04:54:56 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 04:54:56 localhost nova_compute[280228]: 2025-12-05 09:54:56.565 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:57 localhost openstack_network_exporter[241668]: ERROR 09:54:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 04:54:57 localhost openstack_network_exporter[241668]: ERROR 09:54:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:54:57 localhost openstack_network_exporter[241668]: ERROR 09:54:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:54:57 localhost openstack_network_exporter[241668]: ERROR 09:54:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 04:54:57 localhost openstack_network_exporter[241668]:
Dec 5 04:54:57 localhost openstack_network_exporter[241668]: ERROR 09:54:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 04:54:57 localhost openstack_network_exporter[241668]:
Dec 5 04:54:57 localhost ceph-mon[287909]: Reconfiguring mon.np0005546415 (monmap changed)...
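The node_exporter container above is published on host port 9100 and runs only the systemd collector for a few unit patterns (see --collector.systemd.unit-include in its config_data). A sketch of spot-checking what that collector exposes; the endpoint is assumed from the port mapping, not shown in the log:

```python
from urllib.request import urlopen

# Dump the systemd unit-state series whose value is 1 (the current state).
with urlopen("http://localhost:9100/metrics", timeout=5) as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith("node_systemd_unit_state") and line.endswith(" 1"):
            print(line)
```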
Dec 5 04:54:57 localhost ceph-mon[287909]: Reconfiguring daemon mon.np0005546415 on np0005546415.localdomain
Dec 5 04:54:57 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:57 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:57 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546415.knqtle", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:54:58 localhost ceph-mon[287909]: Reconfiguring mgr.np0005546415.knqtle (monmap changed)...
Dec 5 04:54:58 localhost ceph-mon[287909]: Reconfiguring daemon mgr.np0005546415.knqtle on np0005546415.localdomain
Dec 5 04:54:58 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:58 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:58 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546415.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:54:58 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:59 localhost nova_compute[280228]: 2025-12-05 09:54:59.104 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:54:59 localhost ceph-mon[287909]: Reconfiguring crash.np0005546415 (monmap changed)...
Dec 5 04:54:59 localhost ceph-mon[287909]: Reconfiguring daemon crash.np0005546415 on np0005546415.localdomain
Dec 5 04:54:59 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:59 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:54:59 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546416.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:55:00 localhost ceph-mon[287909]: Reconfiguring crash.np0005546416 (monmap changed)...
Dec 5 04:55:00 localhost ceph-mon[287909]: Reconfiguring daemon crash.np0005546416 on np0005546416.localdomain
Dec 5 04:55:00 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:55:00 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:55:00 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:55:01 localhost ceph-mon[287909]: Reconfiguring mon.np0005546416 (monmap changed)...
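Each "Reconfiguring daemon ..." above is preceded by an "auth get-or-create" dispatch that re-asserts the daemon's caps. Run by hand, the same request would look roughly like this (sketch only; with caps identical to what is stored, the command simply returns the existing key, and it requires the admin keyring the earlier steps distributed):

```python
import subprocess

# Same entity and caps as the dispatched {"prefix": "auth get-or-create", ...}
cmd = ["ceph", "auth", "get-or-create", "mgr.np0005546415.knqtle",
       "mon", "profile mgr", "osd", "allow *", "mds", "allow *"]
print(subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)
```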
Dec 5 04:55:01 localhost ceph-mon[287909]: Reconfiguring daemon mon.np0005546416 on np0005546416.localdomain
Dec 5 04:55:01 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:55:01 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:55:01 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546416.kmqcnq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:55:01 localhost nova_compute[280228]: 2025-12-05 09:55:01.605 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:55:01 localhost ceph-mon[287909]: mon.np0005546419@5(peon).osd e84 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 5 04:55:01 localhost ceph-mon[287909]: mon.np0005546419@5(peon).osd e84 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 5 04:55:01 localhost ceph-mon[287909]: mon.np0005546419@5(peon).osd e85 e85: 6 total, 6 up, 6 in
Dec 5 04:55:01 localhost systemd[1]: session-19.scope: Deactivated successfully.
Dec 5 04:55:01 localhost systemd-logind[760]: Session 19 logged out. Waiting for processes to exit.
Dec 5 04:55:01 localhost systemd[1]: session-24.scope: Deactivated successfully.
Dec 5 04:55:01 localhost systemd-logind[760]: Session 24 logged out. Waiting for processes to exit.
Dec 5 04:55:01 localhost systemd[1]: session-17.scope: Deactivated successfully.
Dec 5 04:55:01 localhost systemd[1]: session-16.scope: Deactivated successfully.
Dec 5 04:55:01 localhost systemd[1]: session-14.scope: Deactivated successfully.
Dec 5 04:55:01 localhost systemd[1]: session-18.scope: Deactivated successfully.
Dec 5 04:55:01 localhost systemd[1]: session-26.scope: Deactivated successfully.
Dec 5 04:55:01 localhost systemd[1]: session-26.scope: Consumed 3min 24.734s CPU time.
Dec 5 04:55:01 localhost systemd-logind[760]: Session 16 logged out. Waiting for processes to exit.
Dec 5 04:55:01 localhost systemd-logind[760]: Session 17 logged out. Waiting for processes to exit.
Dec 5 04:55:01 localhost systemd-logind[760]: Session 14 logged out. Waiting for processes to exit.
Dec 5 04:55:01 localhost systemd-logind[760]: Session 18 logged out. Waiting for processes to exit.
Dec 5 04:55:01 localhost systemd-logind[760]: Session 26 logged out. Waiting for processes to exit.
Dec 5 04:55:01 localhost systemd-logind[760]: Removed session 19.
Dec 5 04:55:01 localhost systemd[1]: session-22.scope: Deactivated successfully.
Dec 5 04:55:01 localhost systemd[1]: session-20.scope: Deactivated successfully.
Dec 5 04:55:01 localhost systemd-logind[760]: Removed session 24.
Dec 5 04:55:01 localhost systemd[1]: session-21.scope: Deactivated successfully.
Dec 5 04:55:01 localhost systemd[1]: session-23.scope: Deactivated successfully.
Dec 5 04:55:01 localhost systemd[1]: session-25.scope: Deactivated successfully.
Dec 5 04:55:01 localhost systemd-logind[760]: Session 22 logged out. Waiting for processes to exit.
Dec 5 04:55:01 localhost systemd-logind[760]: Session 20 logged out. Waiting for processes to exit.
Dec 5 04:55:02 localhost systemd-logind[760]: Session 25 logged out. Waiting for processes to exit.
Dec 5 04:55:02 localhost systemd-logind[760]: Session 21 logged out. Waiting for processes to exit.
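The entries that follow include the mgr autotuning osd_memory_target and failing on three hosts. The failure is plain arithmetic: the per-OSD value derived from host memory lands under the option's hard minimum. A check of the numbers quoted in those messages (both constants copied from the log, not computed by any tool here):

```python
requested = 877246668   # bytes the autotuner tries to set per OSD
minimum = 939524096     # hard floor it is rejected against
print(f"requested = {requested / 2**20:.1f} MiB")  # ~836.6M, as logged
print(f"minimum   = {minimum / 2**20:.1f} MiB")    # 896.0 MiB
print("below minimum:", requested < minimum)       # True
```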
Dec 5 04:55:02 localhost systemd-logind[760]: Session 23 logged out. Waiting for processes to exit.
Dec 5 04:55:02 localhost systemd-logind[760]: Removed session 17.
Dec 5 04:55:02 localhost systemd-logind[760]: Removed session 16.
Dec 5 04:55:02 localhost systemd-logind[760]: Removed session 14.
Dec 5 04:55:02 localhost systemd-logind[760]: Removed session 18.
Dec 5 04:55:02 localhost systemd-logind[760]: Removed session 26.
Dec 5 04:55:02 localhost systemd-logind[760]: Removed session 22.
Dec 5 04:55:02 localhost systemd-logind[760]: Removed session 20.
Dec 5 04:55:02 localhost systemd-logind[760]: Removed session 21.
Dec 5 04:55:02 localhost systemd-logind[760]: Removed session 23.
Dec 5 04:55:02 localhost systemd-logind[760]: Removed session 25.
Dec 5 04:55:02 localhost sshd[288710]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:55:02 localhost systemd-logind[760]: New session 64 of user ceph-admin.
Dec 5 04:55:02 localhost systemd[1]: Started Session 64 of User ceph-admin.
Dec 5 04:55:02 localhost ceph-mon[287909]: Reconfiguring mgr.np0005546416.kmqcnq (monmap changed)...
Dec 5 04:55:02 localhost ceph-mon[287909]: Reconfiguring daemon mgr.np0005546416.kmqcnq on np0005546416.localdomain
Dec 5 04:55:02 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:55:02 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle'
Dec 5 04:55:02 localhost ceph-mon[287909]: from='mgr.14120 172.18.0.103:0/1158089555' entity='mgr.np0005546415.knqtle' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:55:02 localhost ceph-mon[287909]: from='client.? 172.18.0.103:0/2547002603' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 5 04:55:02 localhost ceph-mon[287909]: Activating manager daemon np0005546418.garyvl
Dec 5 04:55:02 localhost ceph-mon[287909]: from='client.? 172.18.0.103:0/2547002603' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 5 04:55:02 localhost ceph-mon[287909]: Manager daemon np0005546418.garyvl is now available
Dec 5 04:55:02 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546418.garyvl/mirror_snapshot_schedule"} : dispatch
Dec 5 04:55:02 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546418.garyvl/mirror_snapshot_schedule"} : dispatch
Dec 5 04:55:02 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546418.garyvl/trash_purge_schedule"} : dispatch
Dec 5 04:55:02 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546418.garyvl/trash_purge_schedule"} : dispatch
Dec 5 04:55:03 localhost podman[288823]: 2025-12-05 09:55:03.365171965 +0000 UTC m=+0.084035071 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, vcs-type=git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:55:03 localhost podman[288823]: 2025-12-05 09:55:03.495770272 +0000 UTC m=+0.214633328 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, RELEASE=main)
Dec 5 04:55:03 localhost nova_compute[280228]: 2025-12-05 09:55:03.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:55:03 localhost nova_compute[280228]: 2025-12-05 09:55:03.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:55:03 localhost nova_compute[280228]: 2025-12-05 09:55:03.509 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:55:03 localhost ceph-mon[287909]: mon.np0005546419@5(peon).osd e85 _set_new_cache_sizes cache_size:1019634537 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:55:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:55:03.901 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:55:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:55:03.902 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:55:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:55:03.903 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:55:04 localhost ceph-mon[287909]: [05/Dec/2025:09:55:03] ENGINE Bus STARTING
Dec 5 04:55:04 localhost ceph-mon[287909]: [05/Dec/2025:09:55:03] ENGINE Serving on http://172.18.0.105:8765
Dec 5 04:55:04 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:04 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:04 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:04 localhost nova_compute[280228]: 2025-12-05 09:55:04.143 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:55:04 localhost ceph-mon[287909]: [05/Dec/2025:09:55:03] ENGINE Serving on https://172.18.0.105:7150
Dec 5 04:55:05 localhost ceph-mon[287909]: [05/Dec/2025:09:55:03] ENGINE Bus STARTED
Dec 5 04:55:05 localhost ceph-mon[287909]: [05/Dec/2025:09:55:03] ENGINE Client ('172.18.0.105', 37538) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 5 04:55:05 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:05 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:05 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:05 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:05 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:05 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:05 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:05 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:05 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:05 localhost nova_compute[280228]: 2025-12-05 09:55:05.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:55:05 localhost nova_compute[280228]: 2025-12-05 09:55:05.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:55:05 localhost nova_compute[280228]: 2025-12-05 09:55:05.509 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:55:05 localhost nova_compute[280228]: 2025-12-05 09:55:05.536 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:55:05 localhost nova_compute[280228]: 2025-12-05 09:55:05.536 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:55:05 localhost nova_compute[280228]: 2025-12-05 09:55:05.536 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:55:05 localhost nova_compute[280228]: 2025-12-05 09:55:05.537 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 5 04:55:05 localhost nova_compute[280228]: 2025-12-05 09:55:05.537 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 04:55:05 localhost ceph-mon[287909]: mon.np0005546419@5(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 04:55:05 localhost ceph-mon[287909]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1198559030' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 04:55:06 localhost nova_compute[280228]: 2025-12-05 09:55:06.020 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 04:55:06 localhost nova_compute[280228]: 2025-12-05 09:55:06.277 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 04:55:06 localhost nova_compute[280228]: 2025-12-05 09:55:06.277 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd/host:np0005546415", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd/host:np0005546415", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 5 04:55:06 localhost ceph-mon[287909]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:55:06 localhost ceph-mon[287909]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 5 04:55:06 localhost ceph-mon[287909]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:55:06 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:55:06 localhost ceph-mon[287909]: Updating np0005546415.localdomain:/etc/ceph/ceph.conf
Dec 5 04:55:06 localhost ceph-mon[287909]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf
Dec 5 04:55:06 localhost ceph-mon[287909]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:55:06 localhost ceph-mon[287909]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:55:06 localhost ceph-mon[287909]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:55:06 localhost ceph-mon[287909]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:55:06 localhost nova_compute[280228]: 2025-12-05 09:55:06.509 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 5 04:55:06 localhost nova_compute[280228]: 2025-12-05 09:55:06.511 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11840MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 5 04:55:06 localhost nova_compute[280228]: 2025-12-05 09:55:06.511 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:55:06 localhost nova_compute[280228]: 2025-12-05 09:55:06.512 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:55:06 localhost nova_compute[280228]: 2025-12-05 09:55:06.579 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 5 04:55:06 localhost nova_compute[280228]: 2025-12-05 09:55:06.579 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 5 04:55:06 localhost nova_compute[280228]: 2025-12-05 09:55:06.580 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 5 04:55:06 localhost nova_compute[280228]: 2025-12-05 09:55:06.624 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:55:06 localhost nova_compute[280228]: 2025-12-05 09:55:06.643 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 04:55:07 localhost nova_compute[280228]: 2025-12-05 09:55:07.088 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 04:55:07 localhost nova_compute[280228]: 2025-12-05 09:55:07.096 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 5 04:55:07 localhost nova_compute[280228]: 2025-12-05 09:55:07.114 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB':
{'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:55:07 localhost nova_compute[280228]: 2025-12-05 09:55:07.117 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:55:07 localhost nova_compute[280228]: 2025-12-05 09:55:07.118 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:55:07 localhost ceph-mon[287909]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:55:07 localhost ceph-mon[287909]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:55:07 localhost ceph-mon[287909]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:55:07 localhost ceph-mon[287909]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:55:07 localhost ceph-mon[287909]: Updating np0005546415.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:55:07 localhost ceph-mon[287909]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:55:07 localhost ceph-mon[287909]: Updating np0005546418.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 04:55:07 localhost ceph-mon[287909]: Updating np0005546416.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 04:55:07 localhost ceph-mon[287909]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 04:55:08 localhost nova_compute[280228]: 2025-12-05 09:55:08.114 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:55:08 localhost nova_compute[280228]: 2025-12-05 09:55:08.134 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:55:08 localhost nova_compute[280228]: 2025-12-05 09:55:08.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:55:08 localhost nova_compute[280228]: 2025-12-05 09:55:08.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 04:55:08 localhost nova_compute[280228]: 2025-12-05 09:55:08.507 280232 DEBUG nova.compute.manager [None 
req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 04:55:08 localhost ceph-mon[287909]: mon.np0005546419@5(peon).osd e85 _set_new_cache_sizes cache_size:1020045370 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:55:08 localhost nova_compute[280228]: 2025-12-05 09:55:08.858 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:55:08 localhost nova_compute[280228]: 2025-12-05 09:55:08.859 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:55:08 localhost nova_compute[280228]: 2025-12-05 09:55:08.859 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:55:08 localhost nova_compute[280228]: 2025-12-05 09:55:08.859 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:55:08 localhost ceph-mon[287909]: Updating np0005546415.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 04:55:08 localhost ceph-mon[287909]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 04:55:08 localhost ceph-mon[287909]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 04:55:08 localhost ceph-mon[287909]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:55:08 localhost ceph-mon[287909]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:55:08 localhost ceph-mon[287909]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:55:08 localhost ceph-mon[287909]: Updating np0005546415.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:55:08 localhost ceph-mon[287909]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:55:08 localhost ceph-mon[287909]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' 
entity='mgr.np0005546418.garyvl' Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:08 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:09 localhost nova_compute[280228]: 2025-12-05 09:55:09.166 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:55:09 localhost nova_compute[280228]: 2025-12-05 09:55:09.262 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:55:09 localhost nova_compute[280228]: 2025-12-05 09:55:09.287 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:55:09 localhost nova_compute[280228]: 2025-12-05 09:55:09.288 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:55:09 localhost nova_compute[280228]: 2025-12-05 09:55:09.289 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:55:09 localhost nova_compute[280228]: 2025-12-05 09:55:09.289 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] 
CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 04:55:10 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:55:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:55:10 localhost podman[289766]: 2025-12-05 09:55:10.208986144 +0000 UTC m=+0.090840056 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
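
[note] The resource-tracker records above are worth decoding. With the RBD image backend, nova derives its disk capacity from the `ceph df --format=json --id openstack` probe logged just before it, and the inventory it reports to placement expands as (total - reserved) * allocation_ratio. (The `CONF.reclaim_instance_interval <= 0, skipping` entry is unrelated housekeeping: soft-delete reclaim is disabled by default.) A worked check of the figures in this log, plus an illustrative read-only query against the placement API:

    # VCPU:      (8     - 0)   * 16.0 = 128 schedulable vCPUs
    # MEMORY_MB: (15738 - 512) * 1.0  = 15226 MB
    # DISK_GB:   (41    - 1)   * 1.0  = 40 GB
    # provider UUID copied from the log; requires the osc-placement CLI plugin
    $ openstack resource provider inventory list 6764eb33-a0ac-428c-a232-bb5bf7a96ee3
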
Dec 5 04:55:10 localhost podman[289766]: 2025-12-05 09:55:10.226801648 +0000 UTC m=+0.108655550 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6) Dec 5 04:55:10 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:55:11 localhost ceph-mon[287909]: Reconfiguring mon.np0005546418 (monmap changed)... 
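
[note] The earlier "Adjusting osd_memory_target ... to 836.6M" / "Unable to set ... below minimum 939524096" pairs show cephadm's memory autotuner computing 877246668 bytes (~836.6 MiB) per OSD, under the hard floor of 939524096 bytes (896 MiB), so the set fails; the mgr then removes its per-host/per-OSD overrides (the `config rm` dispatches) and redistributes ceph.conf and the admin keyring to every host (the `Updating <host>:...` entries). A minimal sketch of the usual remedies on memory-starved OSD hosts (values illustrative, not taken from this deployment's plan):

    $ ceph config set osd osd_memory_target_autotune false   # stop cephadm retrying the too-small value
    $ ceph config set osd osd_memory_target 939524096        # pin the 896 MiB floor explicitly
    # alternatively, raise mgr/cephadm/autotune_memory_target_ratio so the computed share clears the floor
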
Dec 5 04:55:11 localhost ceph-mon[287909]: Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain Dec 5 04:55:11 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:11 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:11 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:55:11 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:55:11 localhost nova_compute[280228]: 2025-12-05 09:55:11.675 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:55:12 localhost ceph-mon[287909]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)... Dec 5 04:55:12 localhost ceph-mon[287909]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain Dec 5 04:55:12 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:12 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:12 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:55:12 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:55:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
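
[note] The `Started /usr/bin/podman healthcheck run <id>` entries (one just above, for the multipathd container) are systemd transient units firing periodic healthchecks; each run executes the `healthcheck.test` command from the container's config_data and produces the health_status / exec_died pair that follows. To reproduce one by hand (container ID copied from the log; the exit status carries the verdict):

    $ podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173
    $ echo $?    # 0 = healthy, 1 = unhealthy
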
Dec 5 04:55:12 localhost podman[289786]: 2025-12-05 09:55:12.203156633 +0000 UTC m=+0.093679192 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:55:12 localhost podman[289786]: 2025-12-05 09:55:12.221061139 +0000 UTC m=+0.111583708 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 04:55:12 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:55:12 localhost podman[289857]: Dec 5 04:55:12 localhost podman[289857]: 2025-12-05 09:55:12.981723774 +0000 UTC m=+0.077919559 container create c5d185960b352e3ae8b27c824e4326bab431ff1544b5453fe0cbfecac550b657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_roentgen, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, name=rhceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 5 04:55:13 localhost systemd[1]: Started libpod-conmon-c5d185960b352e3ae8b27c824e4326bab431ff1544b5453fe0cbfecac550b657.scope. Dec 5 04:55:13 localhost systemd[1]: Started libcrun container. 
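
[note] The create/init/start/attach/died/remove bursts for randomly named rhceph containers (kind_roentgen here, objective_hodgkin and others below) are cephadm spawning throwaway containers while it reconfigures daemons; the bare `167 167` lines they emit just below are the ceph user's uid and gid inside the image. Roughly the equivalent probe, as a sketch rather than cephadm's exact invocation:

    # sketch: cephadm's actual probe may differ in flags and paths
    $ podman run --rm --entrypoint stat \
          registry.redhat.io/rhceph/rhceph-7-rhel9:latest -c '%u %g' /var/lib/ceph
    167 167
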
Dec 5 04:55:13 localhost podman[289857]: 2025-12-05 09:55:12.950407304 +0000 UTC m=+0.046603139 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:55:13 localhost podman[289857]: 2025-12-05 09:55:13.056491385 +0000 UTC m=+0.152687180 container init c5d185960b352e3ae8b27c824e4326bab431ff1544b5453fe0cbfecac550b657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_roentgen, architecture=x86_64, com.redhat.component=rhceph-container, distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1763362218, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux ) Dec 5 04:55:13 localhost podman[289857]: 2025-12-05 09:55:13.070759813 +0000 UTC m=+0.166955598 container start c5d185960b352e3ae8b27c824e4326bab431ff1544b5453fe0cbfecac550b657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_roentgen, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, version=7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True) Dec 5 04:55:13 localhost podman[289857]: 2025-12-05 09:55:13.07100371 +0000 UTC m=+0.167199495 container attach c5d185960b352e3ae8b27c824e4326bab431ff1544b5453fe0cbfecac550b657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_roentgen, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 5 04:55:13 localhost kind_roentgen[289872]: 167 167 Dec 5 04:55:13 localhost systemd[1]: libpod-c5d185960b352e3ae8b27c824e4326bab431ff1544b5453fe0cbfecac550b657.scope: Deactivated successfully. Dec 5 04:55:13 localhost podman[289857]: 2025-12-05 09:55:13.07667471 +0000 UTC m=+0.172870535 container died c5d185960b352e3ae8b27c824e4326bab431ff1544b5453fe0cbfecac550b657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_roentgen, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, version=7, name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, maintainer=Guillaume Abrioux , RELEASE=main, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 5 04:55:13 localhost podman[289877]: 2025-12-05 09:55:13.172598348 +0000 UTC m=+0.085896707 container remove c5d185960b352e3ae8b27c824e4326bab431ff1544b5453fe0cbfecac550b657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_roentgen, com.redhat.component=rhceph-container, distribution-scope=public, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, name=rhceph, RELEASE=main) Dec 5 04:55:13 localhost systemd[1]: 
libpod-conmon-c5d185960b352e3ae8b27c824e4326bab431ff1544b5453fe0cbfecac550b657.scope: Deactivated successfully. Dec 5 04:55:13 localhost systemd[1]: var-lib-containers-storage-overlay-5a9b9178fff02f9c16cf465aa5882c6d47f44b2be6e1e9d619050dcdbe4ebaf4-merged.mount: Deactivated successfully. Dec 5 04:55:13 localhost ceph-mon[287909]: Reconfiguring crash.np0005546418 (monmap changed)... Dec 5 04:55:13 localhost ceph-mon[287909]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain Dec 5 04:55:13 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:13 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:13 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:13 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:55:13 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:55:13 localhost ceph-mon[287909]: mon.np0005546419@5(peon).osd e85 _set_new_cache_sizes cache_size:1020054522 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:55:13 localhost podman[289945]: Dec 5 04:55:13 localhost podman[289945]: 2025-12-05 09:55:13.866120038 +0000 UTC m=+0.072770104 container create 5309f5418b6efbcc2a466741e4b07cfa1beae6a6d7757b38194b72e511c8ae11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_hodgkin, CEPH_POINT_RELEASE=, name=rhceph, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-type=git, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, ceph=True) Dec 5 04:55:13 localhost systemd[1]: Started libpod-conmon-5309f5418b6efbcc2a466741e4b07cfa1beae6a6d7757b38194b72e511c8ae11.scope. Dec 5 04:55:13 localhost systemd[1]: Started libcrun container. 
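
[note] The "Reconfiguring crash.<host>" entries plus the `auth get-or-create client.crash.<host>` dispatches above are cephadm refreshing the per-host crash agents' keyrings; each crash client needs only the crash profile on mon and mgr. The manual equivalent of what the mgr dispatched, with the same entity and caps as logged:

    $ ceph auth get-or-create client.crash.np0005546419.localdomain \
          mon 'profile crash' mgr 'profile crash'
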
Dec 5 04:55:13 localhost podman[289945]: 2025-12-05 09:55:13.8368222 +0000 UTC m=+0.043472306 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:55:13 localhost podman[289945]: 2025-12-05 09:55:13.939953533 +0000 UTC m=+0.146603619 container init 5309f5418b6efbcc2a466741e4b07cfa1beae6a6d7757b38194b72e511c8ae11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_hodgkin, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , release=1763362218, distribution-scope=public, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container) Dec 5 04:55:13 localhost podman[289945]: 2025-12-05 09:55:13.946740326 +0000 UTC m=+0.153390402 container start 5309f5418b6efbcc2a466741e4b07cfa1beae6a6d7757b38194b72e511c8ae11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_hodgkin, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, description=Red Hat Ceph Storage 7, version=7, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 5 04:55:13 localhost podman[289945]: 2025-12-05 09:55:13.947006164 +0000 UTC m=+0.153656240 container attach 5309f5418b6efbcc2a466741e4b07cfa1beae6a6d7757b38194b72e511c8ae11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_hodgkin, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, RELEASE=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, CEPH_POINT_RELEASE=) Dec 5 04:55:13 localhost objective_hodgkin[289960]: 167 167 Dec 5 04:55:13 localhost systemd[1]: libpod-5309f5418b6efbcc2a466741e4b07cfa1beae6a6d7757b38194b72e511c8ae11.scope: Deactivated successfully. Dec 5 04:55:13 localhost podman[289945]: 2025-12-05 09:55:13.950685914 +0000 UTC m=+0.157336050 container died 5309f5418b6efbcc2a466741e4b07cfa1beae6a6d7757b38194b72e511c8ae11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_hodgkin, vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.41.4, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public) Dec 5 04:55:14 localhost podman[289965]: 2025-12-05 09:55:14.037449216 +0000 UTC m=+0.075664130 container remove 5309f5418b6efbcc2a466741e4b07cfa1beae6a6d7757b38194b72e511c8ae11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_hodgkin, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, architecture=x86_64, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhceph ceph, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z) Dec 5 
04:55:14 localhost systemd[1]: libpod-conmon-5309f5418b6efbcc2a466741e4b07cfa1beae6a6d7757b38194b72e511c8ae11.scope: Deactivated successfully. Dec 5 04:55:14 localhost systemd[1]: var-lib-containers-storage-overlay-39aff20649c878e7a59791617843ccc698a646bfab23a0ca40315c2436684fa4-merged.mount: Deactivated successfully. Dec 5 04:55:14 localhost nova_compute[280228]: 2025-12-05 09:55:14.214 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:55:14 localhost ceph-mon[287909]: Reconfiguring crash.np0005546419 (monmap changed)... Dec 5 04:55:14 localhost ceph-mon[287909]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain Dec 5 04:55:14 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:14 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:14 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 5 04:55:14 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:14 localhost podman[290042]: Dec 5 04:55:14 localhost podman[290042]: 2025-12-05 09:55:14.927976385 +0000 UTC m=+0.089070673 container create 30f1716a9657a6489b05ef103589a8491f0d07c464ec915ad75dce0f4a9e56ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_blackburn, ceph=True, RELEASE=main, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, vcs-type=git, version=7, name=rhceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc.) Dec 5 04:55:14 localhost systemd[1]: Started libpod-conmon-30f1716a9657a6489b05ef103589a8491f0d07c464ec915ad75dce0f4a9e56ec.scope. Dec 5 04:55:14 localhost systemd[1]: Started libcrun container. 
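
[note] The `auth get` for osd.0 dispatched above is the first step of the per-OSD reconfigure that follows ("Reconfiguring osd.0 (monmap changed)..." below): the mgr fetches the daemon's keyring and a minimal ceph.conf, then rewrites them under /var/lib/ceph/<fsid>/ on the daemon's host. Both pieces can be inspected by hand; `generate-minimal-conf` is the standard command for producing the stripped-down ceph.conf that cephadm-style deployments distribute:

    # read-only checks
    $ ceph auth get osd.0
    $ ceph config generate-minimal-conf
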
Dec 5 04:55:14 localhost podman[290042]: 2025-12-05 09:55:14.989415048 +0000 UTC m=+0.150509346 container init 30f1716a9657a6489b05ef103589a8491f0d07c464ec915ad75dce0f4a9e56ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_blackburn, RELEASE=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public) Dec 5 04:55:14 localhost podman[290042]: 2025-12-05 09:55:14.896215703 +0000 UTC m=+0.057310071 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:55:15 localhost podman[290042]: 2025-12-05 09:55:15.001372586 +0000 UTC m=+0.162466854 container start 30f1716a9657a6489b05ef103589a8491f0d07c464ec915ad75dce0f4a9e56ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_blackburn, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, version=7, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux ) Dec 5 04:55:15 localhost podman[290042]: 2025-12-05 09:55:15.002266904 +0000 UTC m=+0.163361212 container attach 30f1716a9657a6489b05ef103589a8491f0d07c464ec915ad75dce0f4a9e56ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_blackburn, CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=1763362218, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., name=rhceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph) Dec 5 04:55:15 localhost optimistic_blackburn[290057]: 167 167 Dec 5 04:55:15 localhost systemd[1]: libpod-30f1716a9657a6489b05ef103589a8491f0d07c464ec915ad75dce0f4a9e56ec.scope: Deactivated successfully. Dec 5 04:55:15 localhost podman[290042]: 2025-12-05 09:55:15.006097688 +0000 UTC m=+0.167191976 container died 30f1716a9657a6489b05ef103589a8491f0d07c464ec915ad75dce0f4a9e56ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_blackburn, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, architecture=x86_64, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 04:55:15 localhost podman[290062]: 2025-12-05 09:55:15.100642764 +0000 UTC m=+0.084750353 container remove 30f1716a9657a6489b05ef103589a8491f0d07c464ec915ad75dce0f4a9e56ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_blackburn, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, 
CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main) Dec 5 04:55:15 localhost systemd[1]: libpod-conmon-30f1716a9657a6489b05ef103589a8491f0d07c464ec915ad75dce0f4a9e56ec.scope: Deactivated successfully. Dec 5 04:55:15 localhost systemd[1]: var-lib-containers-storage-overlay-21b286921afb2494a3282d437c8bfe90fd6f32790328da86fe03c90f9da5753c-merged.mount: Deactivated successfully. Dec 5 04:55:15 localhost ceph-mon[287909]: Reconfiguring osd.0 (monmap changed)... Dec 5 04:55:15 localhost ceph-mon[287909]: Reconfiguring daemon osd.0 on np0005546419.localdomain Dec 5 04:55:15 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:15 localhost ceph-mon[287909]: Reconfiguring osd.3 (monmap changed)... Dec 5 04:55:15 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 5 04:55:15 localhost ceph-mon[287909]: Reconfiguring daemon osd.3 on np0005546419.localdomain Dec 5 04:55:16 localhost podman[290138]: Dec 5 04:55:16 localhost podman[290138]: 2025-12-05 09:55:16.025714138 +0000 UTC m=+0.081399232 container create 49651cd4922e7667d184abe018dc724080d7d0c9561f71065ad35ed1ec6b0763 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_faraday, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.buildah.version=1.41.4, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, description=Red Hat Ceph Storage 7) Dec 5 04:55:16 localhost systemd[1]: Started libpod-conmon-49651cd4922e7667d184abe018dc724080d7d0c9561f71065ad35ed1ec6b0763.scope. Dec 5 04:55:16 localhost systemd[1]: Started libcrun container. 
Dec 5 04:55:16 localhost podman[290138]: 2025-12-05 09:55:15.992854852 +0000 UTC m=+0.048539986 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:55:16 localhost podman[290138]: 2025-12-05 09:55:16.100958715 +0000 UTC m=+0.156643779 container init 49651cd4922e7667d184abe018dc724080d7d0c9561f71065ad35ed1ec6b0763 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_faraday, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, architecture=x86_64, build-date=2025-11-26T19:44:28Z) Dec 5 04:55:16 localhost podman[290138]: 2025-12-05 09:55:16.114293665 +0000 UTC m=+0.169978729 container start 49651cd4922e7667d184abe018dc724080d7d0c9561f71065ad35ed1ec6b0763 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_faraday, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218) Dec 5 04:55:16 localhost podman[290138]: 2025-12-05 09:55:16.114511242 +0000 UTC m=+0.170196326 container attach 49651cd4922e7667d184abe018dc724080d7d0c9561f71065ad35ed1ec6b0763 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_faraday, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, vcs-type=git, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_BRANCH=main) Dec 5 04:55:16 localhost peaceful_faraday[290153]: 167 167 Dec 5 04:55:16 localhost systemd[1]: libpod-49651cd4922e7667d184abe018dc724080d7d0c9561f71065ad35ed1ec6b0763.scope: Deactivated successfully. Dec 5 04:55:16 localhost podman[290138]: 2025-12-05 09:55:16.118649385 +0000 UTC m=+0.174334529 container died 49651cd4922e7667d184abe018dc724080d7d0c9561f71065ad35ed1ec6b0763 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_faraday, name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-type=git) Dec 5 04:55:16 localhost systemd[1]: var-lib-containers-storage-overlay-c410e9a5e44da22095c6ac4ef9448c2bfac87981b8a2cd2d87141d54464c9033-merged.mount: Deactivated successfully. 
Dec 5 04:55:16 localhost podman[290158]: 2025-12-05 09:55:16.225090058 +0000 UTC m=+0.092780254 container remove 49651cd4922e7667d184abe018dc724080d7d0c9561f71065ad35ed1ec6b0763 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_faraday, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, version=7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public) Dec 5 04:55:16 localhost systemd[1]: libpod-conmon-49651cd4922e7667d184abe018dc724080d7d0c9561f71065ad35ed1ec6b0763.scope: Deactivated successfully. Dec 5 04:55:16 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:16 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:16 localhost ceph-mon[287909]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)... 
Dec 5 04:55:16 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:55:16 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:55:16 localhost ceph-mon[287909]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain Dec 5 04:55:16 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:16 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' Dec 5 04:55:16 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:55:16 localhost ceph-mon[287909]: from='mgr.14190 ' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:55:16 localhost nova_compute[280228]: 2025-12-05 09:55:16.714 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:55:16 localhost podman[290227]: Dec 5 04:55:16 localhost podman[290227]: 2025-12-05 09:55:16.925277048 +0000 UTC m=+0.061229738 container create a659a1ec206ff7325a1b811e192cc2216e0256dc138a9ab9e99f4e1960e208d0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_chandrasekhar, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, version=7, ceph=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 5 04:55:16 localhost systemd[1]: Started libpod-conmon-a659a1ec206ff7325a1b811e192cc2216e0256dc138a9ab9e99f4e1960e208d0.scope. Dec 5 04:55:16 localhost systemd[1]: Started libcrun container. 
Dec 5 04:55:16 localhost podman[290227]: 2025-12-05 09:55:16.988611488 +0000 UTC m=+0.124564178 container init a659a1ec206ff7325a1b811e192cc2216e0256dc138a9ab9e99f4e1960e208d0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_chandrasekhar, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , name=rhceph, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 5 04:55:17 localhost silly_chandrasekhar[290242]: 167 167 Dec 5 04:55:17 localhost podman[290227]: 2025-12-05 09:55:16.900234367 +0000 UTC m=+0.036187067 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:55:17 localhost systemd[1]: libpod-a659a1ec206ff7325a1b811e192cc2216e0256dc138a9ab9e99f4e1960e208d0.scope: Deactivated successfully. Dec 5 04:55:17 localhost podman[290227]: 2025-12-05 09:55:17.00104261 +0000 UTC m=+0.136995330 container start a659a1ec206ff7325a1b811e192cc2216e0256dc138a9ab9e99f4e1960e208d0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_chandrasekhar, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, vcs-type=git, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Dec 5 04:55:17 localhost podman[290227]: 2025-12-05 09:55:17.001721481 +0000 UTC m=+0.137674221 container attach a659a1ec206ff7325a1b811e192cc2216e0256dc138a9ab9e99f4e1960e208d0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_chandrasekhar, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-type=git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, distribution-scope=public) Dec 5 04:55:17 localhost podman[290227]: 2025-12-05 09:55:17.003659019 +0000 UTC m=+0.139611719 container died a659a1ec206ff7325a1b811e192cc2216e0256dc138a9ab9e99f4e1960e208d0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_chandrasekhar, RELEASE=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.41.4, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:55:17 localhost podman[290247]: 2025-12-05 09:55:17.103310768 +0000 UTC m=+0.085138475 container remove a659a1ec206ff7325a1b811e192cc2216e0256dc138a9ab9e99f4e1960e208d0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_chandrasekhar, GIT_CLEAN=True, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., architecture=x86_64, ceph=True) Dec 5 
04:55:17 localhost systemd[1]: libpod-conmon-a659a1ec206ff7325a1b811e192cc2216e0256dc138a9ab9e99f4e1960e208d0.scope: Deactivated successfully. Dec 5 04:55:17 localhost systemd[1]: var-lib-containers-storage-overlay-b6704df0b9020b085772c6b9897725d24068953e234e26d7d39f063957176766-merged.mount: Deactivated successfully. Dec 5 04:55:17 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x557f9a90b600 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0 Dec 5 04:55:17 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0 Dec 5 04:55:17 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0 Dec 5 04:55:17 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x557fa43bc000 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Dec 5 04:55:17 localhost ceph-mon[287909]: mon.np0005546419@5(peon) e7 my rank is now 4 (was 5) Dec 5 04:55:17 localhost ceph-mon[287909]: log_channel(cluster) log [INF] : mon.np0005546419 calling monitor election Dec 5 04:55:17 localhost ceph-mon[287909]: paxos.4).electionLogic(32) init, last seen epoch 32 Dec 5 04:55:17 localhost ceph-mon[287909]: mon.np0005546419@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:55:17 localhost ceph-mon[287909]: mon.np0005546419@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:55:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:55:18 localhost podman[290317]: Dec 5 04:55:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:55:18 localhost podman[290317]: 2025-12-05 09:55:18.117177206 +0000 UTC m=+0.059632970 container create 1f4fa8c250a0fe799adfebb08bc151ec615abf2fd20160e5ede041ad6b22cbdd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_vaughan, GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, ceph=True, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, name=rhceph, release=1763362218, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4) Dec 5 04:55:18 localhost systemd[1]: Started libpod-conmon-1f4fa8c250a0fe799adfebb08bc151ec615abf2fd20160e5ede041ad6b22cbdd.scope. Dec 5 04:55:18 localhost podman[290317]: 2025-12-05 09:55:18.090414993 +0000 UTC m=+0.032870767 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:55:18 localhost systemd[1]: Started libcrun container. 
Dec 5 04:55:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:55:18 localhost podman[290317]: 2025-12-05 09:55:18.213083472 +0000 UTC m=+0.155539226 container init 1f4fa8c250a0fe799adfebb08bc151ec615abf2fd20160e5ede041ad6b22cbdd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_vaughan, io.openshift.expose-services=, GIT_BRANCH=main, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , version=7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, release=1763362218, description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.buildah.version=1.41.4) Dec 5 04:55:18 localhost podman[290317]: 2025-12-05 09:55:18.22334024 +0000 UTC m=+0.165796014 container start 1f4fa8c250a0fe799adfebb08bc151ec615abf2fd20160e5ede041ad6b22cbdd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_vaughan, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, vendor=Red Hat, Inc., RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, version=7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, name=rhceph) Dec 5 04:55:18 localhost podman[290317]: 2025-12-05 09:55:18.224085292 +0000 UTC m=+0.166541036 container attach 1f4fa8c250a0fe799adfebb08bc151ec615abf2fd20160e5ede041ad6b22cbdd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_vaughan, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.buildah.version=1.41.4, 
url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:55:18 localhost pedantic_vaughan[290353]: 167 167 Dec 5 04:55:18 localhost podman[290330]: 2025-12-05 09:55:18.226506305 +0000 UTC m=+0.109299299 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 04:55:18 localhost systemd[1]: libpod-1f4fa8c250a0fe799adfebb08bc151ec615abf2fd20160e5ede041ad6b22cbdd.scope: Deactivated successfully. 
Dec 5 04:55:18 localhost podman[290329]: 2025-12-05 09:55:18.196306459 +0000 UTC m=+0.082666631 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:55:18 localhost podman[290329]: 2025-12-05 09:55:18.281640098 +0000 UTC m=+0.168000280 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:55:18 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
Dec 5 04:55:18 localhost podman[290330]: 2025-12-05 09:55:18.310606807 +0000 UTC m=+0.193399791 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:55:18 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:55:18 localhost systemd[1]: tmp-crun.N3bzoU.mount: Deactivated successfully. 
Dec 5 04:55:18 localhost podman[290361]: 2025-12-05 09:55:18.334202234 +0000 UTC m=+0.129844175 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:55:18 localhost podman[290361]: 2025-12-05 09:55:18.342505593 +0000 UTC m=+0.138147524 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent) Dec 5 04:55:18 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:55:18 localhost podman[290385]: 2025-12-05 09:55:18.397146883 +0000 UTC m=+0.152214337 container died 1f4fa8c250a0fe799adfebb08bc151ec615abf2fd20160e5ede041ad6b22cbdd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_vaughan, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 5 04:55:18 localhost podman[290385]: 2025-12-05 09:55:18.429486602 +0000 UTC m=+0.184554056 container remove 1f4fa8c250a0fe799adfebb08bc151ec615abf2fd20160e5ede041ad6b22cbdd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_vaughan, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, release=1763362218, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph) Dec 5 04:55:18 localhost systemd[1]: libpod-conmon-1f4fa8c250a0fe799adfebb08bc151ec615abf2fd20160e5ede041ad6b22cbdd.scope: Deactivated successfully. Dec 5 04:55:19 localhost systemd[1]: var-lib-containers-storage-overlay-4372def9c161815976f3b2f50347762955027e9b15ea7b01d138e29b2d623542-merged.mount: Deactivated successfully. 
Dec 5 04:55:19 localhost nova_compute[280228]: 2025-12-05 09:55:19.259 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:55:19 localhost podman[239519]: time="2025-12-05T09:55:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:55:19 localhost podman[239519]: @ - - [05/Dec/2025:09:55:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1" Dec 5 04:55:19 localhost podman[239519]: @ - - [05/Dec/2025:09:55:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18699 "" "Go-http-client/1.1" Dec 5 04:55:21 localhost nova_compute[280228]: 2025-12-05 09:55:21.769 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:55:22 localhost ceph-mon[287909]: mon.np0005546419@4(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:55:22 localhost ceph-mon[287909]: mon.np0005546419@4(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:55:22 localhost ceph-mon[287909]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain Dec 5 04:55:22 localhost ceph-mon[287909]: Reconfiguring mon.np0005546419 (monmap changed)... Dec 5 04:55:22 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:55:22 localhost ceph-mon[287909]: Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain Dec 5 04:55:22 localhost ceph-mon[287909]: Remove daemons mon.np0005546415 Dec 5 04:55:22 localhost ceph-mon[287909]: Safe to remove mon.np0005546415: new quorum should be ['np0005546418', 'np0005546416', 'np0005546421', 'np0005546420', 'np0005546419'] (from ['np0005546418', 'np0005546416', 'np0005546421', 'np0005546420', 'np0005546419']) Dec 5 04:55:22 localhost ceph-mon[287909]: Removing monitor np0005546415 from monmap... 
Dec 5 04:55:22 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "mon rm", "name": "np0005546415"} : dispatch Dec 5 04:55:22 localhost ceph-mon[287909]: Removing daemon mon.np0005546415 from np0005546415.localdomain -- ports [] Dec 5 04:55:22 localhost ceph-mon[287909]: mon.np0005546418 calling monitor election Dec 5 04:55:22 localhost ceph-mon[287909]: mon.np0005546420 calling monitor election Dec 5 04:55:22 localhost ceph-mon[287909]: mon.np0005546416 calling monitor election Dec 5 04:55:22 localhost ceph-mon[287909]: mon.np0005546419 calling monitor election Dec 5 04:55:22 localhost ceph-mon[287909]: mon.np0005546421 calling monitor election Dec 5 04:55:22 localhost ceph-mon[287909]: mon.np0005546418 is new leader, mons np0005546418,np0005546416,np0005546421,np0005546420,np0005546419 in quorum (ranks 0,1,2,3,4) Dec 5 04:55:22 localhost ceph-mon[287909]: overall HEALTH_OK Dec 5 04:55:22 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:22 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:22 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:55:23 localhost ceph-mon[287909]: mon.np0005546419@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054727 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:55:24 localhost nova_compute[280228]: 2025-12-05 09:55:24.287 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:55:24 localhost ceph-mon[287909]: Reconfiguring crash.np0005546420 (monmap changed)... Dec 5 04:55:24 localhost ceph-mon[287909]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain Dec 5 04:55:25 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:25 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:25 localhost ceph-mon[287909]: Reconfiguring osd.1 (monmap changed)... Dec 5 04:55:25 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 5 04:55:25 localhost ceph-mon[287909]: Reconfiguring daemon osd.1 on np0005546420.localdomain Dec 5 04:55:26 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:26 localhost ceph-mon[287909]: Removed label mon from host np0005546415.localdomain Dec 5 04:55:26 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:26 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:26 localhost ceph-mon[287909]: Reconfiguring osd.4 (monmap changed)... 
Dec 5 04:55:26 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 5 04:55:26 localhost ceph-mon[287909]: Reconfiguring daemon osd.4 on np0005546420.localdomain Dec 5 04:55:26 localhost nova_compute[280228]: 2025-12-05 09:55:26.807 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:55:27 localhost openstack_network_exporter[241668]: ERROR 09:55:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:55:27 localhost openstack_network_exporter[241668]: ERROR 09:55:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:55:27 localhost openstack_network_exporter[241668]: ERROR 09:55:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:55:27 localhost openstack_network_exporter[241668]: ERROR 09:55:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:55:27 localhost openstack_network_exporter[241668]: Dec 5 04:55:27 localhost openstack_network_exporter[241668]: ERROR 09:55:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:55:27 localhost openstack_network_exporter[241668]: Dec 5 04:55:27 localhost podman[290411]: 2025-12-05 09:55:27.199632268 +0000 UTC m=+0.079197306 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:55:27 localhost podman[290411]: 2025-12-05 09:55:27.211668959 +0000 UTC m=+0.091234037 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 04:55:27 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:55:27 localhost podman[290410]: 2025-12-05 09:55:27.294495543 +0000 UTC m=+0.179180105 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3) Dec 5 04:55:27 localhost podman[290410]: 2025-12-05 09:55:27.331786021 +0000 UTC m=+0.216470553 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:55:27 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:55:28 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:28 localhost ceph-mon[287909]: Removed label mgr from host np0005546415.localdomain Dec 5 04:55:28 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:28 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:28 localhost ceph-mon[287909]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)... Dec 5 04:55:28 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:55:28 localhost ceph-mon[287909]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain Dec 5 04:55:28 localhost ceph-mon[287909]: mon.np0005546419@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:55:29 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:29 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:29 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:55:29 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:29 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:29 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:29 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:55:29 localhost nova_compute[280228]: 2025-12-05 09:55:29.326 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:55:30 localhost ceph-mon[287909]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)... 
Dec 5 04:55:30 localhost ceph-mon[287909]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 5 04:55:30 localhost ceph-mon[287909]: Removed label _admin from host np0005546415.localdomain
Dec 5 04:55:30 localhost ceph-mon[287909]: Reconfiguring mon.np0005546420 (monmap changed)...
Dec 5 04:55:30 localhost ceph-mon[287909]: Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 5 04:55:30 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:30 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:30 localhost ceph-mon[287909]: Reconfiguring crash.np0005546421 (monmap changed)...
Dec 5 04:55:30 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:55:30 localhost ceph-mon[287909]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 5 04:55:31 localhost nova_compute[280228]: 2025-12-05 09:55:31.858 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:55:32 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:32 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:32 localhost ceph-mon[287909]: Reconfiguring osd.2 (monmap changed)...
Dec 5 04:55:32 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 5 04:55:32 localhost ceph-mon[287909]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 5 04:55:33 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:33 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:33 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 5 04:55:34 localhost ceph-mon[287909]: mon.np0005546419@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:55:34 localhost ceph-mon[287909]: Reconfiguring osd.5 (monmap changed)...
Dec 5 04:55:34 localhost ceph-mon[287909]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 5 04:55:34 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:34 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:34 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:55:34 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:34 localhost nova_compute[280228]: 2025-12-05 09:55:34.381 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:55:35 localhost ceph-mon[287909]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 5 04:55:35 localhost ceph-mon[287909]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 5 04:55:35 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:35 localhost ceph-mon[287909]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 5 04:55:35 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:55:35 localhost ceph-mon[287909]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 5 04:55:35 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:35 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:35 localhost ceph-mon[287909]: Reconfiguring mon.np0005546421 (monmap changed)...
Dec 5 04:55:35 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:55:35 localhost ceph-mon[287909]: Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 5 04:55:36 localhost nova_compute[280228]: 2025-12-05 09:55:36.905 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:55:37 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:37 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:37 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:37 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:37 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:37 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:38 localhost ceph-mon[287909]: mon.np0005546419@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:55:39 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:39 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:39 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:55:39 localhost ceph-mon[287909]: Removing np0005546415.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:55:39 localhost ceph-mon[287909]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf
Dec 5 04:55:39 localhost ceph-mon[287909]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:55:39 localhost ceph-mon[287909]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:55:39 localhost ceph-mon[287909]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:55:39 localhost ceph-mon[287909]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:55:39 localhost ceph-mon[287909]: Removing np0005546415.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:55:39 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:39 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:39 localhost nova_compute[280228]: 2025-12-05 09:55:39.383 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:55:40 localhost ceph-mon[287909]: Removing np0005546415.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:55:40 localhost ceph-mon[287909]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:55:40 localhost ceph-mon[287909]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:55:40 localhost ceph-mon[287909]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:55:40 localhost ceph-mon[287909]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:55:40 localhost ceph-mon[287909]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:40 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 04:55:41 localhost ceph-mon[287909]: Added label _no_schedule to host np0005546415.localdomain
Dec 5 04:55:41 localhost ceph-mon[287909]: Removing daemon crash.np0005546415 from np0005546415.localdomain -- ports []
Dec 5 04:55:41 localhost ceph-mon[287909]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005546415.localdomain
Dec 5 04:55:41 localhost systemd[1]: tmp-crun.x1KJS3.mount: Deactivated successfully.
Dec 5 04:55:41 localhost podman[290777]: 2025-12-05 09:55:41.214535423 +0000 UTC m=+0.097957369 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 5 04:55:41 localhost podman[290777]: 2025-12-05 09:55:41.230601804 +0000 UTC m=+0.114023740 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible)
Dec 5 04:55:41 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 04:55:41 localhost nova_compute[280228]: 2025-12-05 09:55:41.963 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:55:42 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth rm", "entity": "client.crash.np0005546415.localdomain"} : dispatch
Dec 5 04:55:42 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005546415.localdomain"}]': finished
Dec 5 04:55:42 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:55:43 localhost podman[290798]: 2025-12-05 09:55:43.191401412 +0000 UTC m=+0.080273078 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 04:55:43 localhost podman[290798]: 2025-12-05 09:55:43.228207186 +0000 UTC m=+0.117078862 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 5 04:55:43 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 04:55:43 localhost ceph-mon[287909]: Removing key for client.crash.np0005546415.localdomain
Dec 5 04:55:43 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:43 localhost ceph-mon[287909]: Removing daemon mgr.np0005546415.knqtle from np0005546415.localdomain -- ports [9283, 8765]
Dec 5 04:55:43 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:43 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain"} : dispatch
Dec 5 04:55:43 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain"}]': finished
Dec 5 04:55:43 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth rm", "entity": "mgr.np0005546415.knqtle"} : dispatch
Dec 5 04:55:43 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005546415.knqtle"}]': finished
Dec 5 04:55:43 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:43 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:43 localhost ceph-mon[287909]: mon.np0005546419@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:55:44 localhost ceph-mon[287909]: Removed host np0005546415.localdomain
Dec 5 04:55:44 localhost ceph-mon[287909]: Removing key for mgr.np0005546415.knqtle
Dec 5 04:55:44 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:55:44 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:44 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546416.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:55:44 localhost nova_compute[280228]: 2025-12-05 09:55:44.418 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:55:44 localhost sshd[290852]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:55:44 localhost systemd-logind[760]: New session 65 of user tripleo-admin.
Dec 5 04:55:44 localhost systemd[1]: Created slice User Slice of UID 1003.
Dec 5 04:55:44 localhost systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 5 04:55:44 localhost systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 5 04:55:44 localhost systemd[1]: Starting User Manager for UID 1003...
Dec 5 04:55:44 localhost systemd[290856]: Queued start job for default target Main User Target.
Dec 5 04:55:44 localhost systemd[290856]: Created slice User Application Slice.
Dec 5 04:55:44 localhost systemd[290856]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 5 04:55:44 localhost systemd[290856]: Started Daily Cleanup of User's Temporary Directories.
Dec 5 04:55:44 localhost systemd[290856]: Reached target Paths.
Dec 5 04:55:44 localhost systemd[290856]: Reached target Timers.
Dec 5 04:55:44 localhost systemd[290856]: Starting D-Bus User Message Bus Socket...
Dec 5 04:55:44 localhost systemd[290856]: Starting Create User's Volatile Files and Directories...
Dec 5 04:55:44 localhost systemd[290856]: Finished Create User's Volatile Files and Directories.
Dec 5 04:55:44 localhost systemd[290856]: Listening on D-Bus User Message Bus Socket.
Dec 5 04:55:44 localhost systemd[290856]: Reached target Sockets.
Dec 5 04:55:44 localhost systemd[290856]: Reached target Basic System.
Dec 5 04:55:44 localhost systemd[290856]: Reached target Main User Target.
Dec 5 04:55:44 localhost systemd[290856]: Startup finished in 166ms.
Dec 5 04:55:44 localhost systemd[1]: Started User Manager for UID 1003.
Dec 5 04:55:44 localhost systemd[1]: Started Session 65 of User tripleo-admin.
Dec 5 04:55:45 localhost ceph-mon[287909]: Reconfiguring crash.np0005546416 (monmap changed)...
Dec 5 04:55:45 localhost ceph-mon[287909]: Reconfiguring daemon crash.np0005546416 on np0005546416.localdomain
Dec 5 04:55:45 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:45 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:45 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:55:45 localhost python3[290998]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line= - ip_netmask: 172.18.0.103/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 5 04:55:46 localhost ceph-mon[287909]: Reconfiguring mon.np0005546416 (monmap changed)...
Dec 5 04:55:46 localhost ceph-mon[287909]: Reconfiguring daemon mon.np0005546416 on np0005546416.localdomain
Dec 5 04:55:46 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:46 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:46 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546416.kmqcnq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:55:46 localhost python3[291144]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.103/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:55:47 localhost nova_compute[280228]: 2025-12-05 09:55:47.001 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:55:47 localhost python3[291289]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.103 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 5 04:55:47 localhost ceph-mon[287909]: Reconfiguring mgr.np0005546416.kmqcnq (monmap changed)...
Dec 5 04:55:47 localhost ceph-mon[287909]: Reconfiguring daemon mgr.np0005546416.kmqcnq on np0005546416.localdomain
Dec 5 04:55:47 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:47 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:47 localhost ceph-mon[287909]: Reconfiguring mon.np0005546418 (monmap changed)...
Dec 5 04:55:47 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:55:47 localhost ceph-mon[287909]: Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain
Dec 5 04:55:47 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:48 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:48 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:48 localhost ceph-mon[287909]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 5 04:55:48 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:55:48 localhost ceph-mon[287909]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 5 04:55:48 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:48 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:48 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:55:48 localhost ceph-mon[287909]: mon.np0005546419@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:55:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 04:55:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:55:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 04:55:49 localhost systemd[1]: tmp-crun.JqCJhp.mount: Deactivated successfully.
Dec 5 04:55:49 localhost podman[291291]: 2025-12-05 09:55:49.180100284 +0000 UTC m=+0.060324170 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 04:55:49 localhost podman[291292]: 2025-12-05 09:55:49.238456765 +0000 UTC m=+0.120677401 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 5 04:55:49 localhost podman[291292]: 2025-12-05 09:55:49.243801855 +0000 UTC m=+0.126022521 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 04:55:49 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 04:55:49 localhost podman[291293]: 2025-12-05 09:55:49.259361881 +0000 UTC m=+0.138732671 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 5 04:55:49 localhost podman[291291]: 2025-12-05 09:55:49.26700316 +0000 UTC m=+0.147227046 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 04:55:49 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 04:55:49 localhost podman[291293]: 2025-12-05 09:55:49.297688641 +0000 UTC m=+0.177059441 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 5 04:55:49 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 04:55:49 localhost nova_compute[280228]: 2025-12-05 09:55:49.452 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:55:49 localhost podman[291419]:
Dec 5 04:55:49 localhost podman[291419]: 2025-12-05 09:55:49.821591054 +0000 UTC m=+0.076985480 container create b8c0dc42abb8ea03a1520d14b7b038ae41fdc5e61fda99b76c22db974f66a36b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_merkle, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, RELEASE=main, description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, version=7, GIT_CLEAN=True)
Dec 5 04:55:49 localhost systemd[1]: Started libpod-conmon-b8c0dc42abb8ea03a1520d14b7b038ae41fdc5e61fda99b76c22db974f66a36b.scope.
Dec 5 04:55:49 localhost podman[239519]: time="2025-12-05T09:55:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 04:55:49 localhost systemd[1]: Started libcrun container.
Dec 5 04:55:49 localhost podman[291419]: 2025-12-05 09:55:49.787998956 +0000 UTC m=+0.043393412 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:55:49 localhost podman[291419]: 2025-12-05 09:55:49.893208852 +0000 UTC m=+0.148603268 container init b8c0dc42abb8ea03a1520d14b7b038ae41fdc5e61fda99b76c22db974f66a36b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_merkle, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, version=7, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=1763362218, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:55:49 localhost podman[291419]: 2025-12-05 09:55:49.903793249 +0000 UTC m=+0.159187665 container start b8c0dc42abb8ea03a1520d14b7b038ae41fdc5e61fda99b76c22db974f66a36b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_merkle, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=)
Dec 5 04:55:49 localhost podman[291419]: 2025-12-05 09:55:49.904089959 +0000 UTC m=+0.159484385 container attach b8c0dc42abb8ea03a1520d14b7b038ae41fdc5e61fda99b76c22db974f66a36b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_merkle, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, release=1763362218, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:55:49 localhost friendly_merkle[291434]: 167 167
Dec 5 04:55:49 localhost systemd[1]: libpod-b8c0dc42abb8ea03a1520d14b7b038ae41fdc5e61fda99b76c22db974f66a36b.scope: Deactivated successfully.
Dec 5 04:55:49 localhost podman[239519]: @ - - [05/Dec/2025:09:55:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156071 "" "Go-http-client/1.1"
Dec 5 04:55:49 localhost podman[291419]: 2025-12-05 09:55:49.911402067 +0000 UTC m=+0.166796503 container died b8c0dc42abb8ea03a1520d14b7b038ae41fdc5e61fda99b76c22db974f66a36b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_merkle, build-date=2025-11-26T19:44:28Z, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_BRANCH=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, architecture=x86_64)
Dec 5 04:55:49 localhost podman[239519]: @ - - [05/Dec/2025:09:55:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19029 "" "Go-http-client/1.1"
Dec 5 04:55:50 localhost podman[291439]: 2025-12-05 09:55:50.060913042 +0000 UTC m=+0.136812254 container remove b8c0dc42abb8ea03a1520d14b7b038ae41fdc5e61fda99b76c22db974f66a36b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_merkle, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_CLEAN=True, name=rhceph, version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, release=1763362218, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph)
Dec 5 04:55:50 localhost systemd[1]: libpod-conmon-b8c0dc42abb8ea03a1520d14b7b038ae41fdc5e61fda99b76c22db974f66a36b.scope: Deactivated successfully.
Dec 5 04:55:50 localhost ceph-mon[287909]: Reconfiguring crash.np0005546418 (monmap changed)...
Dec 5 04:55:50 localhost ceph-mon[287909]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 5 04:55:50 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:50 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:55:50 localhost ceph-mon[287909]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 5 04:55:50 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:55:50 localhost ceph-mon[287909]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: (Original Log Time 2025/12/05-09:55:50.705218) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928550705344, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 12588, "num_deletes": 773, "total_data_size": 20016754, "memory_usage": 20822560, "flush_reason": "Manual Compaction"}
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 5 04:55:50 localhost podman[291509]:
Dec 5 04:55:50 localhost podman[291509]: 2025-12-05 09:55:50.770591027 +0000 UTC m=+0.104883828 container create 6e8a194a4bfe8e8beda85d8e7943e011bfc4cc78bb7adecec659d3249650fad0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_lamport, architecture=x86_64, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git, version=7, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7)
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928550771643, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12116777, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 12593, "table_properties": {"data_size": 12060869, "index_size": 29178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 266075, "raw_average_key_size": 25, "raw_value_size": 11888750, "raw_average_value_size": 1160, "num_data_blocks": 1110, "num_entries": 10245, "num_filter_entries": 10245, "num_deletions": 772, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928483, "oldest_key_time": 1764928483, "file_creation_time": 1764928550, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "08658599-4974-4077-8148-d94e3d2f3159", "db_session_id": "N486XW7RA1QDBNFF474Z", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 66523 microseconds, and 27415 cpu microseconds.
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: (Original Log Time 2025/12/05-09:55:50.771736) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12116777 bytes OK
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: (Original Log Time 2025/12/05-09:55:50.771769) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: (Original Log Time 2025/12/05-09:55:50.773952) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: (Original Log Time 2025/12/05-09:55:50.773983) EVENT_LOG_v1 {"time_micros": 1764928550773975, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: (Original Log Time 2025/12/05-09:55:50.774005) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 19934453, prev total WAL file size 19935202, number of live WAL files 2.
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: (Original Log Time 2025/12/05-09:55:50.777677) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353036' seq:72057594037927935, type:22 .. '6D6772737461740033373538' seq:0, type:0; will stop at (end)
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(2012B)]
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928550777765, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12118789, "oldest_snapshot_seqno": -1}
Dec 5 04:55:50 localhost podman[291509]: 2025-12-05 09:55:50.703716251 +0000 UTC m=+0.038009122 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:55:50 localhost systemd[1]: Started libpod-conmon-6e8a194a4bfe8e8beda85d8e7943e011bfc4cc78bb7adecec659d3249650fad0.scope.
Dec 5 04:55:50 localhost systemd[1]: Started libcrun container.
Dec 5 04:55:50 localhost podman[291509]: 2025-12-05 09:55:50.883558194 +0000 UTC m=+0.217851015 container init 6e8a194a4bfe8e8beda85d8e7943e011bfc4cc78bb7adecec659d3249650fad0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_lamport, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, release=1763362218, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc.)
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9476 keys, 12104902 bytes, temperature: kUnknown Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928550883544, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12104902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12050990, "index_size": 29107, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23749, "raw_key_size": 252664, "raw_average_key_size": 26, "raw_value_size": 11888808, "raw_average_value_size": 1254, "num_data_blocks": 1108, "num_entries": 9476, "num_filter_entries": 9476, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928483, "oldest_key_time": 0, "file_creation_time": 1764928550, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "08658599-4974-4077-8148-d94e3d2f3159", "db_session_id": "N486XW7RA1QDBNFF474Z", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: (Original Log Time 2025/12/05-09:55:50.883948) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12104902 bytes Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: (Original Log Time 2025/12/05-09:55:50.886345) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.5 rd, 114.3 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.6, 0.0 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10250, records dropped: 774 output_compression: NoCompression Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: (Original Log Time 2025/12/05-09:55:50.886379) EVENT_LOG_v1 {"time_micros": 1764928550886365, "job": 4, "event": "compaction_finished", "compaction_time_micros": 105884, "compaction_time_cpu_micros": 23115, "output_level": 6, "num_output_files": 1, "total_output_size": 12104902, "num_input_records": 10250, "num_output_records": 9476, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928550888295, "job": 4, "event": "table_file_deletion", "file_number": 14} Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928550888428, "job": 4, "event": "table_file_deletion", "file_number": 8} Dec 5 04:55:50 localhost ceph-mon[287909]: rocksdb: (Original Log Time 2025/12/05-09:55:50.777594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:55:50 localhost podman[291509]: 2025-12-05 09:55:50.892496532 +0000 UTC m=+0.226789363 container start 6e8a194a4bfe8e8beda85d8e7943e011bfc4cc78bb7adecec659d3249650fad0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_lamport, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
GIT_BRANCH=main) Dec 5 04:55:50 localhost podman[291509]: 2025-12-05 09:55:50.892782311 +0000 UTC m=+0.227075322 container attach 6e8a194a4bfe8e8beda85d8e7943e011bfc4cc78bb7adecec659d3249650fad0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_lamport, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1763362218, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7) Dec 5 04:55:50 localhost lucid_lamport[291524]: 167 167 Dec 5 04:55:50 localhost systemd[1]: libpod-6e8a194a4bfe8e8beda85d8e7943e011bfc4cc78bb7adecec659d3249650fad0.scope: Deactivated successfully. Dec 5 04:55:50 localhost podman[291509]: 2025-12-05 09:55:50.898910195 +0000 UTC m=+0.233203026 container died 6e8a194a4bfe8e8beda85d8e7943e011bfc4cc78bb7adecec659d3249650fad0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_lamport, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.tags=rhceph ceph, release=1763362218, GIT_BRANCH=main, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64) Dec 5 04:55:50 localhost podman[291529]: 2025-12-05 09:55:50.986058589 +0000 UTC m=+0.081869636 container remove 6e8a194a4bfe8e8beda85d8e7943e011bfc4cc78bb7adecec659d3249650fad0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_lamport, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, 
architecture=x86_64, RELEASE=main, version=7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 04:55:50 localhost systemd[1]: libpod-conmon-6e8a194a4bfe8e8beda85d8e7943e011bfc4cc78bb7adecec659d3249650fad0.scope: Deactivated successfully. Dec 5 04:55:51 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:51 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:51 localhost ceph-mon[287909]: Reconfiguring osd.0 (monmap changed)... Dec 5 04:55:51 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 5 04:55:51 localhost ceph-mon[287909]: Reconfiguring daemon osd.0 on np0005546419.localdomain Dec 5 04:55:51 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:51 localhost systemd[1]: var-lib-containers-storage-overlay-c8f9cb3c3b8760f8605b3bcef9861b149199c0b4f30c43a6f3a930777f875a9c-merged.mount: Deactivated successfully. Dec 5 04:55:51 localhost podman[291606]: Dec 5 04:55:51 localhost podman[291606]: 2025-12-05 09:55:51.830064392 +0000 UTC m=+0.056757383 container create 48f7568873953908019231535c9789997c9cb4be74564389d0985dd10087111d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_chatterjee, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, ceph=True, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 5 04:55:51 localhost systemd[1]: Started libpod-conmon-48f7568873953908019231535c9789997c9cb4be74564389d0985dd10087111d.scope. Dec 5 04:55:51 localhost systemd[1]: Started libcrun container. 
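The job-4 compaction summary above can be cross-checked from its EVENT_LOG_v1 payload: amplification is computed against input bytes, and throughput against wall-clock compaction time (bytes per microsecond equals MB/s with MB = 10^6 bytes). A verification sketch using only figures quoted in the log:

```python
# Figures from the job-4 compaction_started / compaction_finished events above.
input_bytes  = 12_118_789   # input_data_size
output_bytes = 12_104_902   # total_output_size
micros       = 105_884      # compaction_time_micros

write_amp = output_bytes / input_bytes                   # ~1.0, as logged
rw_amp    = (input_bytes + output_bytes) / input_bytes   # ~2.0, as logged
rd_mb_s   = input_bytes / micros                         # ~114.5 rd, as logged
wr_mb_s   = output_bytes / micros                        # ~114.3 wr, as logged
print(f'{write_amp:.1f} {rw_amp:.1f} {rd_mb_s:.1f} {wr_mb_s:.1f}')
```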
Dec 5 04:55:51 localhost podman[291606]: 2025-12-05 09:55:51.802739602 +0000 UTC m=+0.029432603 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:55:51 localhost podman[291606]: 2025-12-05 09:55:51.941755782 +0000 UTC m=+0.168448793 container init 48f7568873953908019231535c9789997c9cb4be74564389d0985dd10087111d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_chatterjee, RELEASE=main, release=1763362218, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, architecture=x86_64, distribution-scope=public) Dec 5 04:55:51 localhost dazzling_chatterjee[291621]: 167 167 Dec 5 04:55:51 localhost systemd[1]: libpod-48f7568873953908019231535c9789997c9cb4be74564389d0985dd10087111d.scope: Deactivated successfully. Dec 5 04:55:51 localhost podman[291606]: 2025-12-05 09:55:51.954346129 +0000 UTC m=+0.181039120 container start 48f7568873953908019231535c9789997c9cb4be74564389d0985dd10087111d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_chatterjee, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7) Dec 5 04:55:51 localhost podman[291606]: 2025-12-05 09:55:51.954706691 +0000 UTC m=+0.181399732 container attach 48f7568873953908019231535c9789997c9cb4be74564389d0985dd10087111d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_chatterjee, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
com.redhat.component=rhceph-container, name=rhceph, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218) Dec 5 04:55:51 localhost podman[291606]: 2025-12-05 09:55:51.955906226 +0000 UTC m=+0.182599217 container died 48f7568873953908019231535c9789997c9cb4be74564389d0985dd10087111d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_chatterjee, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, ceph=True, vcs-type=git, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public) Dec 5 04:55:52 localhost nova_compute[280228]: 2025-12-05 09:55:52.055 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:55:52 localhost podman[291626]: 2025-12-05 09:55:52.082010768 +0000 UTC m=+0.119551797 container remove 48f7568873953908019231535c9789997c9cb4be74564389d0985dd10087111d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_chatterjee, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, architecture=x86_64, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_BRANCH=main, version=7, ceph=True, RELEASE=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured 
and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True) Dec 5 04:55:52 localhost systemd[1]: libpod-conmon-48f7568873953908019231535c9789997c9cb4be74564389d0985dd10087111d.scope: Deactivated successfully. Dec 5 04:55:52 localhost systemd[1]: var-lib-containers-storage-overlay-2590dca7c5d703ca9ff01a80a50943e72a0a7a83c1b34a07758eb18182739ba6-merged.mount: Deactivated successfully. Dec 5 04:55:52 localhost ceph-mon[287909]: Saving service mon spec with placement label:mon Dec 5 04:55:52 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:52 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:55:52 localhost ceph-mon[287909]: Reconfiguring osd.3 (monmap changed)... Dec 5 04:55:52 localhost ceph-mon[287909]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 5 04:55:52 localhost podman[291703]: Dec 5 04:55:52 localhost podman[291703]: 2025-12-05 09:55:52.908747344 +0000 UTC m=+0.072080113 container create 2efe4f15e3700394878730e89e34b2244f772d8d50f3cc621c7b4c0675c4624c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mccarthy, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat Ceph Storage 7, release=1763362218, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph) Dec 5 04:55:52 localhost systemd[1]: Started libpod-conmon-2efe4f15e3700394878730e89e34b2244f772d8d50f3cc621c7b4c0675c4624c.scope. Dec 5 04:55:52 localhost systemd[1]: Started libcrun container. 
Dec 5 04:55:52 localhost podman[291703]: 2025-12-05 09:55:52.88494771 +0000 UTC m=+0.048280559 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:55:52 localhost podman[291703]: 2025-12-05 09:55:52.992336801 +0000 UTC m=+0.155669570 container init 2efe4f15e3700394878730e89e34b2244f772d8d50f3cc621c7b4c0675c4624c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mccarthy, name=rhceph, RELEASE=main, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.41.4, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc.) Dec 5 04:55:53 localhost podman[291703]: 2025-12-05 09:55:53.001281839 +0000 UTC m=+0.164614638 container start 2efe4f15e3700394878730e89e34b2244f772d8d50f3cc621c7b4c0675c4624c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mccarthy, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.) 
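Each of these short-lived rhceph containers (lucid_lamport and dazzling_chatterjee above, vibrant_mccarthy completing below) runs for a fraction of a second and prints only "167 167", consistent with a cephadm-style probe of the image for the ceph user's uid/gid (167:167 in Red Hat packaging) during the daemon reconfigurations interleaved here. A hypothetical re-run of such a probe; the exact command is an assumption, only the image name is taken from the log:

```python
import subprocess

# Assumed uid/gid probe: stat the ceph state directory inside the image.
out = subprocess.run(
    ['podman', 'run', '--rm', '--entrypoint', 'stat',
     'registry.redhat.io/rhceph/rhceph-7-rhel9:latest',
     '-c', '%u %g', '/var/lib/ceph'],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())   # expected '167 167', matching the log output
```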
Dec 5 04:55:53 localhost podman[291703]: 2025-12-05 09:55:53.001514806 +0000 UTC m=+0.164847575 container attach 2efe4f15e3700394878730e89e34b2244f772d8d50f3cc621c7b4c0675c4624c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mccarthy, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 5 04:55:53 localhost vibrant_mccarthy[291720]: 167 167 Dec 5 04:55:53 localhost systemd[1]: libpod-2efe4f15e3700394878730e89e34b2244f772d8d50f3cc621c7b4c0675c4624c.scope: Deactivated successfully. Dec 5 04:55:53 localhost podman[291703]: 2025-12-05 09:55:53.006376482 +0000 UTC m=+0.169709301 container died 2efe4f15e3700394878730e89e34b2244f772d8d50f3cc621c7b4c0675c4624c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mccarthy, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=7, vcs-type=git, release=1763362218, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-26T19:44:28Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7) Dec 5 04:55:53 localhost podman[291725]: 2025-12-05 09:55:53.100862446 +0000 UTC m=+0.084845386 container remove 2efe4f15e3700394878730e89e34b2244f772d8d50f3cc621c7b4c0675c4624c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_mccarthy, release=1763362218, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 
9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 5 04:55:53 localhost systemd[1]: libpod-conmon-2efe4f15e3700394878730e89e34b2244f772d8d50f3cc621c7b4c0675c4624c.scope: Deactivated successfully. Dec 5 04:55:53 localhost systemd[1]: var-lib-containers-storage-overlay-3843af755884890aa8a719aaa7aef8d6afa2071f9ba5a769773b3f5e183fd5d3-merged.mount: Deactivated successfully. Dec 5 04:55:53 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x557fa41fc000 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Dec 5 04:55:53 localhost ceph-mon[287909]: mon.np0005546419@4(peon) e8 removed from monmap, suicide. Dec 5 04:55:53 localhost podman[291750]: 2025-12-05 09:55:53.295995718 +0000 UTC m=+0.046768423 container died 0cfef878df0af3f9a68a1c52bbf36c638e2425750f43b5356e84010e9861b8b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546419, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:55:53 localhost systemd[1]: var-lib-containers-storage-overlay-e87d87e094c1b02bd4c71e82556df90df7d735cee042e49444e5ca003ddaf8f0-merged.mount: Deactivated successfully. 
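The peon's "removed from monmap, suicide" entry above is the expected shutdown path when a monitor is deleted from the map: the daemon exits on the new monmap epoch (e8), its container dies, and systemd stops the unit in the entries that follow. The surviving membership can be confirmed from any remaining monitor; a sketch, reusing the `--id openstack` credentials seen later in this log as an assumption:

```python
import json
import subprocess

# Dump the current monmap and list the monitors that remain after
# mon.np0005546419 was removed.
raw = subprocess.run(
    ['ceph', 'mon', 'dump', '--format', 'json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    capture_output=True, text=True, check=True).stdout
monmap = json.loads(raw)
print(monmap['epoch'], [m['name'] for m in monmap['mons']])
```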
Dec 5 04:55:53 localhost podman[291750]: 2025-12-05 09:55:53.332982258 +0000 UTC m=+0.083754963 container remove 0cfef878df0af3f9a68a1c52bbf36c638e2425750f43b5356e84010e9861b8b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546419, ceph=True, io.openshift.expose-services=, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, GIT_BRANCH=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.) Dec 5 04:55:54 localhost systemd[1]: ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b@mon.np0005546419.service: Deactivated successfully. Dec 5 04:55:54 localhost systemd[1]: Stopped Ceph mon.np0005546419 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b. Dec 5 04:55:54 localhost systemd[1]: ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b@mon.np0005546419.service: Consumed 4.154s CPU time. Dec 5 04:55:54 localhost systemd[1]: Reloading. Dec 5 04:55:54 localhost nova_compute[280228]: 2025-12-05 09:55:54.504 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:55:54 localhost systemd-sysv-generator[291974]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:55:54 localhost systemd-rc-local-generator[291970]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:55:54 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:55:54 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:55:54 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:55:54 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:55:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 5 04:55:54 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:55:54 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:55:54 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:55:54 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:55:54 localhost podman[291984]: Dec 5 04:55:55 localhost podman[291984]: 2025-12-05 09:55:55.023745657 +0000 UTC m=+0.146003180 container create e9b141ae971f607ae43d1bc7a50b77a20761fa9fd468ec18770a6367ef455eb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_bose, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.expose-services=, release=1763362218, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z) Dec 5 04:55:55 localhost podman[291984]: 2025-12-05 09:55:54.930037677 +0000 UTC m=+0.052295220 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:55:55 localhost systemd[1]: Started libpod-conmon-e9b141ae971f607ae43d1bc7a50b77a20761fa9fd468ec18770a6367ef455eb7.scope. Dec 5 04:55:55 localhost systemd[1]: Started libcrun container. 
Dec 5 04:55:55 localhost podman[291984]: 2025-12-05 09:55:55.182423086 +0000 UTC m=+0.304680599 container init e9b141ae971f607ae43d1bc7a50b77a20761fa9fd468ec18770a6367ef455eb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_bose, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux , release=1763362218, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, name=rhceph) Dec 5 04:55:55 localhost systemd[1]: tmp-crun.YS6U7A.mount: Deactivated successfully. Dec 5 04:55:55 localhost quirky_bose[291998]: 167 167 Dec 5 04:55:55 localhost systemd[1]: libpod-e9b141ae971f607ae43d1bc7a50b77a20761fa9fd468ec18770a6367ef455eb7.scope: Deactivated successfully. Dec 5 04:55:55 localhost podman[291984]: 2025-12-05 09:55:55.26793801 +0000 UTC m=+0.390195523 container start e9b141ae971f607ae43d1bc7a50b77a20761fa9fd468ec18770a6367ef455eb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_bose, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container) Dec 5 04:55:55 localhost podman[291984]: 2025-12-05 09:55:55.268426165 +0000 UTC m=+0.390683728 container attach e9b141ae971f607ae43d1bc7a50b77a20761fa9fd468ec18770a6367ef455eb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_bose, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, com.redhat.component=rhceph-container, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main) Dec 5 04:55:55 localhost podman[291984]: 2025-12-05 09:55:55.270337472 +0000 UTC m=+0.392595045 container died e9b141ae971f607ae43d1bc7a50b77a20761fa9fd468ec18770a6367ef455eb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_bose, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhceph ceph) Dec 5 04:55:55 localhost podman[292003]: 2025-12-05 09:55:55.486542806 +0000 UTC m=+0.268800422 container remove e9b141ae971f607ae43d1bc7a50b77a20761fa9fd468ec18770a6367ef455eb7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_bose, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, version=7, GIT_BRANCH=main, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, release=1763362218, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, RELEASE=main, com.redhat.component=rhceph-container) Dec 5 04:55:55 localhost systemd[1]: libpod-conmon-e9b141ae971f607ae43d1bc7a50b77a20761fa9fd468ec18770a6367ef455eb7.scope: Deactivated 
successfully. Dec 5 04:55:56 localhost systemd[1]: var-lib-containers-storage-overlay-f91f7dde2650133855236efd0aa58b9e4c74d00bf1f3ff216aeb8d21eb32617a-merged.mount: Deactivated successfully. Dec 5 04:55:57 localhost nova_compute[280228]: 2025-12-05 09:55:57.088 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:55:57 localhost openstack_network_exporter[241668]: ERROR 09:55:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:55:57 localhost openstack_network_exporter[241668]: ERROR 09:55:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:55:57 localhost openstack_network_exporter[241668]: ERROR 09:55:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:55:57 localhost openstack_network_exporter[241668]: ERROR 09:55:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:55:57 localhost openstack_network_exporter[241668]: Dec 5 04:55:57 localhost openstack_network_exporter[241668]: ERROR 09:55:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:55:57 localhost openstack_network_exporter[241668]: Dec 5 04:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:55:58 localhost podman[292019]: 2025-12-05 09:55:58.215641338 +0000 UTC m=+0.094489435 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 5 04:55:58 localhost podman[292020]: 2025-12-05 09:55:58.297639708 +0000 UTC m=+0.174632470 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , 
managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 04:55:58 localhost podman[292019]: 2025-12-05 09:55:58.306329428 +0000 UTC m=+0.185177595 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3) Dec 5 04:55:58 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
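The health_status entries above embed each container's config_data as a Python-literal dict (single quotes, bare True), so the healthcheck definition can be recovered straight from the journal by brace-matching and `ast.literal_eval`. A minimal sketch:

```python
import ast

def config_data(line):
    """Recover the config_data dict from a podman health_status entry."""
    start = line.index('config_data=') + len('config_data=')
    depth = 0
    for i, ch in enumerate(line[start:], start):
        if ch == '{':
            depth += 1
        elif ch == '}':
            depth -= 1
            if depth == 0:
                return ast.literal_eval(line[start:i + 1])
    raise ValueError('unbalanced config_data')

# e.g. for the ovn_controller entry above:
# config_data(entry)['healthcheck']['test'] -> '/openstack/healthcheck'
```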
Dec 5 04:55:58 localhost podman[292020]: 2025-12-05 09:55:58.361903525 +0000 UTC m=+0.238896347 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:55:58 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:55:59 localhost nova_compute[280228]: 2025-12-05 09:55:59.508 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:02 localhost nova_compute[280228]: 2025-12-05 09:56:02.131 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:56:03.902 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:56:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:56:03.902 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:56:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:56:03.905 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:56:04 localhost nova_compute[280228]: 2025-12-05 09:56:04.285 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:56:04 localhost nova_compute[280228]: 2025-12-05 09:56:04.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - 
-] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:56:04 localhost nova_compute[280228]: 2025-12-05 09:56:04.548 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:05 localhost nova_compute[280228]: 2025-12-05 09:56:05.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:56:05 localhost nova_compute[280228]: 2025-12-05 09:56:05.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:56:05 localhost nova_compute[280228]: 2025-12-05 09:56:05.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:56:05 localhost nova_compute[280228]: 2025-12-05 09:56:05.525 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:56:05 localhost nova_compute[280228]: 2025-12-05 09:56:05.526 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:56:05 localhost nova_compute[280228]: 2025-12-05 09:56:05.526 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:56:05 localhost nova_compute[280228]: 2025-12-05 09:56:05.527 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:56:05 localhost nova_compute[280228]: 2025-12-05 09:56:05.527 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:56:05 localhost nova_compute[280228]: 2025-12-05 09:56:05.961 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.048 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.048 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.267 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.269 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11858MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.269 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] 
Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.270 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.348 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.349 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.349 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.385 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.856 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.862 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.876 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.877 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 04:56:06 localhost nova_compute[280228]: 2025-12-05 09:56:06.877 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:56:06 localhost podman[292217]: 2025-12-05 09:56:06.979377531 +0000 UTC m=+0.098294389 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, CEPH_POINT_RELEASE=, GIT_CLEAN=True, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1763362218, ceph=True, version=7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git) Dec 5 04:56:07 localhost podman[292217]: 2025-12-05 09:56:07.101241945 +0000 UTC m=+0.220158763 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, release=1763362218, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.expose-services=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, 
io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Dec 5 04:56:07 localhost nova_compute[280228]: 2025-12-05 09:56:07.175 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:07 localhost podman[292398]: Dec 5 04:56:07 localhost podman[292398]: 2025-12-05 09:56:07.775300842 +0000 UTC m=+0.075842145 container create c7c7d7ca949ebb5eef125e44e301dc2f9ffdbae3a78edcf5f3b22db4fe4020a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_bassi, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, architecture=x86_64, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, GIT_BRANCH=main, release=1763362218, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z) Dec 5 04:56:07 localhost systemd[1]: Started libpod-conmon-c7c7d7ca949ebb5eef125e44e301dc2f9ffdbae3a78edcf5f3b22db4fe4020a3.scope. Dec 5 04:56:07 localhost systemd[1]: Started libcrun container. 
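The repeated "Acquiring lock ... / Lock ... acquired ... / Lock ... released" triplets in the nova_compute entries above are emitted by oslo.concurrency's lockutils wrapper around named in-process locks (the "inner ... lockutils.py:404/409/423" suffixes point at that wrapper). A minimal sketch, assuming oslo.concurrency is installed; nova reaches this through its own synchronized helper, so the direct decorator below is an illustration, not nova's exact call site:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs with the named in-process lock held; lockutils' wrapper
        # itself emits the DEBUG "Acquiring"/"acquired"/"released" lines,
        # including the waited/held durations seen in the log.
        pass

    clean_compute_node_cache()

That is why every acquire line above is paired with a release line naming the same holder and reporting how long the lock was held.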
Dec 5 04:56:07 localhost podman[292398]: 2025-12-05 09:56:07.745515419 +0000 UTC m=+0.046056732 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:56:07 localhost podman[292398]: 2025-12-05 09:56:07.865316422 +0000 UTC m=+0.165857725 container init c7c7d7ca949ebb5eef125e44e301dc2f9ffdbae3a78edcf5f3b22db4fe4020a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_bassi, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, vcs-type=git, release=1763362218) Dec 5 04:56:07 localhost podman[292398]: 2025-12-05 09:56:07.879414285 +0000 UTC m=+0.179955598 container start c7c7d7ca949ebb5eef125e44e301dc2f9ffdbae3a78edcf5f3b22db4fe4020a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_bassi, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, RELEASE=main, release=1763362218, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7) Dec 5 04:56:07 localhost nova_compute[280228]: 2025-12-05 09:56:07.878 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:56:07 localhost nova_compute[280228]: 2025-12-05 09:56:07.880 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:56:07 localhost podman[292398]: 2025-12-05 09:56:07.881547799 +0000 
UTC m=+0.182089162 container attach c7c7d7ca949ebb5eef125e44e301dc2f9ffdbae3a78edcf5f3b22db4fe4020a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_bassi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, name=rhceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-type=git, io.buildah.version=1.41.4, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 5 04:56:07 localhost elastic_bassi[292442]: 167 167 Dec 5 04:56:07 localhost systemd[1]: libpod-c7c7d7ca949ebb5eef125e44e301dc2f9ffdbae3a78edcf5f3b22db4fe4020a3.scope: Deactivated successfully. Dec 5 04:56:07 localhost podman[292398]: 2025-12-05 09:56:07.88623116 +0000 UTC m=+0.186772483 container died c7c7d7ca949ebb5eef125e44e301dc2f9ffdbae3a78edcf5f3b22db4fe4020a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_bassi, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, name=rhceph, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, version=7, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7) Dec 5 04:56:07 localhost systemd[1]: var-lib-containers-storage-overlay-86ae981e4fb79314f4a7c0b0990311179edcb06ff1a8724f2e6a6725860caedb-merged.mount: Deactivated successfully. 
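The resource audit above shells out to `ceph df --format=json` twice, and each call is bracketed by a "Running cmd (subprocess)" / 'CMD "..." returned: 0' pair logged by oslo.concurrency's processutils. A minimal sketch, assuming a reachable cluster and the same client id and conf path as in the log:

    import json

    from oslo_concurrency import processutils

    # processutils.execute() logs the command before running it and the
    # return code plus elapsed time afterwards, exactly as seen above.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")

    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])

When the RBD image backend is in use, figures like the phys_disk=41GB / free_disk=41.83GB in the resource view above are derived from this ceph df output rather than from a local filesystem.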
Dec 5 04:56:08 localhost podman[292447]: 2025-12-05 09:56:08.004995071 +0000 UTC m=+0.106997040 container remove c7c7d7ca949ebb5eef125e44e301dc2f9ffdbae3a78edcf5f3b22db4fe4020a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_bassi, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, RELEASE=main, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, architecture=x86_64, GIT_BRANCH=main) Dec 5 04:56:08 localhost systemd[1]: libpod-conmon-c7c7d7ca949ebb5eef125e44e301dc2f9ffdbae3a78edcf5f3b22db4fe4020a3.scope: Deactivated successfully. Dec 5 04:56:08 localhost podman[292464]: Dec 5 04:56:08 localhost podman[292464]: 2025-12-05 09:56:08.121239477 +0000 UTC m=+0.081347371 container create e159da1919c80bba003ac1fdb7aaf128f312c5854694a82f6189dfa30d43772c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_cannon, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.buildah.version=1.41.4, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, version=7, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_BRANCH=main) Dec 5 04:56:08 localhost systemd[1]: Started libpod-conmon-e159da1919c80bba003ac1fdb7aaf128f312c5854694a82f6189dfa30d43772c.scope. Dec 5 04:56:08 localhost systemd[1]: Started libcrun container. 
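The elastic_bassi and goofy_cannon cycles above (container create, init, start, attach, died, remove, all within about a second) are cephadm-style throwaway containers run to probe the host; elastic_bassi's only output was "167 167", the ceph uid:gid. A sketch of that pattern, assuming podman and local access to the image; the stat target is an assumption about what was probed, not taken from the log:

    import subprocess

    # A short-lived "podman run --rm" produces the same rapid
    # create -> init -> start -> attach -> died -> remove sequence
    # in the journal as the randomly named containers above.
    res = subprocess.run(
        ["podman", "run", "--rm",
         "registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)

    print(res.stdout.strip())  # e.g. "167 167" (the ceph user and group ids)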
Dec 5 04:56:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adce86a68d5ea6667bf0049ded9819a7c43592e2013022a8ee9f32f187ca19ee/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Dec 5 04:56:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adce86a68d5ea6667bf0049ded9819a7c43592e2013022a8ee9f32f187ca19ee/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Dec 5 04:56:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adce86a68d5ea6667bf0049ded9819a7c43592e2013022a8ee9f32f187ca19ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 04:56:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adce86a68d5ea6667bf0049ded9819a7c43592e2013022a8ee9f32f187ca19ee/merged/var/lib/ceph/mon/ceph-np0005546419 supports timestamps until 2038 (0x7fffffff) Dec 5 04:56:08 localhost podman[292464]: 2025-12-05 09:56:08.186359751 +0000 UTC m=+0.146467645 container init e159da1919c80bba003ac1fdb7aaf128f312c5854694a82f6189dfa30d43772c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_cannon, release=1763362218, architecture=x86_64, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:56:08 localhost podman[292464]: 2025-12-05 09:56:08.08897075 +0000 UTC m=+0.049078694 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:56:08 localhost podman[292464]: 2025-12-05 09:56:08.19403419 +0000 UTC m=+0.154142094 container start e159da1919c80bba003ac1fdb7aaf128f312c5854694a82f6189dfa30d43772c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_cannon, vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, ceph=True, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, release=1763362218, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, CEPH_POINT_RELEASE=) Dec 5 04:56:08 localhost podman[292464]: 2025-12-05 09:56:08.19433892 +0000 UTC m=+0.154446834 container attach e159da1919c80bba003ac1fdb7aaf128f312c5854694a82f6189dfa30d43772c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_cannon, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True) Dec 5 04:56:08 localhost systemd[1]: libpod-e159da1919c80bba003ac1fdb7aaf128f312c5854694a82f6189dfa30d43772c.scope: Deactivated successfully. Dec 5 04:56:08 localhost podman[292464]: 2025-12-05 09:56:08.290199205 +0000 UTC m=+0.250307119 container died e159da1919c80bba003ac1fdb7aaf128f312c5854694a82f6189dfa30d43772c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_cannon, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , name=rhceph, vcs-type=git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, CEPH_POINT_RELEASE=, version=7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 5 04:56:08 localhost podman[292523]: 2025-12-05 09:56:08.395893264 +0000 UTC m=+0.089704581 container remove e159da1919c80bba003ac1fdb7aaf128f312c5854694a82f6189dfa30d43772c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_cannon, release=1763362218, GIT_BRANCH=main, ceph=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, maintainer=Guillaume Abrioux ) Dec 5 04:56:08 localhost systemd[1]: libpod-conmon-e159da1919c80bba003ac1fdb7aaf128f312c5854694a82f6189dfa30d43772c.scope: Deactivated successfully. Dec 5 04:56:08 localhost systemd[1]: Reloading. Dec 5 04:56:08 localhost nova_compute[280228]: 2025-12-05 09:56:08.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:56:08 localhost nova_compute[280228]: 2025-12-05 09:56:08.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 04:56:08 localhost nova_compute[280228]: 2025-12-05 09:56:08.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 04:56:08 localhost systemd-rc-local-generator[292580]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:56:08 localhost systemd-sysv-generator[292584]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:56:08 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:08 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:08 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:08 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
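The two warning classes that follow each systemd reload above are benign but recur on every daemon-reload: Type=notify-reload is only recognized by newer systemd (it was introduced in v253; this host's systemd predates it and ignores the setting), and MemoryLimit= is the deprecated spelling of MemoryMax=. A small audit helper, as a sketch with illustrative path and patterns:

    import pathlib
    import re

    # Flag the directives this systemd build complains about.
    SUSPECT = re.compile(r"^(Type=notify-reload|MemoryLimit=.*)$", re.M)

    for unit in sorted(pathlib.Path("/usr/lib/systemd/system").glob("*.service")):
        text = unit.read_text(errors="ignore")
        for match in SUSPECT.finditer(text):
            print(f"{unit.name}: {match.group(1)}")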
Dec 5 04:56:08 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:08 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:08 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:08 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:08 localhost systemd[1]: var-lib-containers-storage-overlay-adce86a68d5ea6667bf0049ded9819a7c43592e2013022a8ee9f32f187ca19ee-merged.mount: Deactivated successfully. Dec 5 04:56:08 localhost nova_compute[280228]: 2025-12-05 09:56:08.788 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:56:08 localhost nova_compute[280228]: 2025-12-05 09:56:08.789 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:56:08 localhost nova_compute[280228]: 2025-12-05 09:56:08.789 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:56:08 localhost nova_compute[280228]: 2025-12-05 09:56:08.789 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:56:08 localhost systemd[1]: Reloading. Dec 5 04:56:08 localhost systemd-rc-local-generator[292651]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 5 04:56:08 localhost systemd-sysv-generator[292654]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 5 04:56:09 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:09 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:09 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:09 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
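Throughout this log, oslo-formatted lines end in "#033[00m", and the RocksDB dump further below contains "#011" and "#012" runs. These are journald's octal escapes for non-printable bytes: #033 is ESC, so "#033[00m" is the ANSI color-reset sequence ESC[0m, while #011 is a tab and #012 a newline. A minimal filter for post-processing such captures:

    import re

    # Escaped ANSI SGR (color) sequences, e.g. "#033[00m".
    OCTAL_ANSI = re.compile(r"#033\[[0-9;]*m")

    def clean(line: str) -> str:
        line = OCTAL_ANSI.sub("", line)  # drop color codes entirely
        # Expand escaped whitespace back into real tabs/newlines.
        return line.replace("#011", "\t").replace("#012", "\n")

    print(clean('DEBUG oslo_service.periodic_task [...] run_periodic_tasks#033[00m'))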
Dec 5 04:56:09 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:09 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:09 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:09 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 5 04:56:09 localhost systemd[1]: Starting Ceph mon.np0005546419 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b... Dec 5 04:56:09 localhost nova_compute[280228]: 2025-12-05 09:56:09.323 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:56:09 localhost nova_compute[280228]: 2025-12-05 09:56:09.341 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:56:09 localhost nova_compute[280228]: 2025-12-05 09:56:09.341 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:56:09 localhost nova_compute[280228]: 2025-12-05 09:56:09.342 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:56:09 localhost nova_compute[280228]: 2025-12-05 09:56:09.342 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 04:56:09 localhost nova_compute[280228]: 2025-12-05 09:56:09.595 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:09 localhost podman[292770]: Dec 5 04:56:09 localhost podman[292770]: 2025-12-05 09:56:09.627479882 +0000 UTC m=+0.069579757 container create 5de521872e63ff09a129e1d1258ea6bb56db55a028c5d4ce4084e76fc47e7bcd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546419, description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 04:56:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e9568d15d596097562430927dc33450a5192401dee29f1e8b67655ee5d740e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 04:56:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e9568d15d596097562430927dc33450a5192401dee29f1e8b67655ee5d740e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 5 04:56:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e9568d15d596097562430927dc33450a5192401dee29f1e8b67655ee5d740e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 04:56:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e9568d15d596097562430927dc33450a5192401dee29f1e8b67655ee5d740e/merged/var/lib/ceph/mon/ceph-np0005546419 supports timestamps until 2038 (0x7fffffff) Dec 5 04:56:09 localhost podman[292770]: 2025-12-05 09:56:09.684019958 +0000 UTC m=+0.126119843 container init 5de521872e63ff09a129e1d1258ea6bb56db55a028c5d4ce4084e76fc47e7bcd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546419, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, version=7, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True) Dec 5 04:56:09 localhost podman[292770]: 2025-12-05 09:56:09.593007069 +0000 UTC m=+0.035106934 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:56:09 localhost podman[292770]: 2025-12-05 09:56:09.695857873 +0000 UTC m=+0.137957738 container start 5de521872e63ff09a129e1d1258ea6bb56db55a028c5d4ce4084e76fc47e7bcd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mon-np0005546419, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, release=1763362218, RELEASE=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64) Dec 5 04:56:09 localhost bash[292770]: 5de521872e63ff09a129e1d1258ea6bb56db55a028c5d4ce4084e76fc47e7bcd Dec 5 04:56:09 localhost systemd[1]: Started Ceph mon.np0005546419 for 79feddb1-4bfc-557f-83b9-0d57c9f66c1b. 
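The "Updating instance_info_cache with network_info" entry above carries the instance's VIF list as embedded JSON. A short sketch of reading the fixed and floating addresses back out of one such entry; the literal below is abbreviated from the actual log line:

    import json

    vif = json.loads("""
    {"id": "c2f95d81-2317-46b9-8146-596eac8f9acb",
     "address": "fa:16:3e:04:e6:3a",
     "network": {"label": "private",
                 "subnets": [{"cidr": "192.168.0.0/24",
                              "ips": [{"address": "192.168.0.214",
                                       "type": "fixed",
                                       "floating_ips": [{"address": "192.168.122.20",
                                                         "type": "floating"}]}]}]}}
    """)

    # Walk subnets -> fixed IPs -> attached floating IPs.
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(ip["address"], "->", floats)

Run against the entry above this prints 192.168.0.214 -> ['192.168.122.20'], matching the fixed/floating pair the heal task re-cached for instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7.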
Dec 5 04:56:09 localhost ceph-mon[292820]: set uid:gid to 167:167 (ceph:ceph) Dec 5 04:56:09 localhost ceph-mon[292820]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Dec 5 04:56:09 localhost ceph-mon[292820]: pidfile_write: ignore empty --pid-file Dec 5 04:56:09 localhost ceph-mon[292820]: load: jerasure load: lrc Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: RocksDB version: 7.9.2 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Git sha 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: DB SUMMARY Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: DB Session ID: CJN0L4043B6ORZQ3CW6Q Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: CURRENT file: CURRENT Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: IDENTITY file: IDENTITY Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005546419/store.db dir, Total Num: 0, files: Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005546419/store.db: 000004.log size: 886 ; Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.error_if_exists: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.create_if_missing: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.paranoid_checks: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.env: 0x5644397fd9e0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.fs: PosixFileSystem Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.info_log: 0x56443b714d20 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_file_opening_threads: 16 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.statistics: (nil) Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.use_fsync: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_log_file_size: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.log_file_time_to_roll: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.keep_log_file_num: 1000 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.recycle_log_file_num: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.allow_fallocate: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.allow_mmap_reads: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.allow_mmap_writes: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.use_direct_reads: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.create_missing_column_families: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.db_log_dir: Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.wal_dir: Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.table_cache_numshardbits: 6 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 5 
04:56:09 localhost ceph-mon[292820]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.advise_random_on_open: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.db_write_buffer_size: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.write_buffer_manager: 0x56443b725540 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.random_access_max_buffer_size: 1048576 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.use_adaptive_mutex: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.rate_limiter: (nil) Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.wal_recovery_mode: 2 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.enable_thread_tracking: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.enable_pipelined_write: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.unordered_write: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.row_cache: None Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.wal_filter: None Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.allow_ingest_behind: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.two_write_queues: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.manual_wal_flush: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.wal_compression: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.atomic_flush: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.persist_stats_to_disk: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.log_readahead_size: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.best_efforts_recovery: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.allow_data_in_errors: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.db_host_id: __hostname__ Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.enforce_single_del_contracts: true Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_background_jobs: 2 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: 
Options.max_background_compactions: -1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_subcompactions: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.delayed_write_rate : 16777216 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_total_wal_size: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.stats_dump_period_sec: 600 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.stats_persist_period_sec: 600 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_open_files: -1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bytes_per_sync: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_readahead_size: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_background_flushes: -1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Compression algorithms supported: Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: #011kZSTD supported: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: #011kXpressCompression supported: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: #011kBZip2Compression supported: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: #011kLZ4Compression supported: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: #011kZlibCompression supported: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: #011kSnappyCompression supported: 1 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: DMutex implementation: pthread_mutex_t Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005546419/store.db/MANIFEST-000005 Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.merge_operator: Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_filter: None Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_filter_factory: None Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.sst_partitioner_factory: None Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.memtable_factory: SkipListFactory Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.table_factory: BlockBasedTable Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56443b714980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 
pin_top_level_index_and_filter: 1
 index_type: 0
 data_block_index_type: 0
 index_shortening: 1
 data_block_hash_table_util_ratio: 0.750000
 checksum: 4
 no_block_cache: 0
 block_cache: 0x56443b711350
 block_cache_name: BinnedLRUCache
 block_cache_options:
 capacity : 536870912
 num_shard_bits : 4
 strict_capacity_limit : 0
 high_pri_pool_ratio: 0.000
 block_cache_compressed: (nil)
 persistent_cache: (nil)
 block_size: 4096
 block_size_deviation: 10
 block_restart_interval: 16
 index_block_restart_interval: 1
 metadata_block_size: 4096
 partition_filters: 0
 use_delta_encoding: 1
 filter_policy: bloomfilter
 whole_key_filtering: 1
 verify_compression: 0
 read_amp_bytes_per_bit: 0
 format_version: 5
 enable_index_compression: 1
 block_align: 0
 max_auto_readahead_size: 262144
 prepopulate_block_cache: 0
 initial_auto_readahead_size: 8192
 num_file_reads_for_auto_readahead: 2
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.write_buffer_size: 33554432
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_write_buffer_number: 2
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compression: NoCompression
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bottommost_compression: Disabled
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.prefix_extractor: nullptr
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.num_levels: 7
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.min_write_buffer_number_to_merge: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compression_opts.window_bits: -14
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compression_opts.level: 32767
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compression_opts.strategy: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compression_opts.enabled: false
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.level0_file_num_compaction_trigger: 4
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.target_file_size_base: 67108864
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.target_file_size_multiplier: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_bytes_for_level_base: 268435456
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.arena_block_size: 1048576
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.disable_auto_compactions: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.table_properties_collectors:
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.inplace_update_support: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.memtable_huge_page_size: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.bloom_locality: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.max_successive_merges: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.paranoid_file_checks: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.force_consistency_checks: 1
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.report_bg_io_stats: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.ttl: 2592000
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.enable_blob_files: false
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.min_blob_size: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.blob_file_size: 268435456
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.blob_compression_type: NoCompression
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.blob_file_starting_level: 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005546419/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a2adafb0-ab57-42d1-a300-991623c80f9c
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928569737395, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928569739671, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 2012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 898, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 776, "raw_average_value_size": 155, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928569739834, "job": 1, "event": "recovery_finished"}
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56443b738e00
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: DB pointer 0x56443b82e000
Dec 5 04:56:09 localhost ceph-mon[292820]: mon.np0005546419 does not exist in monmap, will attempt to join an existing cluster
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 04:56:09 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 L0 1/0 1.96 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0
 Sum 1/0 1.96 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56443b711350#2 capacity: 512.00 MB usage: 1.30 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,1.08 KB,0.000205636%)

** File Read Latency Histogram By Level [default] **
Dec 5 04:56:09 localhost ceph-mon[292820]: using public_addr v2:172.18.0.103:0/0 -> [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0]
Dec 5 04:56:09 localhost ceph-mon[292820]: starting mon.np0005546419 rank -1 at public addrs [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] at bind addrs [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005546419 fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 5 04:56:09 localhost ceph-mon[292820]: mon.np0005546419@-1(???) e0 preinit fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b
Dec 5 04:56:09 localhost ceph-mon[292820]: mon.np0005546419@-1(synchronizing) e8 sync_obtain_latest_monmap
Dec 5 04:56:09 localhost ceph-mon[292820]: mon.np0005546419@-1(synchronizing) e8 sync_obtain_latest_monmap obtained monmap e8
Dec 5 04:56:10 localhost ceph-mon[292820]: mon.np0005546419@-1(synchronizing).mds e16 new map
Dec 5 04:56:10 localhost ceph-mon[292820]: mon.np0005546419@-1(synchronizing).mds e16 print_map
e16
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	16
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-12-05T08:10:30.749420+0000
modified	2025-12-05T09:53:37.952087+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
required_client_features	{}
last_failure	0
last_failure_osd_epoch	84
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
max_mds	1
in	0
up	{0=26492}
failed
damaged
stopped
data_pools	[6]
metadata_pool	7
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	1
qdb_cluster	leader: 26492 members: 26492
[mds.mds.np0005546420.eqhasr{0:26492} state up:active seq 16 addr [v2:172.18.0.107:6808/530338393,v1:172.18.0.107:6809/530338393] compat {c=[1],r=[1],i=[17ff]}]

Standby daemons:

[mds.mds.np0005546419.rweotn{-1:16917} state up:standby seq 1 addr [v2:172.18.0.106:6808/2431590011,v1:172.18.0.106:6809/2431590011] compat {c=[1],r=[1],i=[17ff]}]
[mds.mds.np0005546421.tuudjq{-1:26486} state up:standby seq 1 addr [v2:172.18.0.108:6808/812129975,v1:172.18.0.108:6809/812129975] compat {c=[1],r=[1],i=[17ff]}]
Dec 5 04:56:10 localhost ceph-mon[292820]: mon.np0005546419@-1(synchronizing).osd e85 crush map has features 3314933000852226048, adjusting msgr requires
Dec 5 04:56:10 localhost ceph-mon[292820]: mon.np0005546419@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires
Dec 5 04:56:10 localhost ceph-mon[292820]: mon.np0005546419@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires
Dec 5 04:56:10 localhost ceph-mon[292820]: mon.np0005546419@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires
Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring crash.np0005546420 (monmap changed)...
Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring osd.1 (monmap changed)...
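Multi-line daemon output such as the table options, the compaction-stats dump, and the fsmap from print_map reaches syslog with control characters escaped: #012 is the octal escape for newline and #011 for tab (rsyslog's control-character escaping), which is why raw copies of these records appear as one long line. A minimal sketch that restores the original layout when post-processing such files:

    import re

    OCTAL = re.compile(r"#(\d{3})")   # rsyslog-style octal escapes, e.g. #012, #011

    def unescape(record: str) -> str:
        """Turn #012/#011 escapes back into real newlines/tabs."""
        return OCTAL.sub(lambda m: chr(int(m.group(1), 8)), record)

    # unescape("e16#012enable_multiple, ever_enabled_multiple: 1,1")
    # -> 'e16\nenable_multiple, ever_enabled_multiple: 1,1'

Note the pattern will also rewrite a literal "#<three digits>" in message text; for ad-hoc log mining that trade-off is usually acceptable.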
Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon osd.1 on np0005546420.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Removed label mon from host np0005546415.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring osd.4 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon osd.4 on np0005546420.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Removed label mgr from host np0005546415.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: Removed label _admin from host np0005546415.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mon.np0005546420 (monmap changed)... 
Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring crash.np0005546421 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring osd.2 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon osd.2 on np0005546421.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring osd.5 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon osd.5 on np0005546421.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)... 
Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mon.np0005546421 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Removing np0005546415.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Removing np0005546415.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Removing np0005546415.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating 
np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Added label _no_schedule to host np0005546415.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: Removing daemon crash.np0005546415 from np0005546415.localdomain -- ports [] Dec 5 04:56:10 localhost ceph-mon[292820]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005546415.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth rm", "entity": "client.crash.np0005546415.localdomain"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005546415.localdomain"}]': finished Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Removing key for client.crash.np0005546415.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Removing daemon mgr.np0005546415.knqtle from np0005546415.localdomain -- ports [9283, 8765] Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' 
cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain"}]': finished Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth rm", "entity": "mgr.np0005546415.knqtle"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005546415.knqtle"}]': finished Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Removed host np0005546415.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: Removing key for mgr.np0005546415.knqtle Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546416.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring crash.np0005546416 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546416 on np0005546416.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mon.np0005546416 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546416 on np0005546416.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546416.kmqcnq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546416.kmqcnq (monmap changed)... 
Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546416.kmqcnq on np0005546416.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mon.np0005546418 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring crash.np0005546418 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring crash.np0005546419 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring osd.0 (monmap changed)... 
Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon osd.0 on np0005546419.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Saving service mon spec with placement label:mon Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring osd.3 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: Remove daemons mon.np0005546419 Dec 5 04:56:10 localhost ceph-mon[292820]: Safe to remove mon.np0005546419: new quorum should be ['np0005546418', 'np0005546416', 'np0005546421', 'np0005546420'] (from ['np0005546418', 'np0005546416', 'np0005546421', 'np0005546420']) Dec 5 04:56:10 localhost ceph-mon[292820]: Removing monitor np0005546419 from monmap... Dec 5 04:56:10 localhost ceph-mon[292820]: Removing daemon mon.np0005546419 from np0005546419.localdomain -- ports [] Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: mon.np0005546420 calling monitor election Dec 5 04:56:10 localhost ceph-mon[292820]: mon.np0005546418 calling monitor election Dec 5 04:56:10 localhost ceph-mon[292820]: mon.np0005546421 calling monitor election Dec 5 04:56:10 localhost ceph-mon[292820]: mon.np0005546416 calling monitor election Dec 5 04:56:10 localhost ceph-mon[292820]: mon.np0005546418 is new leader, mons np0005546418,np0005546416,np0005546421,np0005546420 in quorum (ranks 0,1,2,3) Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: overall HEALTH_OK Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring crash.np0005546420 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring osd.1 (monmap changed)... 
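The "Safe to remove mon.np0005546419" record above rests on a majority argument: after the monmap shrinks by one member, the surviving monitors must still be a strict majority of the new map, which the logged four-member quorum satisfies. A sketch of that arithmetic, using the names from the log; this is illustrative, not cephadm's actual implementation:

    def safe_to_remove(monmap, quorum, victim):
        """Does quorum survive shrinking the monmap by one member?

        Monitors need a strict majority of monmap members to form quorum.
        """
        new_map = [m for m in monmap if m != victim]
        new_quorum = [m for m in quorum if m != victim]
        return len(new_quorum) > len(new_map) // 2

    monmap = ["np0005546418", "np0005546416", "np0005546421",
              "np0005546420", "np0005546419"]
    quorum = monmap[:4]                      # the quorum logged above
    print(safe_to_remove(monmap, quorum, "np0005546419"))  # True: 4 > 4 // 2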
Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon osd.1 on np0005546420.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring osd.4 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon osd.4 on np0005546420.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring crash.np0005546421 (monmap changed)... 
Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring osd.2 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon osd.2 on np0005546421.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring osd.5 (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon osd.5 on np0005546421.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)... Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)... 
Dec 5 04:56:10 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Deploying daemon mon.np0005546419 on np0005546419.localdomain Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' Dec 5 04:56:10 localhost ceph-mon[292820]: mon.np0005546419@-1(synchronizing).paxosservice(auth 1..37) refresh upgraded, format 0 -> 3 Dec 5 04:56:10 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x557fa41fc420 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Dec 5 04:56:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:56:12 localhost nova_compute[280228]: 2025-12-05 09:56:12.204 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:12 localhost systemd[1]: tmp-crun.nZ0KjY.mount: Deactivated successfully. 
Dec 5 04:56:12 localhost podman[293057]: 2025-12-05 09:56:12.221101878 +0000 UTC m=+0.100662090 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter) Dec 5 04:56:12 localhost podman[293057]: 2025-12-05 09:56:12.234298044 +0000 UTC m=+0.113858246 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git) Dec 5 04:56:12 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
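The podman health_status/exec_died records above embed the container's config_data as a Python-literal dict (note the healthcheck 'test' command and the volume list). When mining such events, the literal can be cut out by brace matching and parsed safely with ast.literal_eval; a minimal sketch, assuming the config_data={...} substring is brace-balanced as it is here:

    import ast

    def extract_config_data(record: str):
        """Parse the config_data={...} literal embedded in a podman event line."""
        start = record.find("config_data=")
        if start == -1:
            return None
        i = record.index("{", start)
        depth = 0
        for j, ch in enumerate(record[i:], i):   # scan to the matching brace
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(record[i:j + 1])
        return None

    # cfg = extract_config_data(line)
    # cfg["ports"]            -> ['9105:9105']
    # cfg["healthcheck"]      -> the truncated test command logged above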
Dec 5 04:56:12 localhost ceph-mon[292820]: mon.np0005546419@-1(probing) e9 my rank is now 4 (was -1) Dec 5 04:56:12 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : mon.np0005546419 calling monitor election Dec 5 04:56:12 localhost ceph-mon[292820]: paxos.4).electionLogic(0) init, first boot, initializing epoch at 1 Dec 5 04:56:12 localhost ceph-mon[292820]: mon.np0005546419@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.947 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.949 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.961 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.962 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9a65173f-3c46-445c-9ab1-2fa2ad948149', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:56:12.949387', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a1f56498-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.123712185, 'message_signature': 'a9cfd987ac3f8fd9ceb68a4582a2743f3108c26f6590575b5b6828efceec38ef'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:56:12.949387', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a1f579a6-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.123712185, 'message_signature': '6bd7c33113d0484892e6432546037805c16953d0c073b793642cd2f010d3d2fd'}]}, 'timestamp': '2025-12-05 09:56:12.963317', '_unique_id': '62525782b3c54779b087bb5d71a87fa5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.964 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.966 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.970 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
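The chained pair of tracebacks that just completed shows the wrapping step explicitly: kombu's `_reraise_as_library_errors` context manager catches the low-level `ConnectionRefusedError` and re-raises it with `raise ConnectionError(str(exc)) from exc`, and the final line of the traceback shows that the `ConnectionError` name in that frame resolves to `kombu.exceptions.OperationalError`. The `from exc` is what produces the "The above exception was the direct cause of the following exception" stitch in the log. A simplified sketch of that pattern, as an illustration rather than kombu's actual code:

```python
# Sketch of the exception-wrapping pattern visible in the traceback above:
# a context manager re-raises transport errors as a library-level error
# using "raise ... from exc", yielding the chained traceback seen in the log.
# Simplified illustration; not kombu's implementation.
from contextlib import contextmanager

class OperationalError(Exception):
    """Stand-in for kombu.exceptions.OperationalError."""

@contextmanager
def reraise_as_library_errors():
    try:
        yield
    except OSError as exc:  # ConnectionRefusedError is a subclass of OSError
        raise OperationalError(str(exc)) from exc

def connect_refused():
    raise ConnectionRefusedError(111, "Connection refused")

try:
    with reraise_as_library_errors():
        connect_refused()
except OperationalError as err:
    print(err, "| caused by:", type(err.__cause__).__name__)
```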
Payload={'message_id': 'b21edc9e-c9c5-41aa-9fa1-f3f31d42adc2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:56:12.966460', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'a1f6a2a4-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.140775277, 'message_signature': 'f003382889045fef1e9661300a7b67b66744b5865c608f87ec8ea718c6bb71b9'}]}, 'timestamp': '2025-12-05 09:56:12.970933', '_unique_id': '8152d0bcc1ac47bc8611b6e8502345b0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.971 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:12.973 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.001 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.002 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
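Every failure in this capture enters through the same sending path: `notify/messaging.py` calls `transport._send_notification()`, the rabbit driver asks the connection pool for a connection, and `pool.create()` fails while establishing it. A hedged sketch of the sending side implied by those frames, assuming a placeholder RabbitMQ transport URL (the real one is not in this log) and the `notifications` topic named in the error text:

```python
# Sketch of the sending side implied by the tracebacks: an oslo.messaging
# notifier publishing a SAMPLE-priority event to the "notifications" topic.
# ASSUMPTIONS: the transport URL below is a placeholder, and retry=0 is
# chosen only so the failure surfaces immediately in this sketch.
from oslo_config import cfg
import oslo_messaging

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://guest:guest@localhost:5672/")
notifier = oslo_messaging.Notifier(
    transport,
    publisher_id="ceilometer.polling",
    driver="messagingv2",
    topics=["notifications"],
    retry=0,
)
# Emits an event shaped like the Payload records in this log; with the
# broker down, oslo.messaging logs the "Could not send notification" ERROR
# with the chained traceback instead of crashing the caller.
notifier.sample({}, "telemetry.polling", {"samples": []})
```

That the error is logged rather than raised to the polling loop is visible in the log itself: INFO "Polling pollster ..." records keep appearing between the failures.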
Payload={'message_id': '8f1d5433-17e0-4d99-959f-8c155fe69b87', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:56:12.973417', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a1fb669a-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.147718925, 'message_signature': 'd18a67b89d9f7fb06d156dc71e8de3fef44314a45f63d2dbcc921dee6e66263e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:56:12.973417', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a1fb7b08-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.147718925, 'message_signature': '7d33b81e5b4125917418a430d323d2e77aa902ea04533d8176bc55560c706eb4'}]}, 'timestamp': '2025-12-05 09:56:13.002631', '_unique_id': 'f5e7cca99d764225a6959c4ef6180443'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.003 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.004 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.005 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.005 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.005 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.005 12 DEBUG ceilometer.compute.pollsters [-] 
96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '98269562-94d0-4544-b51c-1b304901c13b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:56:13.005329', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a1fbf5e2-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.123712185, 'message_signature': '0b92cc52d22d5eee78b1415e06ce28b538603fc04a934404885772aaf0a77c95'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:56:13.005329', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a1fc060e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.123712185, 'message_signature': '95602234e11b7aa42bacfd6b159fff700cf8c8b53642f21f025d11b8ac61cbe2'}]}, 'timestamp': '2025-12-05 09:56:13.006306', '_unique_id': 'f6d897306b3b407eb22fde97fcb7b6a7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:56:13.007 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.007 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.008 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.028 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
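One mitigating detail: each failed record carries its complete Payload, so the samples themselves survive in the log, including the memory.usage sample that follows below. A small sketch that pulls the triage-relevant fields out of a payload with the structure shown in these records (the example values are copied from the memory.usage sample below):

```python
# The Payload dicts in this log share one shape: an envelope with
# event_type "telemetry.polling" and a "samples" list. This extracts the
# fields most useful for triage; values are from the sample that follows.
payload = {
    "event_type": "telemetry.polling",
    "payload": {
        "samples": [
            {
                "counter_name": "memory.usage",
                "counter_type": "gauge",
                "counter_unit": "MB",
                "counter_volume": 51.7421875,
                "resource_id": "96a47a1c-57c7-4bb1-aecc-33db976db8c7",
                "timestamp": "2025-12-05T09:56:13.008483",
            }
        ]
    },
}

for sample in payload["payload"]["samples"]:
    print(
        f"{sample['timestamp']} {sample['counter_name']} "
        f"({sample['counter_type']}, {sample['counter_unit']}) "
        f"= {sample['counter_volume']} on {sample['resource_id']}"
    )
```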
Payload={'message_id': '02fd4962-3613-40d9-a016-2c2702c10340', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:56:13.008483', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'a1ff868a-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.202956812, 'message_signature': '030d2017300c779b232111bf2e3e886db009f7f5de93a58847e5aa6fed31a57f'}]}, 'timestamp': '2025-12-05 09:56:13.029046', '_unique_id': 'db56dc6b193d49e9abfb3ae7a42586f7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:56:13.029 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 
04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.029 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.030 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.030 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.030 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'deb4f49b-7d36-4738-b66a-f57c1c3bf794', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:56:13.030190', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'a1ffc190-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.140775277, 'message_signature': 'bed54e8f664c744f26b9bafc3723c7d257e6ad02be7004f56a6ac15a967b7a96'}]}, 'timestamp': '2025-12-05 09:56:13.030548', '_unique_id': '3f92f191040f40709a5535be0feef56e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:56:13.030 12 ERROR oslo_messaging.notify.messaging [duplicate traceback omitted; identical kombu.exceptions.OperationalError: [Errno 111] Connection refused chain as above] Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.031 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.031 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to
notifications. Payload={'message_id': 'f6de6d37-14e5-4dbc-87e9-bd922ddc7f09', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:56:13.031711', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'a1fff75a-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.140775277, 'message_signature': 'e256246a7c11bf8b3adb3b1bbd34bc1c5814ad967d98e354f2385ed13b4cc96f'}]}, 'timestamp': '2025-12-05 09:56:13.031923', '_unique_id': 'c8f43dd1838d450ba050de35c854fa2d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 ERROR 
oslo_messaging.notify.messaging [remainder of duplicate traceback omitted; identical kombu.exceptions.OperationalError: [Errno 111] Connection refused chain as above] Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.032 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.033 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.033 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '4c9224dc-8bee-4dc8-8d57-e59a11f1554d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:56:13.032879', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a20024dc-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.147718925, 'message_signature': 'c9077556f3ce769879c4b0f13ba7076c90d95dbcec6888c6b4122f20744e001d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:56:13.032879', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a2002e6e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.147718925, 'message_signature': 'b973af50bc2cfefed3e26b58b1ef372434a6678137d1b2fee7ded3719ececfd9'}]}, 'timestamp': '2025-12-05 09:56:13.033335', '_unique_id': 'f362b03258eb431ebd1e128d4986e6c5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.033 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.033 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:56:13.033 12 ERROR oslo_messaging.notify.messaging [remainder of duplicate traceback omitted; identical kombu.exceptions.OperationalError: [Errno 111] Connection refused chain as above] Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 12400000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'd3751157-8f25-4253-8cb4-1d4902149ef7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 12400000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:56:13.034323', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'a2005d58-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.202956812, 'message_signature': '9d5604e726119d0fb1f9c4ed0de009ec33e5eae8e5e165106b2684b6cb20ef8b'}]}, 'timestamp': '2025-12-05 09:56:13.034527', '_unique_id': '9ee08e658c9045f1a79d95c14b1328c9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 09:56:13.034 12 ERROR oslo_messaging.notify.messaging [remainder of duplicate traceback omitted; identical kombu.exceptions.OperationalError: [Errno 111] Connection refused chain as above] Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.035 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.035 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.035 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'fec3f6bc-be1b-4088-b876-7041a5c89055', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:56:13.035467', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a20089e0-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.147718925, 'message_signature': '44833cda40dba6a87cc4fea941dc21074ea657a1d91751a93bca4c2b80cc4fee'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:56:13.035467', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a200912e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.147718925, 'message_signature': '5ae8bd43d380829670b825f45ee1cba79b0fc7904812e4ca27b7daa35cc5f91f'}]}, 'timestamp': '2025-12-05 09:56:13.035846', '_unique_id': '6d02b3ac522240c9b7512ca6a4d93da3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:56:13 localhost
ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py",
line 826, in __init__
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.036 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '22aeb306-6783-4246-b572-72d552041f73', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:56:13.036828', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a200bf1e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.147718925, 'message_signature': 'a541c35242d673157d53ba01efe5befdc0a7ab4a33ce55fc27f5ca3f6345d917'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:56:13.036828', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a200c676-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.147718925, 'message_signature': '72782dab3bdbb96682499f257e3bf93263cb951fb32bdd922d32e60042c839a7'}]}, 'timestamp': '2025-12-05 09:56:13.037210', '_unique_id': '3f9335646ea848978a23a9c0a90bf2e4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]:
2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:56:13
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.037 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8f3d2d13-bd89-47ff-9166-aa63a023fe07', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:56:13.038168', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'a200f4f2-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.140775277, 'message_signature': '56333ce0691cf0afc8dcda4b81ef63ee7d799c6f48d44a6a0005c01be21787ea'}]}, 'timestamp': '2025-12-05 09:56:13.038414', '_unique_id': 'adc9a16c624542efa43158b2abb922b3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:56:13 localhost
ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.038 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '95f1d1c8-4152-47bc-8309-fefb275f2fad', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:56:13.039358', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'a20121fc-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.140775277, 'message_signature': 'd67ed05297c71b40b0635dcd332747bb9a6cb5f977af7d912fe6c0c84220fc41'}]}, 'timestamp': '2025-12-05 09:56:13.039567', '_unique_id': '0855910280e34227bfa58226585d2024'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:56:13 localhost
ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.039 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.040 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.040 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '3d02e233-752b-481b-b508-b1daf9b257f9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:56:13.040509', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'a2014f06-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.140775277, 'message_signature': '1393eb27a5f16c2f3e4f24019e3056d6fc91a775c8a172ffa70d8a6eb9c5b804'}]}, 'timestamp': '2025-12-05 09:56:13.040721', '_unique_id': 'd164915e308244df90e5cb637b3927bf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:56:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:56:13 localhost
ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.041 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.042 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.042 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8c66ff49-07cc-4bad-b03e-fd87526454e0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:56:13.041870', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a2018408-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.123712185, 'message_signature': 'cb6b60dd5a60f67fbc672ac90e741aa17328823e2f2847b83b3949282f845883'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:56:13.041870', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a2018b6a-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.123712185, 'message_signature': '5600b3348fb17b6f8f5a05a2c6f24fd1463909c9954a04b8c8211cdf11e2da3f'}]}, 'timestamp': '2025-12-05 09:56:13.042286', '_unique_id': '5b92fa1ea60b4b08aa357c4f9862c3d1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
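The exception chain logged above is the whole failure: the TCP connect to the AMQP broker is refused at the socket level, amqp surfaces it as ConnectionRefusedError, and kombu re-raises it as kombu.exceptions.OperationalError, which oslo.messaging then reports for every notification it tries to publish. A minimal sketch that reproduces the same chain, assuming kombu 5.x and no broker listening on localhost:5672 (the deployment's actual transport_url host is not shown in this log):

    # repro_refused.py - minimal sketch, not the ceilometer code path itself
    from kombu import Connection
    from kombu.exceptions import OperationalError

    conn = Connection("amqp://guest:guest@localhost:5672//", connect_timeout=1)
    try:
        # ensure_connection() re-raises low-level socket errors as
        # kombu.exceptions.OperationalError, as in the traceback above.
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print("broker unreachable:", exc)              # [Errno 111] Connection refused
        print("original cause:", repr(exc.__cause__))  # ConnectionRefusedError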
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'ac86ac8d-4fe6-446f-83a9-aeb6d39665c1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:56:13.043231', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'a201ba40-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.140775277, 'message_signature': '55e326662b072c7e054f0da1d03f1d70c8504bc5f0f5e6c54d42177e253efe50'}]}, 'timestamp': '2025-12-05 09:56:13.043465', '_unique_id': 'b7c8b914a41c4eca9c1a486bfb23138f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
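The "The above exception was the direct cause of the following exception:" divider that separates the two tracebacks comes from kombu's _reraise_as_library_errors re-raising with raise ConnectionError(str(exc)) from exc (connection.py line 450 above). A standalone sketch of that chaining pattern, using a stand-in class rather than the real kombu exception:

    # chain_demo.py - sketch of 'raise ... from exc' exception chaining
    import traceback

    class OperationalError(ConnectionError):
        """Stand-in for kombu.exceptions.OperationalError."""

    try:
        try:
            raise ConnectionRefusedError(111, "Connection refused")
        except ConnectionRefusedError as exc:
            # same pattern as kombu's _reraise_as_library_errors
            raise OperationalError(str(exc)) from exc
    except OperationalError as wrapped:
        # __cause__ carries the original socket error; printing the traceback
        # reproduces the "direct cause" divider seen in the log
        traceback.print_exception(type(wrapped), wrapped, wrapped.__traceback__)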
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost 
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.044 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.044 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.044 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.045 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '3225097e-7033-46a7-bcea-30a93e6e11c7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:56:13.044413', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a201e75e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.147718925, 'message_signature': '0e3e44c0c41d87029f8dd719f7140a96c8ced1dddfba0905e58a2b3c5aafedb7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:56:13.044413', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a201eea2-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.147718925, 'message_signature': 'bf374334725207458241d64da38c19a8d910320620291f6eb600d5ce164936c8'}]}, 'timestamp': '2025-12-05 09:56:13.044793', '_unique_id': 'ece9be0376714c749f570195a28db158'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
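Each Payload above is a plain oslo.messaging notification dict: payload['samples'] holds one measurement per device, keyed by resource_id. A sketch of walking such a dict, with the field names taken from the log and the values abridged to the two disk.device.write.bytes samples:

    # payload_walk.py - sketch; 'notification' is abridged from the log above
    notification = {
        "event_type": "telemetry.polling",
        "payload": {
            "samples": [
                {"counter_name": "disk.device.write.bytes", "counter_unit": "B",
                 "counter_volume": 397312,
                 "resource_id": "96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda"},
                {"counter_name": "disk.device.write.bytes", "counter_unit": "B",
                 "counter_volume": 512,
                 "resource_id": "96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb"},
            ],
        },
    }

    # Print one line per sample: resource, counter, value, unit.
    for sample in notification["payload"]["samples"]:
        print("{resource_id}: {counter_name} = {counter_volume} {counter_unit}"
              .format(**sample))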
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.045 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.045 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.046 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'cdcea83f-034d-4281-95c5-f35c339c58da', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:56:13.045747', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'a2021b8e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.140775277, 'message_signature': 'b4b01e4e94e32d36802196d3afccdf325039f627d6109e4e672d7cb6221d9194'}]}, 'timestamp': '2025-12-05 09:56:13.045955', '_unique_id': '3c1f3e80ebd0480bbe70a628a7e48499'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.046 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.046 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
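Every traceback in this stretch bottoms out in self.sock.connect(sa) failing with errno 111 (ECONNREFUSED): the host is reachable but nothing is accepting connections on the broker port, which points at the rabbitmq service being down rather than a routing problem. The equivalent standalone probe, assuming the default AMQP port 5672 on localhost (the deployment's actual broker host may differ):

    # port_probe.py - sketch of the socket-level check the tracebacks are failing
    import errno
    import socket

    try:
        socket.create_connection(("localhost", 5672), timeout=1).close()
        print("AMQP port reachable")
    except OSError as exc:
        # errno 111 (ECONNREFUSED) means no listener on the port
        print("connect failed: errno", exc.errno,
              errno.errorcode.get(exc.errno, "unknown"))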
Payload={'message_id': 'c8326518-f964-4448-83b6-8d17a5d112e2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:56:13.046905', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'a20248b6-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.140775277, 'message_signature': '476c3a35040d209487be9ee8c27cd2f57f01085dc6845ded02a64d9795ddfd96'}]}, 'timestamp': '2025-12-05 09:56:13.047112', '_unique_id': 'f534186650f2482f8e215afd941c512d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:56:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.047 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1c71fc6d-b419-4d85-bd88-3d179d0e7369', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:56:13.048043', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'a202753e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.140775277, 'message_signature': '2f882da6a5f5d47ef48fee22c5bfa0f071c978450ea52086b0823ea89a27c8e7'}]}, 'timestamp': '2025-12-05 09:56:13.048270', '_unique_id': '9ad1fffcfbb94d61b17236b15fda1f58'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:56:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.048 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.049 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.049 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.049 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.049 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b7536fab-623c-4c36-b6bd-2ba76fce05bb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:56:13.049278', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a202a568-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.147718925, 'message_signature': '9e914b218582fffa25a7ffc0cdea3dedf8ca5e1101d64553c524ad7b6f7788d0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:56:13.049278', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a202acca-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11688.147718925, 'message_signature': '899cb1dc100dbe898a7947ca2a237bdc331f95cb51362963f9766fe6f5343d52'}]}, 'timestamp': '2025-12-05 09:56:13.049660', '_unique_id': 'b696e904bf274e0e85514c7fd2cf3c5b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:56:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:56:13.050 12 ERROR oslo_messaging.notify.messaging Dec 5 04:56:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
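
Every traceback above terminates in the same place: amqp's Transport._connect() calls self.sock.connect(sa) and gets ConnectionRefusedError ([Errno 111]), which kombu's _reraise_as_library_errors re-raises as kombu.exceptions.OperationalError; oslo.messaging then logs the failure and the sample is dropped. In plain terms, nothing is listening on the broker's TCP port on whichever host the transport_url points at. A minimal sketch of the failing step, assuming a RabbitMQ broker on its default AMQP port 5672 (the actual transport_url never appears in this log):

    import socket

    # Sketch of the step that fails above: amqp's Transport._connect() is,
    # at bottom, a plain TCP connect to the broker. Host and port here are
    # assumptions -- the transport_url is not shown in this log; 5672 is
    # RabbitMQ's default AMQP port.
    def probe_amqp(host="localhost", port=5672, timeout=5.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except ConnectionRefusedError:
            # Matches the [Errno 111] in the tracebacks: the port is closed.
            return False

    print("broker reachable:", probe_amqp())

For as long as such a probe returns False, every sample ceilometer polls in this window is logged and lost, which is why the identical traceback repeats for each pollster.
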
Dec 5 04:56:14 localhost podman[293077]: 2025-12-05 09:56:14.195304508 +0000 UTC m=+0.080238177 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd) Dec 5 04:56:14 localhost podman[293077]: 2025-12-05 09:56:14.2307101 +0000 UTC m=+0.115643799 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec 5 04:56:14 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 04:56:14 localhost nova_compute[280228]: 2025-12-05 09:56:14.597 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546419@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546419@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546419@4(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546418 calling monitor election
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546416 calling monitor election
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546421 calling monitor election
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546418 is new leader, mons np0005546418,np0005546416,np0005546421 in quorum (ranks 0,1,2)
Dec 5 04:56:16 localhost ceph-mon[292820]: Health check failed: 2/5 mons down, quorum np0005546418,np0005546416,np0005546421 (MON_DOWN)
Dec 5 04:56:16 localhost ceph-mon[292820]: Health detail: HEALTH_WARN 2/5 mons down, quorum np0005546418,np0005546416,np0005546421
Dec 5 04:56:16 localhost ceph-mon[292820]: [WRN] MON_DOWN: 2/5 mons down, quorum np0005546418,np0005546416,np0005546421
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546420 (rank 3) addr [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] is down (out of quorum)
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546419 (rank 4) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum)
Dec 5 04:56:16 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:16 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:16 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:56:16 localhost ceph-mon[292820]: mgrc update_daemon_metadata mon.np0005546419 metadata {addrs=[v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005546419.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005546419.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546420 calling monitor election
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546419 calling monitor election
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546418 calling monitor election
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546416 calling monitor election
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546421 calling monitor election
Dec 5 04:56:16 localhost ceph-mon[292820]: mon.np0005546418 is new leader, mons np0005546418,np0005546416,np0005546421,np0005546420,np0005546419 in quorum (ranks 0,1,2,3,4)
Dec 5 04:56:16 localhost ceph-mon[292820]: Health check cleared: MON_DOWN (was: 2/5 mons down, quorum np0005546418,np0005546416,np0005546421)
Dec 5 04:56:16 localhost ceph-mon[292820]: Cluster is now healthy
Dec 5 04:56:16 localhost ceph-mon[292820]: overall HEALTH_OK
Dec 5 04:56:17 localhost nova_compute[280228]: 2025-12-05 09:56:17.250 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:56:17 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:17 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:17 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546416.kmqcnq (monmap changed)...
Dec 5 04:56:17 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546416.kmqcnq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:56:17 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546416.kmqcnq on np0005546416.localdomain
Dec 5 04:56:17 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:17 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:17 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:17 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:56:18 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
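
The election sequence above is easier to follow with the monmap in mind: the cluster has five monitors, ranks 3 (np0005546420) and 4 (np0005546419) were out of quorum, and the remaining three still constitute a majority, so the cluster only degraded to HEALTH_WARN (MON_DOWN) rather than losing quorum; once the two rejoined, a fresh election put all five back in and the health check cleared. A quick sketch of the majority arithmetic, with the monitor names taken from the log:

    # Ceph monitor quorum is a simple majority of the monmap.
    mons = ["np0005546418", "np0005546416", "np0005546421",
            "np0005546420", "np0005546419"]              # 5 mons, ranks 0-4
    up = ["np0005546418", "np0005546416", "np0005546421"]  # ranks 3 and 4 down

    needed = len(mons) // 2 + 1   # majority: 3 of 5
    print(f"quorum held: {len(up) >= needed} ({len(up)} up, {needed} needed)")
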
Dec 5 04:56:18 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 5 04:56:18 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:19 localhost nova_compute[280228]: 2025-12-05 09:56:19.636 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:56:19 localhost podman[239519]: time="2025-12-05T09:56:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 04:56:19 localhost podman[239519]: @ - - [05/Dec/2025:09:56:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1"
Dec 5 04:56:19 localhost podman[239519]: @ - - [05/Dec/2025:09:56:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18700 "" "Go-http-client/1.1"
Dec 5 04:56:19 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:19 localhost ceph-mon[292820]: Reconfiguring crash.np0005546418 (monmap changed)...
Dec 5 04:56:19 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:56:19 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 5 04:56:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 04:56:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:56:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
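
The two access-log lines from podman[239519] are the libpod REST API served on the podman socket; the client ("Go-http-client/1.1") is the prometheus-podman-exporter, which the config_data below shows running with CONTAINER_HOST=unix:///run/podman/podman.sock. A sketch of issuing the same containers/json call from Python, assuming read access to /run/podman/podman.sock (the standard library's http.client does not speak unix sockets natively, so the connection is wired in by hand):

    import http.client
    import json
    import socket

    # HTTPConnection subclass that tunnels HTTP over a unix socket, the way
    # clients of /run/podman/podman.sock do.
    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")  # host is only used for the Host: header
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as the 200-response GET logged above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
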
Dec 5 04:56:20 localhost podman[293132]: 2025-12-05 09:56:20.168115143 +0000 UTC m=+0.095035532 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:56:20 localhost podman[293132]: 2025-12-05 09:56:20.180617717 +0000 UTC m=+0.107538146 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:56:20 localhost podman[293134]: 2025-12-05 09:56:20.22137553 +0000 UTC m=+0.141753853 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125) Dec 5 04:56:20 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 04:56:20 localhost podman[293134]: 2025-12-05 09:56:20.256696309 +0000 UTC m=+0.177074642 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 5 04:56:20 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
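
Each "Started /usr/bin/podman healthcheck run <id>" / "health_status=healthy" / "exec_died" / "Deactivated successfully" group above is one run of a transient systemd unit that executes the container's configured test command; the config_data for these containers shows the test is "/openstack/healthcheck" (with an argument for the ceilometer agent). The same check can be run by hand. A sketch, using a container name from this log and podman's exit-status convention (0 healthy, non-zero unhealthy):

    import subprocess

    # Run a container's configured healthcheck once, as the transient systemd
    # units above do. Container name taken from this log; any running
    # container with a healthcheck defined will do.
    def healthy(container="ceilometer_agent_compute"):
        proc = subprocess.run(["podman", "healthcheck", "run", container])
        return proc.returncode == 0  # 0 -> healthy, non-zero -> unhealthy

    print("ceilometer_agent_compute healthy:", healthy())
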
Dec 5 04:56:20 localhost podman[293133]: 2025-12-05 09:56:20.328630847 +0000 UTC m=+0.251405552 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 5 04:56:20 localhost podman[293133]: 2025-12-05 09:56:20.36174795 +0000 UTC m=+0.284522625 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 5 04:56:20 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 04:56:20 localhost podman[293228]:
Dec 5 04:56:20 localhost podman[293228]: 2025-12-05 09:56:20.613171381 +0000 UTC m=+0.076486596 container create e2087b36f4d1bf7c42318ec07284ae1c0968df5eb552492a6dbd89dc1544f827 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_hugle, release=1763362218, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, ceph=True, vcs-type=git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 04:56:20 localhost systemd[1]: Started libpod-conmon-e2087b36f4d1bf7c42318ec07284ae1c0968df5eb552492a6dbd89dc1544f827.scope.
Dec 5 04:56:20 localhost systemd[1]: Started libcrun container.
Dec 5 04:56:20 localhost podman[293228]: 2025-12-05 09:56:20.677208271 +0000 UTC m=+0.140523476 container init e2087b36f4d1bf7c42318ec07284ae1c0968df5eb552492a6dbd89dc1544f827 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_hugle, release=1763362218, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, ceph=True, io.buildah.version=1.41.4, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, name=rhceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7)
Dec 5 04:56:20 localhost podman[293228]: 2025-12-05 09:56:20.582942454 +0000 UTC m=+0.046257689 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:56:20 localhost podman[293228]: 2025-12-05 09:56:20.688341285 +0000 UTC m=+0.151656490 container start e2087b36f4d1bf7c42318ec07284ae1c0968df5eb552492a6dbd89dc1544f827 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_hugle, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, release=1763362218, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, distribution-scope=public, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:56:20 localhost podman[293228]: 2025-12-05 09:56:20.688701826 +0000 UTC m=+0.152017091 container attach e2087b36f4d1bf7c42318ec07284ae1c0968df5eb552492a6dbd89dc1544f827 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_hugle, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.buildah.version=1.41.4, GIT_BRANCH=main, vcs-type=git, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 5 04:56:20 localhost funny_hugle[293243]: 167 167
Dec 5 04:56:20 localhost systemd[1]: libpod-e2087b36f4d1bf7c42318ec07284ae1c0968df5eb552492a6dbd89dc1544f827.scope: Deactivated successfully.
Dec 5 04:56:20 localhost podman[293228]: 2025-12-05 09:56:20.692951053 +0000 UTC m=+0.156266268 container died e2087b36f4d1bf7c42318ec07284ae1c0968df5eb552492a6dbd89dc1544f827 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_hugle, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, ceph=True, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, version=7, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public)
Dec 5 04:56:20 localhost podman[293248]: 2025-12-05 09:56:20.793625073 +0000 UTC m=+0.088197896 container remove e2087b36f4d1bf7c42318ec07284ae1c0968df5eb552492a6dbd89dc1544f827 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_hugle, GIT_BRANCH=main, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, release=1763362218, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Dec 5 04:56:20 localhost systemd[1]: libpod-conmon-e2087b36f4d1bf7c42318ec07284ae1c0968df5eb552492a6dbd89dc1544f827.scope: Deactivated successfully.
Dec 5 04:56:20 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:20 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:20 localhost ceph-mon[292820]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 5 04:56:20 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:56:20 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 5 04:56:20 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:20 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:20 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 5 04:56:21 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 5 04:56:21 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/2377674985' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 5 04:56:21 localhost podman[293317]:
Dec 5 04:56:21 localhost podman[293317]: 2025-12-05 09:56:21.529158253 +0000 UTC m=+0.062642040 container create b1edd4ecd29fca7ae1d828bd9a9a272ac1455c2fd597efbf8e3e3bc991edde33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_noyce, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., release=1763362218, version=7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, architecture=x86_64, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 5 04:56:21 localhost systemd[1]: Started libpod-conmon-b1edd4ecd29fca7ae1d828bd9a9a272ac1455c2fd597efbf8e3e3bc991edde33.scope.
Dec 5 04:56:21 localhost systemd[1]: Started libcrun container.
Dec 5 04:56:21 localhost podman[293317]: 2025-12-05 09:56:21.595561254 +0000 UTC m=+0.129045031 container init b1edd4ecd29fca7ae1d828bd9a9a272ac1455c2fd597efbf8e3e3bc991edde33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_noyce, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., architecture=x86_64, release=1763362218, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=)
Dec 5 04:56:21 localhost podman[293317]: 2025-12-05 09:56:21.497485262 +0000 UTC m=+0.030969109 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:56:21 localhost great_noyce[293331]: 167 167
Dec 5 04:56:21 localhost systemd[1]: libpod-b1edd4ecd29fca7ae1d828bd9a9a272ac1455c2fd597efbf8e3e3bc991edde33.scope: Deactivated successfully.
Dec 5 04:56:21 localhost podman[293317]: 2025-12-05 09:56:21.605522663 +0000 UTC m=+0.139006490 container start b1edd4ecd29fca7ae1d828bd9a9a272ac1455c2fd597efbf8e3e3bc991edde33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_noyce, release=1763362218, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, distribution-scope=public, ceph=True, version=7, build-date=2025-11-26T19:44:28Z)
Dec 5 04:56:21 localhost podman[293317]: 2025-12-05 09:56:21.606370668 +0000 UTC m=+0.139854515 container attach b1edd4ecd29fca7ae1d828bd9a9a272ac1455c2fd597efbf8e3e3bc991edde33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_noyce, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, release=1763362218, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 04:56:21 localhost podman[293317]: 2025-12-05 09:56:21.609201933 +0000 UTC m=+0.142685740 container died b1edd4ecd29fca7ae1d828bd9a9a272ac1455c2fd597efbf8e3e3bc991edde33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_noyce, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux )
Dec 5 04:56:21 localhost podman[293336]: 2025-12-05 09:56:21.714302396 +0000 UTC m=+0.093604089 container remove b1edd4ecd29fca7ae1d828bd9a9a272ac1455c2fd597efbf8e3e3bc991edde33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_noyce, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 5 04:56:21 localhost systemd[1]: libpod-conmon-b1edd4ecd29fca7ae1d828bd9a9a272ac1455c2fd597efbf8e3e3bc991edde33.scope: Deactivated successfully.
Dec 5 04:56:21 localhost ceph-mon[292820]: Reconfiguring osd.0 (monmap changed)...
Dec 5 04:56:21 localhost ceph-mon[292820]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 5 04:56:21 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:21 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:21 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 5 04:56:22 localhost systemd[1]: var-lib-containers-storage-overlay-959edf67610b11e4b992a23a1bc5079c999ba81e9dce73e2a028e409123bba04-merged.mount: Deactivated successfully.
Dec 5 04:56:22 localhost nova_compute[280228]: 2025-12-05 09:56:22.285 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:56:22 localhost podman[293414]:
Dec 5 04:56:22 localhost podman[293414]: 2025-12-05 09:56:22.568585348 +0000 UTC m=+0.081003751 container create b409ec5ab8137d6d741ea7b46bbde765122bbcea95ec5e6e228e8791eb406835 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_feynman, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public)
Dec 5 04:56:22 localhost systemd[1]: Started libpod-conmon-b409ec5ab8137d6d741ea7b46bbde765122bbcea95ec5e6e228e8791eb406835.scope.
Dec 5 04:56:22 localhost podman[293414]: 2025-12-05 09:56:22.532216627 +0000 UTC m=+0.044635050 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:56:22 localhost systemd[1]: Started libcrun container.
Dec 5 04:56:22 localhost podman[293414]: 2025-12-05 09:56:22.655140804 +0000 UTC m=+0.167559207 container init b409ec5ab8137d6d741ea7b46bbde765122bbcea95ec5e6e228e8791eb406835 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_feynman, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, vcs-type=git, GIT_BRANCH=main, ceph=True, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, release=1763362218, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7)
Dec 5 04:56:22 localhost podman[293414]: 2025-12-05 09:56:22.666395231 +0000 UTC m=+0.178813624 container start b409ec5ab8137d6d741ea7b46bbde765122bbcea95ec5e6e228e8791eb406835 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_feynman, architecture=x86_64, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, name=rhceph, io.openshift.tags=rhceph ceph)
Dec 5 04:56:22 localhost podman[293414]: 2025-12-05 09:56:22.666665629 +0000 UTC m=+0.179084102 container attach b409ec5ab8137d6d741ea7b46bbde765122bbcea95ec5e6e228e8791eb406835 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_feynman, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, release=1763362218, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=)
Dec 5 04:56:22 localhost gifted_feynman[293430]: 167 167
Dec 5 04:56:22 localhost systemd[1]: libpod-b409ec5ab8137d6d741ea7b46bbde765122bbcea95ec5e6e228e8791eb406835.scope: Deactivated successfully.
Dec 5 04:56:22 localhost podman[293414]: 2025-12-05 09:56:22.669903816 +0000 UTC m=+0.182322239 container died b409ec5ab8137d6d741ea7b46bbde765122bbcea95ec5e6e228e8791eb406835 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_feynman, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, version=7, RELEASE=main, io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, com.redhat.component=rhceph-container, release=1763362218, architecture=x86_64, GIT_BRANCH=main)
Dec 5 04:56:22 localhost podman[293435]: 2025-12-05 09:56:22.774571555 +0000 UTC m=+0.090749642 container remove b409ec5ab8137d6d741ea7b46bbde765122bbcea95ec5e6e228e8791eb406835 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_feynman, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, name=rhceph, architecture=x86_64, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-type=git, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4)
Dec 5 04:56:22 localhost systemd[1]: libpod-conmon-b409ec5ab8137d6d741ea7b46bbde765122bbcea95ec5e6e228e8791eb406835.scope: Deactivated successfully.
Dec 5 04:56:23 localhost ceph-mon[292820]: Reconfiguring osd.3 (monmap changed)...
Dec 5 04:56:23 localhost ceph-mon[292820]: Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 5 04:56:23 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:23 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:23 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:23 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:23 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:23 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:23 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:23 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:23 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:23 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:23 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:23 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:23 localhost systemd[1]: var-lib-containers-storage-overlay-46c0f2b2792896f86ff808e44f83de7829f3388b0e27f454ac497c42728f1325-merged.mount: Deactivated successfully.
Dec 5 04:56:23 localhost podman[293511]:
Dec 5 04:56:23 localhost podman[293511]: 2025-12-05 09:56:23.678474785 +0000 UTC m=+0.076988660 container create 2dfa35c9d6f9ed314366c4e7970f62398f52351847961cdbaee866a070c0d0a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_saha, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux , release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 5 04:56:23 localhost systemd[1]: Started libpod-conmon-2dfa35c9d6f9ed314366c4e7970f62398f52351847961cdbaee866a070c0d0a6.scope.
Dec 5 04:56:23 localhost systemd[1]: Started libcrun container.
Dec 5 04:56:23 localhost podman[293511]: 2025-12-05 09:56:23.646797745 +0000 UTC m=+0.045311650 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:56:23 localhost podman[293511]: 2025-12-05 09:56:23.754353011 +0000 UTC m=+0.152866896 container init 2dfa35c9d6f9ed314366c4e7970f62398f52351847961cdbaee866a070c0d0a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_saha, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , RELEASE=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 5 04:56:23 localhost podman[293511]: 2025-12-05 09:56:23.766495815 +0000 UTC m=+0.165009690 container start 2dfa35c9d6f9ed314366c4e7970f62398f52351847961cdbaee866a070c0d0a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_saha, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, release=1763362218, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, maintainer=Guillaume Abrioux , name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z)
Dec 5 04:56:23 localhost elegant_saha[293526]: 167 167
Dec 5 04:56:23 localhost systemd[1]: libpod-2dfa35c9d6f9ed314366c4e7970f62398f52351847961cdbaee866a070c0d0a6.scope: Deactivated successfully.
Dec 5 04:56:23 localhost podman[293511]: 2025-12-05 09:56:23.76730838 +0000 UTC m=+0.165822265 container attach 2dfa35c9d6f9ed314366c4e7970f62398f52351847961cdbaee866a070c0d0a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_saha, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=1763362218, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, maintainer=Guillaume Abrioux , architecture=x86_64)
Dec 5 04:56:23 localhost podman[293511]: 2025-12-05 09:56:23.773417933 +0000 UTC m=+0.171931848 container died 2dfa35c9d6f9ed314366c4e7970f62398f52351847961cdbaee866a070c0d0a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_saha, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7)
Dec 5 04:56:23 localhost podman[293531]: 2025-12-05 09:56:23.863429292 +0000 UTC m=+0.083527146 container remove 2dfa35c9d6f9ed314366c4e7970f62398f52351847961cdbaee866a070c0d0a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_saha, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., release=1763362218, maintainer=Guillaume Abrioux , vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public)
Dec 5 04:56:23 localhost systemd[1]: libpod-conmon-2dfa35c9d6f9ed314366c4e7970f62398f52351847961cdbaee866a070c0d0a6.scope: Deactivated successfully.
Dec 5 04:56:24 localhost ceph-mon[292820]: Reconfig service osd.default_drive_group
Dec 5 04:56:24 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:24 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:24 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:24 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 5 04:56:24 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:56:24 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 5 04:56:24 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:24 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl'
Dec 5 04:56:24 localhost ceph-mon[292820]: from='mgr.14190 172.18.0.105:0/3681907320' entity='mgr.np0005546418.garyvl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:56:24 localhost systemd[1]: var-lib-containers-storage-overlay-d3359e1df2de6a0f16df9d77d87ceed234679bc93d38f332debe15297b71b7a1-merged.mount: Deactivated successfully.
Dec 5 04:56:24 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e85 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 5 04:56:24 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e85 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 5 04:56:24 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e86 e86: 6 total, 6 up, 6 in
Dec 5 04:56:24 localhost systemd-logind[760]: Session 64 logged out. Waiting for processes to exit.
Dec 5 04:56:24 localhost podman[293602]:
Dec 5 04:56:24 localhost podman[293602]: 2025-12-05 09:56:24.572743616 +0000 UTC m=+0.074629320 container create 30f71ebc851a12f0c1814cc3fff1d8d499db8be3941db7a6a5c5432e116dbda1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_khayyam, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, RELEASE=main, vcs-type=git, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, maintainer=Guillaume Abrioux , ceph=True)
Dec 5 04:56:24 localhost systemd[1]: Started libpod-conmon-30f71ebc851a12f0c1814cc3fff1d8d499db8be3941db7a6a5c5432e116dbda1.scope.
Dec 5 04:56:24 localhost systemd[1]: Started libcrun container.
Dec 5 04:56:24 localhost podman[293602]: 2025-12-05 09:56:24.640953711 +0000 UTC m=+0.142839445 container init 30f71ebc851a12f0c1814cc3fff1d8d499db8be3941db7a6a5c5432e116dbda1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_khayyam, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, RELEASE=main, ceph=True, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Dec 5 04:56:24 localhost podman[293602]: 2025-12-05 09:56:24.542167459 +0000 UTC m=+0.044053163 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:56:24 localhost nova_compute[280228]: 2025-12-05 09:56:24.679 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:56:24 localhost podman[293602]: 2025-12-05 09:56:24.683347334 +0000 UTC m=+0.185233048 container start 30f71ebc851a12f0c1814cc3fff1d8d499db8be3941db7a6a5c5432e116dbda1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_khayyam, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, io.buildah.version=1.41.4, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 5 04:56:24 localhost podman[293602]: 2025-12-05 09:56:24.683653183 +0000 UTC m=+0.185538887 container attach 30f71ebc851a12f0c1814cc3fff1d8d499db8be3941db7a6a5c5432e116dbda1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_khayyam, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, RELEASE=main, vcs-type=git, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, name=rhceph, version=7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True)
Dec 5 04:56:24 localhost condescending_khayyam[293618]: 167 167
Dec 5 04:56:24 localhost systemd[1]: libpod-30f71ebc851a12f0c1814cc3fff1d8d499db8be3941db7a6a5c5432e116dbda1.scope: Deactivated successfully.
Dec 5 04:56:24 localhost podman[293602]: 2025-12-05 09:56:24.696871038 +0000 UTC m=+0.198756772 container died 30f71ebc851a12f0c1814cc3fff1d8d499db8be3941db7a6a5c5432e116dbda1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_khayyam, io.buildah.version=1.41.4, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, com.redhat.component=rhceph-container)
Dec 5 04:56:24 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e86 _set_new_cache_sizes cache_size:1019537981 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:56:24 localhost podman[293623]: 2025-12-05 09:56:24.798979631 +0000 UTC m=+0.092969759 container remove 30f71ebc851a12f0c1814cc3fff1d8d499db8be3941db7a6a5c5432e116dbda1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_khayyam, distribution-scope=public, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, ceph=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_CLEAN=True, name=rhceph)
Dec 5 04:56:24 localhost systemd[1]: libpod-conmon-30f71ebc851a12f0c1814cc3fff1d8d499db8be3941db7a6a5c5432e116dbda1.scope: Deactivated successfully.
Dec 5 04:56:24 localhost systemd[1]: session-64.scope: Deactivated successfully.
Dec 5 04:56:24 localhost systemd[1]: session-64.scope: Consumed 26.346s CPU time.
Dec 5 04:56:24 localhost systemd-logind[760]: Removed session 64.
Dec 5 04:56:25 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 5 04:56:25 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 5 04:56:25 localhost ceph-mon[292820]: from='client.? 172.18.0.200:0/431698000' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 5 04:56:25 localhost ceph-mon[292820]: Activating manager daemon np0005546415.knqtle
Dec 5 04:56:25 localhost ceph-mon[292820]: from='client.? 172.18.0.200:0/431698000' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 5 04:56:25 localhost systemd[1]: var-lib-containers-storage-overlay-fad9ee72482dc0f9fb835cac0494851c9a7fedfb36fdffbc33496fa83b7f150f-merged.mount: Deactivated successfully.
Dec 5 04:56:27 localhost openstack_network_exporter[241668]: ERROR 09:56:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 04:56:27 localhost openstack_network_exporter[241668]: ERROR 09:56:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:56:27 localhost openstack_network_exporter[241668]: ERROR 09:56:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:56:27 localhost openstack_network_exporter[241668]: ERROR 09:56:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 04:56:27 localhost openstack_network_exporter[241668]:
Dec 5 04:56:27 localhost openstack_network_exporter[241668]: ERROR 09:56:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 04:56:27 localhost openstack_network_exporter[241668]:
Dec 5 04:56:27 localhost nova_compute[280228]: 2025-12-05 09:56:27.287 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 04:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 04:56:29 localhost podman[293641]: 2025-12-05 09:56:29.208846162 +0000 UTC m=+0.084908958 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 04:56:29 localhost podman[293641]: 2025-12-05 09:56:29.218105669 +0000 UTC m=+0.094168465 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 04:56:29 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 04:56:29 localhost systemd[1]: tmp-crun.1KqGsp.mount: Deactivated successfully.
Dec 5 04:56:29 localhost podman[293640]: 2025-12-05 09:56:29.310450469 +0000 UTC m=+0.186712990 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 04:56:29 localhost podman[293640]: 2025-12-05 09:56:29.350110398 +0000 UTC m=+0.226373289 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 04:56:29 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 04:56:29 localhost nova_compute[280228]: 2025-12-05 09:56:29.719 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:29 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e86 _set_new_cache_sizes cache_size:1020041857 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:56:32 localhost nova_compute[280228]: 2025-12-05 09:56:32.331 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:34 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e86 _set_new_cache_sizes cache_size:1020054410 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:56:34 localhost nova_compute[280228]: 2025-12-05 09:56:34.763 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:35 localhost systemd[1]: Stopping User Manager for UID 1002... Dec 5 04:56:35 localhost systemd[26052]: Activating special unit Exit the Session... Dec 5 04:56:35 localhost systemd[26052]: Removed slice User Background Tasks Slice. Dec 5 04:56:35 localhost systemd[26052]: Stopped target Main User Target. Dec 5 04:56:35 localhost systemd[26052]: Stopped target Basic System. Dec 5 04:56:35 localhost systemd[26052]: Stopped target Paths. Dec 5 04:56:35 localhost systemd[26052]: Stopped target Sockets. Dec 5 04:56:35 localhost systemd[26052]: Stopped target Timers. Dec 5 04:56:35 localhost systemd[26052]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 5 04:56:35 localhost systemd[26052]: Stopped Daily Cleanup of User's Temporary Directories. Dec 5 04:56:35 localhost systemd[26052]: Closed D-Bus User Message Bus Socket. Dec 5 04:56:35 localhost systemd[26052]: Stopped Create User's Volatile Files and Directories. Dec 5 04:56:35 localhost systemd[26052]: Removed slice User Application Slice. Dec 5 04:56:35 localhost systemd[26052]: Reached target Shutdown. Dec 5 04:56:35 localhost systemd[26052]: Finished Exit the Session. Dec 5 04:56:35 localhost systemd[26052]: Reached target Exit the Session. Dec 5 04:56:35 localhost systemd[1]: user@1002.service: Deactivated successfully. Dec 5 04:56:35 localhost systemd[1]: Stopped User Manager for UID 1002. Dec 5 04:56:35 localhost systemd[1]: user@1002.service: Consumed 13.520s CPU time, read 0B from disk, written 7.0K to disk. Dec 5 04:56:35 localhost systemd[1]: Stopping User Runtime Directory /run/user/1002... Dec 5 04:56:35 localhost systemd[1]: run-user-1002.mount: Deactivated successfully. Dec 5 04:56:35 localhost systemd[1]: user-runtime-dir@1002.service: Deactivated successfully. Dec 5 04:56:35 localhost systemd[1]: Stopped User Runtime Directory /run/user/1002. Dec 5 04:56:35 localhost systemd[1]: Removed slice User Slice of UID 1002. Dec 5 04:56:35 localhost systemd[1]: user-1002.slice: Consumed 4min 8.142s CPU time. 
Dec 5 04:56:37 localhost nova_compute[280228]: 2025-12-05 09:56:37.376 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:39 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e86 _set_new_cache_sizes cache_size:1020054723 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:56:39 localhost nova_compute[280228]: 2025-12-05 09:56:39.792 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:42 localhost nova_compute[280228]: 2025-12-05 09:56:42.418 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:56:43 localhost systemd[1]: tmp-crun.OXK4fG.mount: Deactivated successfully. Dec 5 04:56:43 localhost podman[293689]: 2025-12-05 09:56:43.223836659 +0000 UTC m=+0.109779284 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64) Dec 5 04:56:43 localhost podman[293689]: 2025-12-05 09:56:43.23358229 +0000 UTC m=+0.119524865 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter) Dec 5 04:56:43 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:56:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:56:44 localhost systemd[1]: tmp-crun.IbdgIW.mount: Deactivated successfully. 
Dec 5 04:56:44 localhost podman[293708]: 2025-12-05 09:56:44.395458327 +0000 UTC m=+0.080739613 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible) Dec 5 04:56:44 localhost podman[293708]: 2025-12-05 09:56:44.434674143 +0000 UTC m=+0.119955399 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3) Dec 5 04:56:44 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:56:44 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:56:44 localhost nova_compute[280228]: 2025-12-05 09:56:44.837 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:47 localhost nova_compute[280228]: 2025-12-05 09:56:47.464 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:49 localhost systemd[1]: session-65.scope: Deactivated successfully. Dec 5 04:56:49 localhost systemd[1]: session-65.scope: Consumed 1.711s CPU time. Dec 5 04:56:49 localhost systemd-logind[760]: Session 65 logged out. Waiting for processes to exit. Dec 5 04:56:49 localhost systemd-logind[760]: Removed session 65. Dec 5 04:56:49 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:56:49 localhost nova_compute[280228]: 2025-12-05 09:56:49.838 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:49 localhost podman[239519]: time="2025-12-05T09:56:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:56:49 localhost podman[239519]: @ - - [05/Dec/2025:09:56:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1" Dec 5 04:56:49 localhost podman[239519]: @ - - [05/Dec/2025:09:56:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18704 "" "Go-http-client/1.1" Dec 5 04:56:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:56:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:56:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:56:51 localhost systemd[1]: tmp-crun.PQ8Fvg.mount: Deactivated successfully. 
Dec 5 04:56:51 localhost podman[293728]: 2025-12-05 09:56:51.19763235 +0000 UTC m=+0.086196916 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:56:51 localhost podman[293728]: 2025-12-05 09:56:51.207343912 +0000 UTC m=+0.095908568 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:56:51 localhost podman[293729]: 2025-12-05 09:56:51.252811585 +0000 UTC m=+0.135231396 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 5 04:56:51 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 04:56:51 localhost podman[293729]: 2025-12-05 09:56:51.288891948 +0000 UTC m=+0.171311749 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 5 04:56:51 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
Dec 5 04:56:51 localhost podman[293730]: 2025-12-05 09:56:51.359586368 +0000 UTC m=+0.238171825 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible) Dec 5 04:56:51 localhost podman[293730]: 2025-12-05 09:56:51.372820555 +0000 UTC m=+0.251406052 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:56:51 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:56:52 localhost nova_compute[280228]: 2025-12-05 09:56:52.511 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:54 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:56:54 localhost nova_compute[280228]: 2025-12-05 09:56:54.871 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:55 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e87 e87: 6 total, 6 up, 6 in Dec 5 04:56:55 localhost ceph-mon[292820]: Activating manager daemon np0005546416.kmqcnq Dec 5 04:56:55 localhost ceph-mon[292820]: Manager daemon np0005546415.knqtle is unresponsive, replacing it with standby daemon np0005546416.kmqcnq Dec 5 04:56:56 localhost sshd[293785]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:56:56 localhost systemd[1]: Created slice User Slice of UID 1002. Dec 5 04:56:56 localhost systemd[1]: Starting User Runtime Directory /run/user/1002... Dec 5 04:56:56 localhost systemd-logind[760]: New session 67 of user ceph-admin. Dec 5 04:56:56 localhost systemd[1]: Finished User Runtime Directory /run/user/1002. Dec 5 04:56:56 localhost systemd[1]: Starting User Manager for UID 1002... Dec 5 04:56:56 localhost systemd[293789]: Queued start job for default target Main User Target. Dec 5 04:56:56 localhost systemd[293789]: Created slice User Application Slice. Dec 5 04:56:56 localhost systemd[293789]: Started Mark boot as successful after the user session has run 2 minutes. Dec 5 04:56:56 localhost systemd[293789]: Started Daily Cleanup of User's Temporary Directories. Dec 5 04:56:56 localhost systemd[293789]: Reached target Paths. Dec 5 04:56:56 localhost systemd[293789]: Reached target Timers. Dec 5 04:56:56 localhost systemd[293789]: Starting D-Bus User Message Bus Socket... Dec 5 04:56:56 localhost systemd[293789]: Starting Create User's Volatile Files and Directories... Dec 5 04:56:56 localhost systemd[293789]: Listening on D-Bus User Message Bus Socket. Dec 5 04:56:56 localhost systemd[293789]: Reached target Sockets. Dec 5 04:56:56 localhost systemd[293789]: Finished Create User's Volatile Files and Directories. Dec 5 04:56:56 localhost systemd[293789]: Reached target Basic System. Dec 5 04:56:56 localhost systemd[293789]: Reached target Main User Target. Dec 5 04:56:56 localhost systemd[293789]: Startup finished in 133ms. Dec 5 04:56:56 localhost systemd[1]: Started User Manager for UID 1002. Dec 5 04:56:56 localhost systemd[1]: Started Session 67 of User ceph-admin. 
Dec 5 04:56:56 localhost ceph-mon[292820]: Manager daemon np0005546416.kmqcnq is now available Dec 5 04:56:56 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"} : dispatch Dec 5 04:56:56 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"} : dispatch Dec 5 04:56:56 localhost ceph-mon[292820]: removing stray HostCache host record np0005546415.localdomain.devices.0 Dec 5 04:56:56 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"}]': finished Dec 5 04:56:56 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"} : dispatch Dec 5 04:56:56 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"} : dispatch Dec 5 04:56:56 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546415.localdomain.devices.0"}]': finished Dec 5 04:56:56 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546416.kmqcnq/mirror_snapshot_schedule"} : dispatch Dec 5 04:56:56 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546416.kmqcnq/mirror_snapshot_schedule"} : dispatch Dec 5 04:56:56 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546416.kmqcnq/trash_purge_schedule"} : dispatch Dec 5 04:56:56 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546416.kmqcnq/trash_purge_schedule"} : dispatch Dec 5 04:56:57 localhost openstack_network_exporter[241668]: ERROR 09:56:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:56:57 localhost openstack_network_exporter[241668]: ERROR 09:56:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:56:57 localhost openstack_network_exporter[241668]: ERROR 09:56:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:56:57 localhost openstack_network_exporter[241668]: ERROR 09:56:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:56:57 localhost openstack_network_exporter[241668]: Dec 5 04:56:57 localhost openstack_network_exporter[241668]: ERROR 09:56:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:56:57 localhost openstack_network_exporter[241668]: Dec 5 04:56:57 localhost nova_compute[280228]: 2025-12-05 09:56:57.553 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:56:58 localhost 
ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:56:58 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:56:58 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:56:58 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:56:58 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:56:58 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:56:58 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:56:58 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:56:58 localhost podman[293969]: 2025-12-05 09:56:58.35154483 +0000 UTC m=+0.077232187 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-type=git, version=7, description=Red Hat Ceph Storage 7, release=1763362218) Dec 5 04:56:58 localhost podman[293969]: 2025-12-05 09:56:58.443947672 +0000 UTC m=+0.169635049 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, release=1763362218, maintainer=Guillaume Abrioux , ceph=True, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph) Dec 5 04:56:59 localhost ceph-mon[292820]: 
[05/Dec/2025:09:56:57] ENGINE Bus STARTING Dec 5 04:56:59 localhost ceph-mon[292820]: [05/Dec/2025:09:56:58] ENGINE Serving on http://172.18.0.104:8765 Dec 5 04:56:59 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:56:59 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:56:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:56:59 localhost systemd[1]: Stopping User Manager for UID 1003... Dec 5 04:56:59 localhost systemd[290856]: Activating special unit Exit the Session... Dec 5 04:56:59 localhost systemd[290856]: Stopped target Main User Target. Dec 5 04:56:59 localhost systemd[290856]: Stopped target Basic System. Dec 5 04:56:59 localhost systemd[290856]: Stopped target Paths. Dec 5 04:56:59 localhost systemd[290856]: Stopped target Sockets. Dec 5 04:56:59 localhost systemd[290856]: Stopped target Timers. Dec 5 04:56:59 localhost systemd[290856]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 5 04:56:59 localhost systemd[290856]: Stopped Daily Cleanup of User's Temporary Directories. Dec 5 04:56:59 localhost systemd[290856]: Closed D-Bus User Message Bus Socket. Dec 5 04:56:59 localhost systemd[290856]: Stopped Create User's Volatile Files and Directories. Dec 5 04:56:59 localhost systemd[290856]: Removed slice User Application Slice. Dec 5 04:56:59 localhost systemd[290856]: Reached target Shutdown. Dec 5 04:56:59 localhost systemd[290856]: Finished Exit the Session. Dec 5 04:56:59 localhost systemd[290856]: Reached target Exit the Session. Dec 5 04:56:59 localhost systemd[1]: user@1003.service: Deactivated successfully. Dec 5 04:56:59 localhost systemd[1]: Stopped User Manager for UID 1003. Dec 5 04:56:59 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... 
Dec 5 04:56:59 localhost podman[294105]: 2025-12-05 09:56:59.370309466 +0000 UTC m=+0.100706672 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:56:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:56:59 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Dec 5 04:56:59 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Dec 5 04:56:59 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Dec 5 04:56:59 localhost systemd[1]: Removed slice User Slice of UID 1003. Dec 5 04:56:59 localhost systemd[1]: user-1003.slice: Consumed 2.308s CPU time. 
Dec 5 04:56:59 localhost podman[294105]: 2025-12-05 09:56:59.408727077 +0000 UTC m=+0.139124293 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:56:59 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:56:59 localhost systemd[1]: tmp-crun.YgRF3W.mount: Deactivated successfully. Dec 5 04:56:59 localhost podman[294146]: 2025-12-05 09:56:59.48450109 +0000 UTC m=+0.090192466 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:56:59 localhost podman[294146]: 2025-12-05 09:56:59.540654144 +0000 UTC m=+0.146345480 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 04:56:59 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:56:59 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:56:59 localhost nova_compute[280228]: 2025-12-05 09:56:59.902 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:57:00 localhost ceph-mon[292820]: [05/Dec/2025:09:56:58] ENGINE Serving on https://172.18.0.104:7150 Dec 5 04:57:00 localhost ceph-mon[292820]: [05/Dec/2025:09:56:58] ENGINE Bus STARTED Dec 5 04:57:00 localhost ceph-mon[292820]: [05/Dec/2025:09:56:58] ENGINE Client ('172.18.0.104', 47164) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 5 04:57:00 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:00 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:00 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:00 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:00 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:00 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:00 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:00 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:00 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:00 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch Dec 5 04:57:00 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:00 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:01 localhost 
ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 5 04:57:01 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": 
"auth get", "entity": "client.admin"} : dispatch Dec 5 04:57:02 localhost ceph-mon[292820]: Saving service mon spec with placement label:mon Dec 5 04:57:02 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M Dec 5 04:57:02 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 04:57:02 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M Dec 5 04:57:02 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 04:57:02 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M Dec 5 04:57:02 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 04:57:02 localhost ceph-mon[292820]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf Dec 5 04:57:02 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf Dec 5 04:57:02 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf Dec 5 04:57:02 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf Dec 5 04:57:02 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf Dec 5 04:57:02 localhost nova_compute[280228]: 2025-12-05 09:57:02.596 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546416.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:57:03 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:57:03 localhost ceph-mon[292820]: mon.np0005546419@4(peon) 
e9 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Dec 5 04:57:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/3434603813' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Dec 5 04:57:03 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 5 04:57:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2465703393' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 5 04:57:03 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 5 04:57:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2465703393' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 5 04:57:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:57:03.903 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:57:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:57:03.904 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:57:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:57:03.905 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:57:04 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:57:04 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:57:04 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:04 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:04 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:04 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:04 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:04 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:04 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:04 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:04 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:04 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:04 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:04 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' Dec 5 04:57:04 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 
Dec 5 04:57:04 localhost nova_compute[280228]: 2025-12-05 09:57:04.932 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:05 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:57:05 localhost ceph-mon[292820]: Reconfiguring mon.np0005546416 (monmap changed)...
Dec 5 04:57:05 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546416 on np0005546416.localdomain
Dec 5 04:57:05 localhost nova_compute[280228]: 2025-12-05 09:57:05.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:57:05 localhost nova_compute[280228]: 2025-12-05 09:57:05.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:57:05 localhost nova_compute[280228]: 2025-12-05 09:57:05.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:57:05 localhost nova_compute[280228]: 2025-12-05 09:57:05.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:57:05 localhost nova_compute[280228]: 2025-12-05 09:57:05.527 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:57:05 localhost nova_compute[280228]: 2025-12-05 09:57:05.528 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:57:05 localhost nova_compute[280228]: 2025-12-05 09:57:05.528 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:57:05 localhost nova_compute[280228]: 2025-12-05 09:57:05.529 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 5 04:57:05 localhost nova_compute[280228]: 2025-12-05 09:57:05.529 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 04:57:05 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 04:57:05 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/662041042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 04:57:05 localhost nova_compute[280228]: 2025-12-05 09:57:05.923 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.079 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.080 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.306 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.307 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11794MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.308 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.309 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.407 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.407 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.408 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 5 04:57:06 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:06 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:57:06 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:06 localhost ceph-mon[292820]: Reconfiguring mon.np0005546418 (monmap changed)...
Dec 5 04:57:06 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain
Dec 5 04:57:06 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:06 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:06 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 5 04:57:06 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.475 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.882 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.889 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.902 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.904 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 5 04:57:06 localhost nova_compute[280228]: 2025-12-05 09:57:06.905 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:57:06 localhost podman[295009]:
Dec 5 04:57:06 localhost podman[295009]: 2025-12-05 09:57:06.931723247 +0000 UTC m=+0.079080953 container create ac8ad6c9607633b3cca1c222d7bbb3c93b65ddaa28eb65c1902ff75ef33e7789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_johnson, build-date=2025-11-26T19:44:28Z, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, maintainer=Guillaume Abrioux )
Dec 5 04:57:06 localhost systemd[1]: Started libpod-conmon-ac8ad6c9607633b3cca1c222d7bbb3c93b65ddaa28eb65c1902ff75ef33e7789.scope.
Dec 5 04:57:06 localhost systemd[1]: Started libcrun container.
Dec 5 04:57:06 localhost podman[295009]: 2025-12-05 09:57:06.89948741 +0000 UTC m=+0.046845136 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:57:07 localhost podman[295009]: 2025-12-05 09:57:07.009230631 +0000 UTC m=+0.156588327 container init ac8ad6c9607633b3cca1c222d7bbb3c93b65ddaa28eb65c1902ff75ef33e7789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_johnson, version=7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, io.buildah.version=1.41.4, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:57:07 localhost podman[295009]: 2025-12-05 09:57:07.020497169 +0000 UTC m=+0.167854865 container start ac8ad6c9607633b3cca1c222d7bbb3c93b65ddaa28eb65c1902ff75ef33e7789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_johnson, release=1763362218, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, ceph=True, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7)
Dec 5 04:57:07 localhost podman[295009]: 2025-12-05 09:57:07.020790099 +0000 UTC m=+0.168147835 container attach ac8ad6c9607633b3cca1c222d7bbb3c93b65ddaa28eb65c1902ff75ef33e7789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_johnson, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, com.redhat.component=rhceph-container)
Dec 5 04:57:07 localhost angry_johnson[295025]: 167 167
Dec 5 04:57:07 localhost systemd[1]: libpod-ac8ad6c9607633b3cca1c222d7bbb3c93b65ddaa28eb65c1902ff75ef33e7789.scope: Deactivated successfully.
Dec 5 04:57:07 localhost podman[295009]: 2025-12-05 09:57:07.026005434 +0000 UTC m=+0.173363160 container died ac8ad6c9607633b3cca1c222d7bbb3c93b65ddaa28eb65c1902ff75ef33e7789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_johnson, name=rhceph, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, architecture=x86_64, ceph=True, vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7)
Dec 5 04:57:07 localhost podman[295030]: 2025-12-05 09:57:07.13255688 +0000 UTC m=+0.097747112 container remove ac8ad6c9607633b3cca1c222d7bbb3c93b65ddaa28eb65c1902ff75ef33e7789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_johnson, RELEASE=main, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z)
Dec 5 04:57:07 localhost systemd[1]: libpod-conmon-ac8ad6c9607633b3cca1c222d7bbb3c93b65ddaa28eb65c1902ff75ef33e7789.scope: Deactivated successfully.
Dec 5 04:57:07 localhost nova_compute[280228]: 2025-12-05 09:57:07.622 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:07 localhost ceph-mon[292820]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 5 04:57:07 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:07 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:07 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:07 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:57:07 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:07 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:57:07 localhost systemd[1]: tmp-crun.pj45ur.mount: Deactivated successfully.
Dec 5 04:57:07 localhost systemd[1]: var-lib-containers-storage-overlay-3f4e310c4dd4a5bd5b7dba03939375e00fd66fb59c87a0628b985fe7467909ef-merged.mount: Deactivated successfully.
Dec 5 04:57:07 localhost podman[295105]:
Dec 5 04:57:07 localhost podman[295105]: 2025-12-05 09:57:07.996060019 +0000 UTC m=+0.090416304 container create 157f9fa36dc80255cb4281d7ee68865cfee53511396ab63c9ff65ad8bd82f38d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_maxwell, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.buildah.version=1.41.4, release=1763362218, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, distribution-scope=public, vcs-type=git, ceph=True, architecture=x86_64, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Dec 5 04:57:08 localhost systemd[1]: Started libpod-conmon-157f9fa36dc80255cb4281d7ee68865cfee53511396ab63c9ff65ad8bd82f38d.scope.
Dec 5 04:57:08 localhost systemd[1]: Started libcrun container.
Dec 5 04:57:08 localhost podman[295105]: 2025-12-05 09:57:07.95610852 +0000 UTC m=+0.050464845 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:57:08 localhost podman[295105]: 2025-12-05 09:57:08.055606744 +0000 UTC m=+0.149963009 container init 157f9fa36dc80255cb4281d7ee68865cfee53511396ab63c9ff65ad8bd82f38d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_maxwell, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vendor=Red Hat, Inc., name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1763362218, description=Red Hat Ceph Storage 7)
Dec 5 04:57:08 localhost podman[295105]: 2025-12-05 09:57:08.066747729 +0000 UTC m=+0.161104034 container start 157f9fa36dc80255cb4281d7ee68865cfee53511396ab63c9ff65ad8bd82f38d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_maxwell, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vendor=Red Hat, Inc., release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7)
Dec 5 04:57:08 localhost podman[295105]: 2025-12-05 09:57:08.067017207 +0000 UTC m=+0.161373492 container attach 157f9fa36dc80255cb4281d7ee68865cfee53511396ab63c9ff65ad8bd82f38d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_maxwell, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, name=rhceph, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main)
Dec 5 04:57:08 localhost wizardly_maxwell[295119]: 167 167
Dec 5 04:57:08 localhost systemd[1]: libpod-157f9fa36dc80255cb4281d7ee68865cfee53511396ab63c9ff65ad8bd82f38d.scope: Deactivated successfully.
Dec 5 04:57:08 localhost podman[295105]: 2025-12-05 09:57:08.080360667 +0000 UTC m=+0.174716962 container died 157f9fa36dc80255cb4281d7ee68865cfee53511396ab63c9ff65ad8bd82f38d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_maxwell, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, name=rhceph, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, ceph=True, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 04:57:08 localhost podman[295124]: 2025-12-05 09:57:08.181982565 +0000 UTC m=+0.089561618 container remove 157f9fa36dc80255cb4281d7ee68865cfee53511396ab63c9ff65ad8bd82f38d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_maxwell, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_CLEAN=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, version=7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , RELEASE=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 04:57:08 localhost systemd[1]: libpod-conmon-157f9fa36dc80255cb4281d7ee68865cfee53511396ab63c9ff65ad8bd82f38d.scope: Deactivated successfully.
Dec 5 04:57:08 localhost nova_compute[280228]: 2025-12-05 09:57:08.904 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:57:08 localhost nova_compute[280228]: 2025-12-05 09:57:08.926 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:57:08 localhost nova_compute[280228]: 2025-12-05 09:57:08.926 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 5 04:57:08 localhost nova_compute[280228]: 2025-12-05 09:57:08.927 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 5 04:57:08 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 5 04:57:08 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 5 04:57:08 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:08 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:57:08 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:08 localhost systemd[1]: tmp-crun.45Pqiw.mount: Deactivated successfully.
Dec 5 04:57:08 localhost systemd[1]: var-lib-containers-storage-overlay-97d7e99cb00e9657b94040740357caa78159137be64604582ca4e4d291be5e97-merged.mount: Deactivated successfully.
Dec 5 04:57:08 localhost podman[295196]:
Dec 5 04:57:08 localhost podman[295196]: 2025-12-05 09:57:08.98703987 +0000 UTC m=+0.082879367 container create 2c6ca61ae8e1d91c4fafb3e7ebdd90a785d8a9fe1eba1fb20a91df27333d66d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_cori, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, RELEASE=main, maintainer=Guillaume Abrioux , release=1763362218, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7)
Dec 5 04:57:09 localhost systemd[1]: Started libpod-conmon-2c6ca61ae8e1d91c4fafb3e7ebdd90a785d8a9fe1eba1fb20a91df27333d66d7.scope.
Dec 5 04:57:09 localhost systemd[1]: Started libcrun container.
Dec 5 04:57:09 localhost podman[295196]: 2025-12-05 09:57:08.956349719 +0000 UTC m=+0.052189246 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:57:09 localhost podman[295196]: 2025-12-05 09:57:09.064475252 +0000 UTC m=+0.160314729 container init 2c6ca61ae8e1d91c4fafb3e7ebdd90a785d8a9fe1eba1fb20a91df27333d66d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_cori, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, release=1763362218, name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_BRANCH=main, maintainer=Guillaume Abrioux )
Dec 5 04:57:09 localhost podman[295196]: 2025-12-05 09:57:09.075365119 +0000 UTC m=+0.171204616 container start 2c6ca61ae8e1d91c4fafb3e7ebdd90a785d8a9fe1eba1fb20a91df27333d66d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_cori, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.buildah.version=1.41.4, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, release=1763362218, name=rhceph)
Dec 5 04:57:09 localhost podman[295196]: 2025-12-05 09:57:09.075645887 +0000 UTC m=+0.171485364 container attach 2c6ca61ae8e1d91c4fafb3e7ebdd90a785d8a9fe1eba1fb20a91df27333d66d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_cori, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True)
Dec 5 04:57:09 localhost silly_cori[295211]: 167 167
Dec 5 04:57:09 localhost systemd[1]: libpod-2c6ca61ae8e1d91c4fafb3e7ebdd90a785d8a9fe1eba1fb20a91df27333d66d7.scope: Deactivated successfully.
Dec 5 04:57:09 localhost podman[295196]: 2025-12-05 09:57:09.07941002 +0000 UTC m=+0.175249537 container died 2c6ca61ae8e1d91c4fafb3e7ebdd90a785d8a9fe1eba1fb20a91df27333d66d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_cori, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, vcs-type=git, release=1763362218, build-date=2025-11-26T19:44:28Z, architecture=x86_64, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7)
Dec 5 04:57:09 localhost podman[295216]: 2025-12-05 09:57:09.191389428 +0000 UTC m=+0.100416652 container remove 2c6ca61ae8e1d91c4fafb3e7ebdd90a785d8a9fe1eba1fb20a91df27333d66d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_cori, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, ceph=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, release=1763362218, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4)
Dec 5 04:57:09 localhost systemd[1]: libpod-conmon-2c6ca61ae8e1d91c4fafb3e7ebdd90a785d8a9fe1eba1fb20a91df27333d66d7.scope: Deactivated successfully.
Dec 5 04:57:09 localhost nova_compute[280228]: 2025-12-05 09:57:09.288 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 5 04:57:09 localhost nova_compute[280228]: 2025-12-05 09:57:09.290 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 5 04:57:09 localhost nova_compute[280228]: 2025-12-05 09:57:09.290 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 5 04:57:09 localhost nova_compute[280228]: 2025-12-05 09:57:09.291 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 5 04:57:09 localhost nova_compute[280228]: 2025-12-05 09:57:09.705 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 5 04:57:09 localhost nova_compute[280228]: 2025-12-05 09:57:09.722 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 5 04:57:09 localhost nova_compute[280228]: 2025-12-05 09:57:09.723 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 5 04:57:09 localhost nova_compute[280228]: 2025-12-05 09:57:09.724 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:57:09 localhost nova_compute[280228]: 2025-12-05 09:57:09.724 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:57:09 localhost nova_compute[280228]: 2025-12-05 09:57:09.725 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr handle_mgr_map Activating!
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr handle_mgr_map I am now activating
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e88 e88: 6 total, 6 up, 6 in
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546416"} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546416"} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546418"} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546419"} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005546419.rweotn"} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata", "who": "mds.np0005546419.rweotn"} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon).mds e16 all = 0
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} v 0)
mon_command({"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} v 0) Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} : dispatch Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon).mds e16 all = 0 Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005546420.eqhasr"} v 0) Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata", "who": "mds.np0005546420.eqhasr"} : dispatch Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon).mds e16 all = 0 Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} v 0) Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} : dispatch Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} v 0) Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} : dispatch Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} v 0) Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} : dispatch Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546418.garyvl", "id": "np0005546418.garyvl"} v 0) Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546418.garyvl", "id": "np0005546418.garyvl"} : dispatch Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 0} : dispatch Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 1} : dispatch Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata"} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon).mds e16 all = 1
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata"} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata"} : dispatch
Dec 5 04:57:09 localhost ceph-mgr[286454]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: balancer
Dec 5 04:57:09 localhost ceph-mgr[286454]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: [balancer INFO root] Starting
Dec 5 04:57:09 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_09:57:09
Dec 5 04:57:09 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 5 04:57:09 localhost ceph-mgr[286454]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 5 04:57:09 localhost systemd[1]: session-67.scope: Deactivated successfully.
Dec 5 04:57:09 localhost systemd[1]: session-67.scope: Consumed 8.923s CPU time.
Dec 5 04:57:09 localhost systemd-logind[760]: Session 67 logged out. Waiting for processes to exit.
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: cephadm
Dec 5 04:57:09 localhost ceph-mgr[286454]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: crash
Dec 5 04:57:09 localhost ceph-mgr[286454]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: devicehealth
Dec 5 04:57:09 localhost systemd-logind[760]: Removed session 67.
Dec 5 04:57:09 localhost ceph-mgr[286454]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: iostat
Dec 5 04:57:09 localhost ceph-mgr[286454]: [devicehealth INFO root] Starting
Dec 5 04:57:09 localhost ceph-mgr[286454]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: nfs
Dec 5 04:57:09 localhost ceph-mgr[286454]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: orchestrator
Dec 5 04:57:09 localhost ceph-mgr[286454]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: pg_autoscaler
Dec 5 04:57:09 localhost ceph-mgr[286454]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: progress
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: [progress INFO root] Loading...
Dec 5 04:57:09 localhost ceph-mgr[286454]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events
Dec 5 04:57:09 localhost ceph-mgr[286454]: [progress INFO root] Loaded OSDMap, ready.
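Each "mgr load Constructed class from module: <name>" line is one always-on or enabled mgr module being instantiated in the newly active mgr. A quick cross-check of that set from a shell, sketched with subprocess (assumes the ceph CLI and an admin keyring on the host):

    import json
    import subprocess

    # 'ceph mgr module ls' reports always-on and enabled modules as JSON.
    out = subprocess.run(["ceph", "mgr", "module", "ls", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    mods = json.loads(out)
    print(mods.get("always_on_modules"), mods.get("enabled_modules"))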
Dec 5 04:57:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] recovery thread starting
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] starting setup
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: rbd_support
Dec 5 04:57:09 localhost ceph-mgr[286454]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: restful
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/mirror_snapshot_schedule"} v 0)
Dec 5 04:57:09 localhost ceph-mgr[286454]: [restful INFO root] server_addr: :: server_port: 8003
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/mirror_snapshot_schedule"} : dispatch
Dec 5 04:57:09 localhost ceph-mgr[286454]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: status
Dec 5 04:57:09 localhost ceph-mgr[286454]: [restful WARNING root] server not running: no certificate configured
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 04:57:09 localhost ceph-mgr[286454]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: telemetry
Dec 5 04:57:09 localhost ceph-mgr[286454]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 04:57:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 04:57:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 04:57:09 localhost ceph-mgr[286454]: mgr load Constructed class from module: volumes
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 5 04:57:09 localhost systemd[1]: var-lib-containers-storage-overlay-f84065a340607eb6e7fc92930d87d2ad6e18cf699bb1960a3a3557676d2116f4-merged.mount: Deactivated successfully.
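The restful module binds its configured port (server_port: 8003) but then logs "server not running: no certificate configured" and stays down; the module's documented remedy is to generate and store a certificate, for example a self-signed one:

    import subprocess

    # Creates and stores a self-signed certificate for the restful module
    # ('ceph restful create-self-signed-cert' is the documented command).
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)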
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] PerfHandler: starting
Dec 5 04:57:09 localhost nova_compute[280228]: 2025-12-05 09:57:09.971 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 5 04:57:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:57:09.973+0000 7f7cc674c640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:57:09.973+0000 7f7cc674c640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:57:09.973+0000 7f7cc674c640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:57:09.973+0000 7f7cc674c640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:57:09.973+0000 7f7cc674c640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 5 04:57:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:57:09.976+0000 7f7cc2f45640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:57:09.976+0000 7f7cc2f45640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:57:09.976+0000 7f7cc2f45640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:57:09.976+0000 7f7cc2f45640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:57:09.976+0000 7f7cc2f45640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 04:57:09 localhost ceph-mon[292820]: Reconfiguring mon.np0005546419 (monmap changed)...
Dec 5 04:57:09 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 5 04:57:09 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:09 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq'
Dec 5 04:57:09 localhost ceph-mon[292820]: from='mgr.24104 172.18.0.104:0/853844173' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: from='mgr.24104 ' entity='mgr.np0005546416.kmqcnq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: from='client.? 172.18.0.200:0/1461736714' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: Activating manager daemon np0005546419.zhsnqq
Dec 5 04:57:09 localhost ceph-mon[292820]: from='client.? 172.18.0.200:0/1461736714' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 5 04:57:09 localhost ceph-mon[292820]: Manager daemon np0005546419.zhsnqq is now available
Dec 5 04:57:09 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/mirror_snapshot_schedule"} : dispatch
Dec 5 04:57:09 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/mirror_snapshot_schedule"} : dispatch
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] TaskHandler: starting
Dec 5 04:57:09 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/trash_purge_schedule"} v 0)
Dec 5 04:57:09 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/trash_purge_schedule"} : dispatch
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 5 04:57:09 localhost ceph-mgr[286454]: [rbd_support INFO root] setup complete
Dec 5 04:57:10 localhost sshd[295373]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 04:57:10 localhost systemd-logind[760]: New session 69 of user ceph-admin.
Dec 5 04:57:10 localhost systemd[1]: Started Session 69 of User ceph-admin.
Dec 5 04:57:10 localhost nova_compute[280228]: 2025-12-05 09:57:10.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:57:10 localhost nova_compute[280228]: 2025-12-05 09:57:10.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 5 04:57:10 localhost ceph-mgr[286454]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:57:10] ENGINE Bus STARTING
Dec 5 04:57:10 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:57:10] ENGINE Bus STARTING
Dec 5 04:57:10 localhost ceph-mgr[286454]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:57:10] ENGINE Serving on http://172.18.0.106:8765
Dec 5 04:57:10 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:57:10] ENGINE Serving on http://172.18.0.106:8765
Dec 5 04:57:11 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/trash_purge_schedule"} : dispatch
Dec 5 04:57:11 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/trash_purge_schedule"} : dispatch
Dec 5 04:57:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:11 localhost ceph-mgr[286454]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:57:11] ENGINE Serving on https://172.18.0.106:7150
Dec 5 04:57:11 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:57:11] ENGINE Serving on https://172.18.0.106:7150
Dec 5 04:57:11 localhost ceph-mgr[286454]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:57:11] ENGINE Bus STARTED
Dec 5 04:57:11 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:57:11] ENGINE Bus STARTED
Dec 5 04:57:11 localhost ceph-mgr[286454]: [cephadm INFO cherrypy.error] [05/Dec/2025:09:57:11] ENGINE Client ('172.18.0.106', 58674) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 5 04:57:11 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : [05/Dec/2025:09:57:11] ENGINE Client ('172.18.0.106', 58674) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 5 04:57:11 localhost systemd[1]: tmp-crun.KOoQ5T.mount: Deactivated successfully.
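The cherrypy "Client ... lost" entries right after the https endpoint comes up are the server-side view of a peer that opened a TCP connection to port 7150 and dropped it mid-handshake, which is what plain TCP health probes and port scanners do. A hypothetical probe that produces exactly this EOF-during-handshake log:

    import socket

    # Open a raw TCP connection to the mgr's TLS port and close it without
    # speaking TLS; the server sees EOF during the handshake and logs it.
    s = socket.create_connection(("172.18.0.106", 7150), timeout=5)
    s.close()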
Dec 5 04:57:11 localhost podman[295511]: 2025-12-05 09:57:11.295702192 +0000 UTC m=+0.100801454 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, RELEASE=main, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 04:57:11 localhost podman[295511]: 2025-12-05 09:57:11.426747871 +0000 UTC m=+0.231847153 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, name=rhceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 5 04:57:11 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 5 04:57:11 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 5 04:57:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:11 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain.devices.0}] v 0)
Dec 5 04:57:11 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain}] v 0)
Dec 5 04:57:12 localhost ceph-mon[292820]: [05/Dec/2025:09:57:10] ENGINE Bus STARTING
Dec 5 04:57:12 localhost ceph-mon[292820]: [05/Dec/2025:09:57:10] ENGINE Serving on http://172.18.0.106:8765
Dec 5 04:57:12 localhost ceph-mon[292820]: [05/Dec/2025:09:57:11] ENGINE Serving on https://172.18.0.106:7150
Dec 5 04:57:12 localhost ceph-mon[292820]: [05/Dec/2025:09:57:11] ENGINE Bus STARTED
Dec 5 04:57:12 localhost ceph-mon[292820]: [05/Dec/2025:09:57:11] ENGINE Client ('172.18.0.106', 58674) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 5 04:57:12 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:12 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:12 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:12 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:12 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 04:57:12 localhost ceph-mgr[286454]: [devicehealth INFO root] Check health
Dec 5 04:57:12 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 04:57:12 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 04:57:12 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 04:57:12 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 04:57:12 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 04:57:12 localhost nova_compute[280228]: 2025-12-05 09:57:12.661 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:13 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:13 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:13 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:13 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:13 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:13 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain.devices.0}] v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain}] v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:13 localhost ceph-mgr[286454]: [cephadm INFO root] Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 5 04:57:13 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 5 04:57:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 5 04:57:13 localhost ceph-mgr[286454]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:57:13 localhost ceph-mgr[286454]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:57:13 localhost ceph-mgr[286454]: [cephadm INFO root] Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 5 04:57:13 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 5 04:57:13 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 5 04:57:13 localhost ceph-mgr[286454]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:57:13 localhost ceph-mgr[286454]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:57:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
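The rejected autotune value is self-consistent: 877246668 bytes is the 836.6M cephadm announces, and the floor it trips over, 939524096 bytes, is exactly 896 MiB, the minimum allowed for osd_memory_target. The arithmetic:

    # cephadm's computed per-OSD target, in MiB
    print(877246668 / 2**20)        # ~836.6
    # the osd_memory_target minimum it collides with
    print(939524096 == 896 * 2**20) # True: 896 MiB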
Dec 5 04:57:14 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 04:57:14 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 04:57:14 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 5 04:57:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 5 04:57:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost podman[295763]: 2025-12-05 09:57:14.214932636 +0000 UTC m=+0.091651530 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible)
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd/host:np0005546416", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:14 localhost podman[295763]: 2025-12-05 09:57:14.235036789 +0000 UTC m=+0.111755683 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
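The podman healthcheck entries repeat every container label on each run; when scanning a capture like this, extracting just the container name and health_status is usually all that matters. A hypothetical filter over such lines:

    import re
    import sys

    # Matches podman 'container health_status' entries like the ones above.
    pat = re.compile(r"container health_status .*?name=([^,]+),.*?health_status=(\w+)")
    for line in sys.stdin:
        m = pat.search(line)
        if m:
            print(m.group(1), m.group(2))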
Dec 5 04:57:14 localhost ceph-mgr[286454]: [cephadm INFO root] Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 5 04:57:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 5 04:57:14 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 5 04:57:14 localhost ceph-mgr[286454]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:57:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:57:14 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:57:14 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:57:14 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 5 04:57:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:57:14 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546416.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546416.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:14 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 04:57:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:57:14 localhost systemd[1]: tmp-crun.oZBc4y.mount: Deactivated successfully.
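"config generate-minimal-conf" is the call that produces the stripped-down ceph.conf (fsid plus mon addresses) that cephadm then pushes to /etc/ceph/ceph.conf and /var/lib/ceph/<fsid>/config/ceph.conf on every host in the "Updating ..." lines. The same payload can be fetched by hand (assumes the ceph CLI and an admin keyring):

    import subprocess

    # Prints the minimal ceph.conf the orchestrator distributes to hosts.
    print(subprocess.run(["ceph", "config", "generate-minimal-conf"],
                         check=True, capture_output=True, text=True).stdout)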
Dec 5 04:57:14 localhost podman[295837]: 2025-12-05 09:57:14.600045146 +0000 UTC m=+0.094121025 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Dec 5 04:57:14 localhost podman[295837]: 2025-12-05 09:57:14.641608772 +0000 UTC m=+0.135684601 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 04:57:14 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 04:57:14 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:57:14 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:14 localhost nova_compute[280228]: 2025-12-05 09:57:14.994 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:15 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mgr.np0005546416.kmqcnq 172.18.0.104:0/3040901304; not ready for session (expect reconnect)
Dec 5 04:57:15 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 5 04:57:15 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:57:15 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 5 04:57:15 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:57:15 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:15 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 5 04:57:15 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:57:15 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:15 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:15 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546416.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:15 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546416.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:15 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:15 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:15 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:15 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:15 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:15 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:15 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546416.kmqcnq", "id": "np0005546416.kmqcnq"} v 0)
Dec 5 04:57:15 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546416.kmqcnq", "id": "np0005546416.kmqcnq"} : dispatch
Dec 5 04:57:15 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:15 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 5 04:57:16 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546416.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546416.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:16 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 5 04:57:16 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 5 04:57:16 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain.devices.0}] v 0)
Dec 5 04:57:16 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546416.localdomain}] v 0)
Dec 5 04:57:16 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 04:57:16 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 04:57:16 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 04:57:17 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 04:57:17 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 04:57:17 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 04:57:17 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 04:57:17 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev bd0023e7-e026-4e61-870b-402244d9ea1b (Updating node-proxy deployment (+5 -> 5))
Dec 5 04:57:17 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev bd0023e7-e026-4e61-870b-402244d9ea1b (Updating node-proxy deployment (+5 -> 5))
Dec 5 04:57:17 localhost ceph-mgr[286454]: [progress INFO root] Completed event bd0023e7-e026-4e61-870b-402244d9ea1b (Updating node-proxy deployment (+5 -> 5)) in 0 seconds
Dec 5 04:57:17 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 04:57:17 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 04:57:17 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546420 (monmap changed)...
Dec 5 04:57:17 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546420 (monmap changed)...
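A "Reconfiguring <daemon> (monmap changed)..." pass regenerates and redeploys the daemon's config files because the monmap epoch moved (e9 here). The same per-daemon refresh can be requested manually through the orchestrator (a sketch, assuming the cephadm "ceph orch daemon reconfig" command is available in this release):

    import subprocess

    # Ask cephadm to regenerate and redeploy one daemon's config files,
    # mirroring the automatic reconfigure pass logged above.
    subprocess.run(["ceph", "orch", "daemon", "reconfig", "crash.np0005546420"],
                   check=True)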
Dec 5 04:57:17 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 5 04:57:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:57:17 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:57:17 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:57:17 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain Dec 5 04:57:17 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain Dec 5 04:57:17 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:57:17 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:57:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:57:17 localhost nova_compute[280228]: 2025-12-05 09:57:17.696 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:57:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 0 B/s wr, 17 op/s Dec 5 04:57:18 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 
handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:57:18 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:57:18 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Dec 5 04:57:18 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... Dec 5 04:57:18 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Dec 5 04:57:18 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 5 04:57:18 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:57:18 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:57:18 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005546420.localdomain Dec 5 04:57:18 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005546420.localdomain Dec 5 04:57:18 localhost ceph-mon[292820]: Reconfiguring crash.np0005546420 (monmap changed)... Dec 5 04:57:18 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain Dec 5 04:57:18 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:18 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:18 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 5 04:57:19 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:57:19 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:57:19 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:57:19 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:57:19 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Dec 5 04:57:19 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... 
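The repeated config-key set calls against mgr/cephadm/host.<host>.devices.0 are cephadm persisting its per-host device inventory in the monitors' config-key store, while the "Reconfiguring osd.N (monmap changed)" lines are the serve loop reacting to the shrinking monmap. A sketch of reading one of those inventory keys back, assuming the ceph CLI is on PATH and the stored payload is JSON (the payload format is an assumption; the log does not show it):

    import json
    import subprocess

    def cephadm_device_inventory(host: str) -> object:
        # Fetch the blob cephadm stored under the key pattern seen above.
        key = f"mgr/cephadm/host.{host}.devices.0"
        out = subprocess.run(
            ["ceph", "config-key", "get", key],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    # e.g. cephadm_device_inventory("np0005546420.localdomain")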
Dec 5 04:57:19 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Dec 5 04:57:19 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 5 04:57:19 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:57:19 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:57:19 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005546420.localdomain Dec 5 04:57:19 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005546420.localdomain Dec 5 04:57:19 localhost ceph-mon[292820]: Reconfiguring osd.1 (monmap changed)... Dec 5 04:57:19 localhost ceph-mon[292820]: Reconfiguring daemon osd.1 on np0005546420.localdomain Dec 5 04:57:19 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:19 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:19 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:19 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:19 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 5 04:57:19 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:57:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 0 B/s wr, 13 op/s Dec 5 04:57:19 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 04:57:19 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 04:57:19 localhost podman[239519]: time="2025-12-05T09:57:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:57:19 localhost podman[239519]: @ - - [05/Dec/2025:09:57:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1" Dec 5 04:57:19 localhost podman[239519]: @ - - [05/Dec/2025:09:57:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18712 "" "Go-http-client/1.1" Dec 5 04:57:20 localhost nova_compute[280228]: 2025-12-05 09:57:20.025 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:57:20 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:57:20 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:57:20 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:57:20 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:57:20 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)... Dec 5 04:57:20 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)... Dec 5 04:57:20 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Dec 5 04:57:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:57:20 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:57:20 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:57:20 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain Dec 5 04:57:20 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:20.853484) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928640853608, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 11972, "num_deletes": 262, "total_data_size": 23010213, "memory_usage": 24234592, "flush_reason": "Manual Compaction"} Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Dec 5 04:57:20 localhost ceph-mon[292820]: Reconfiguring osd.4 (monmap changed)... 
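The mon's embedded RocksDB writes machine-readable EVENT_LOG_v1 records (flush_started above; table_file_creation and flush_finished follow). Everything after the marker is plain JSON, so flush and compaction activity can be mined straight from this journal; a minimal parsing sketch, assuming log lines shaped like the ones here:

    import json
    import re

    # The JSON object that follows RocksDB's EVENT_LOG_v1 marker.
    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(lines):
        # Yield each RocksDB event as a dict; skip non-matching lines.
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    # flushes = [e for e in rocksdb_events(open("/var/log/messages"))
    #            if e.get("event") == "flush_started"]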
Dec 5 04:57:20 localhost ceph-mon[292820]: Reconfiguring daemon osd.4 on np0005546420.localdomain Dec 5 04:57:20 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:20 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:20 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:20 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:20 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:20 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:57:20 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928640964186, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 17544374, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 11977, "table_properties": {"data_size": 17485094, "index_size": 32166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26053, "raw_key_size": 274837, "raw_average_key_size": 26, "raw_value_size": 17307573, "raw_average_value_size": 1664, "num_data_blocks": 1222, "num_entries": 10398, "num_filter_entries": 10398, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 1764928569, "file_creation_time": 1764928640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 110814 microseconds, and 36128 cpu microseconds. 
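The numbers logged for JOB 3 make the flush easy to sanity-check: a 17544374-byte level-0 table written in 110814 microseconds of wall time is roughly 151 MiB/s, with only about a third of that wall time spent on CPU. The arithmetic, using the logged fields verbatim:

    file_size_bytes = 17_544_374   # "file_size" in the table_file_creation event
    flush_wall_us = 110_814        # "Flush lasted 110814 microseconds"
    flush_cpu_us = 36_128          # "... and 36128 cpu microseconds"

    mib_per_s = file_size_bytes / (flush_wall_us / 1e6) / (1024 * 1024)
    cpu_share = flush_cpu_us / flush_wall_us
    print(f"{mib_per_s:.0f} MiB/s, CPU busy {cpu_share:.0%} of wall time")
    # -> 151 MiB/s, CPU busy 33% of wall time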
Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:20.964301) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 17544374 bytes OK Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:20.964329) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:20.981876) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:20.981909) EVENT_LOG_v1 {"time_micros": 1764928640981901, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:20.981931) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 22931535, prev total WAL file size 22948362, number of live WAL files 2. Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:20.985788) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303139' seq:72057594037927935, type:22 .. '6B760031323734' seq:0, type:0; will stop at (end) Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(16MB) 8(2012B)] Dec 5 04:57:20 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928640985969, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 17546386, "oldest_snapshot_seqno": -1} Dec 5 04:57:21 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 10143 keys, 17540997 bytes, temperature: kUnknown Dec 5 04:57:21 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928641119306, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 17540997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17482419, "index_size": 32109, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 270913, "raw_average_key_size": 26, "raw_value_size": 17308228, "raw_average_value_size": 1706, "num_data_blocks": 1221, "num_entries": 10143, "num_filter_entries": 10143, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": 
"[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764928640, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Dec 5 04:57:21 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 04:57:21 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:21.119588) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 17540997 bytes Dec 5 04:57:21 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:21.121218) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.5 rd, 131.5 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(16.7, 0.0 +0.0 blob) out(16.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10403, records dropped: 260 output_compression: NoCompression Dec 5 04:57:21 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:21.121240) EVENT_LOG_v1 {"time_micros": 1764928641121230, "job": 4, "event": "compaction_finished", "compaction_time_micros": 133419, "compaction_time_cpu_micros": 52559, "output_level": 6, "num_output_files": 1, "total_output_size": 17540997, "num_input_records": 10403, "num_output_records": 10143, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 04:57:21 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:57:21 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928641122806, "job": 4, "event": "table_file_deletion", "file_number": 14} Dec 5 04:57:21 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:57:21 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928641122844, "job": 4, "event": "table_file_deletion", "file_number": 8} Dec 5 04:57:21 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:20.985651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:57:21 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.26900 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Dec 5 04:57:21 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:57:21 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:57:21 localhost ceph-mgr[286454]: [cephadm INFO 
cephadm.serve] Reconfiguring mgr.np0005546420.aoeylc (monmap changed)... Dec 5 04:57:21 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546420.aoeylc (monmap changed)... Dec 5 04:57:21 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 5 04:57:21 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:57:21 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mgr services"} v 0) Dec 5 04:57:21 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch Dec 5 04:57:21 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:57:21 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:57:21 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain Dec 5 04:57:21 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain Dec 5 04:57:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Dec 5 04:57:21 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)... Dec 5 04:57:21 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain Dec 5 04:57:21 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:21 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:21 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:57:21 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:57:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:57:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:57:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
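Every reconfiguration step here rides on mon commands whose JSON payloads the audit channel prints verbatim, e.g. {"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", ...}. The same payloads can be submitted programmatically; a sketch using the python-rados binding (assumes python3-rados is installed and /etc/ceph/ceph.conf plus an admin keyring are readable on this host):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # Same payload shape as the audited mon_command above.
    cmd = json.dumps({
        "prefix": "auth get-or-create",
        "entity": "mgr.np0005546420.aoeylc",
        "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"],
    })
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outbuf.decode(), outs)  # outbuf carries the keyring on success
    cluster.shutdown()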
Dec 5 04:57:22 localhost podman[296461]: 2025-12-05 09:57:22.196510748 +0000 UTC m=+0.078258818 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:57:22 localhost podman[296462]: 2025-12-05 09:57:22.257831478 +0000 UTC m=+0.131514666 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 04:57:22 localhost podman[296464]: 2025-12-05 09:57:22.314843437 +0000 UTC m=+0.186615358 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 
'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:57:22 localhost podman[296464]: 2025-12-05 09:57:22.330427265 +0000 UTC m=+0.202199256 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:57:22 localhost podman[296462]: 2025-12-05 09:57:22.34363363 +0000 UTC m=+0.217316798 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 5 04:57:22 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:57:22 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:57:22 localhost podman[296461]: 2025-12-05 09:57:22.386004181 +0000 UTC m=+0.267752281 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 04:57:22 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 04:57:22 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:57:22 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:57:22 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005546420 (monmap changed)... Dec 5 04:57:22 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005546420 (monmap changed)... 
Dec 5 04:57:22 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Dec 5 04:57:22 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:57:22 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Dec 5 04:57:22 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Dec 5 04:57:22 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:57:22 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:57:22 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain Dec 5 04:57:22 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain Dec 5 04:57:22 localhost nova_compute[280228]: 2025-12-05 09:57:22.750 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:57:22 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)... Dec 5 04:57:22 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain Dec 5 04:57:22 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:22 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:57:22 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:23 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:57:23 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:57:23 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546421 (monmap changed)... Dec 5 04:57:23 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546421 (monmap changed)... 
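Each "(monmap changed)" pass pairs an auth get/get-or-create with config generate-minimal-conf; the latter produces the stripped-down ceph.conf that the earlier "Updating <host>:/var/lib/ceph/<fsid>/config/ceph.conf" lines distribute to every host. Fetching the same minimal conf by hand is a one-liner, assuming the ceph CLI and admin credentials:

    import subprocess

    def minimal_conf() -> str:
        # The same output cephadm pushes to each host's config directory.
        return subprocess.run(
            ["ceph", "config", "generate-minimal-conf"],
            check=True, capture_output=True, text=True,
        ).stdout

    print(minimal_conf())  # essentially fsid plus the mon_host list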
Dec 5 04:57:23 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 5 04:57:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:57:23 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:57:23 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:57:23 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain Dec 5 04:57:23 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain Dec 5 04:57:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Dec 5 04:57:23 localhost ceph-mon[292820]: Reconfiguring mon.np0005546420 (monmap changed)... Dec 5 04:57:23 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain Dec 5 04:57:23 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:23 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:23 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:57:23 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:57:24 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.44269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546416", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Dec 5 04:57:24 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 04:57:24 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 04:57:24 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Dec 5 04:57:24 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... 
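The client.44269 query above (orch ps filtered to mon/np0005546416, format json) is the scriptable pre-check ahead of the daemon removal that follows. A sketch of the same lookup done client-side, assuming the orchestrator's JSON output exposes daemon_type/daemon_id fields as the audited payload's field names suggest:

    import json
    import subprocess

    def find_daemon(daemon_type: str, daemon_id: str) -> list:
        # Pull the full daemon list and filter locally.
        out = subprocess.run(
            ["ceph", "orch", "ps", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return [d for d in json.loads(out)
                if d.get("daemon_type") == daemon_type
                and d.get("daemon_id") == daemon_id]

    # find_daemon("mon", "np0005546416")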
Dec 5 04:57:24 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Dec 5 04:57:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 5 04:57:24 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:57:24 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:57:24 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005546421.localdomain Dec 5 04:57:24 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005546421.localdomain Dec 5 04:57:24 localhost ceph-mon[292820]: mon.np0005546419@4(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:57:24 localhost ceph-mon[292820]: Reconfiguring crash.np0005546421 (monmap changed)... Dec 5 04:57:24 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain Dec 5 04:57:24 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:24 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:57:24 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 5 04:57:25 localhost nova_compute[280228]: 2025-12-05 09:57:25.026 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:57:25 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.26991 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005546416"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Dec 5 04:57:25 localhost ceph-mgr[286454]: [cephadm INFO root] Remove daemons mon.np0005546416 Dec 5 04:57:25 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005546416 Dec 5 04:57:25 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "quorum_status"} v 0) Dec 5 04:57:25 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "quorum_status"} : dispatch Dec 5 04:57:25 localhost ceph-mgr[286454]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005546416: new quorum should be ['np0005546418', 'np0005546421', 'np0005546420', 'np0005546419'] (from ['np0005546418', 'np0005546421', 'np0005546420', 'np0005546419']) Dec 5 04:57:25 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005546416: new quorum should be ['np0005546418', 'np0005546421', 'np0005546420', 'np0005546419'] (from ['np0005546418', 'np0005546421', 'np0005546420', 'np0005546419']) Dec 5 04:57:25 localhost ceph-mgr[286454]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005546416 from monmap... Dec 5 04:57:25 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removing monitor np0005546416 from monmap... 
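The "Safe to remove mon.np0005546416" line states the invariant cephadm enforces before shrinking the monmap: every monitor that would remain must already be in quorum, so the expected post-removal quorum equals the rest of the monmap. A condensed reading of that check (plain Python, not Ceph's implementation), using the hosts from the log:

    def safe_to_remove(monmap: set, quorum: set, victim: str) -> bool:
        # Expected new quorum is everyone but the victim; removal is safe
        # only if all of them are currently up and voting.
        expected = monmap - {victim}
        return expected <= quorum

    mons = {"np0005546416", "np0005546418", "np0005546419",
            "np0005546420", "np0005546421"}
    print(safe_to_remove(mons, mons - {"np0005546416"}, "np0005546416"))  # True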
Dec 5 04:57:25 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e9 handle_command mon_command({"prefix": "mon rm", "name": "np0005546416"} v 0) Dec 5 04:57:25 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon rm", "name": "np0005546416"} : dispatch Dec 5 04:57:25 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005546416 from np0005546416.localdomain -- ports [] Dec 5 04:57:25 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005546416 from np0005546416.localdomain -- ports [] Dec 5 04:57:25 localhost ceph-mon[292820]: mon.np0005546419@4(peon) e10 my rank is now 3 (was 4) Dec 5 04:57:25 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 5 04:57:25 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 5 04:57:25 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : mon.np0005546419 calling monitor election Dec 5 04:57:25 localhost ceph-mon[292820]: paxos.3).electionLogic(44) init, last seen epoch 44 Dec 5 04:57:25 localhost ceph-mon[292820]: mon.np0005546419@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:57:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Dec 5 04:57:27 localhost openstack_network_exporter[241668]: ERROR 09:57:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:57:27 localhost openstack_network_exporter[241668]: ERROR 09:57:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:57:27 localhost openstack_network_exporter[241668]: ERROR 09:57:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:57:27 localhost openstack_network_exporter[241668]: ERROR 09:57:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:57:27 localhost openstack_network_exporter[241668]: Dec 5 04:57:27 localhost openstack_network_exporter[241668]: ERROR 09:57:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:57:27 localhost openstack_network_exporter[241668]: Dec 5 04:57:27 localhost ceph-mon[292820]: mon.np0005546419@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:57:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Dec 5 04:57:27 localhost nova_compute[280228]: 2025-12-05 09:57:27.782 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:57:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:57:30 localhost nova_compute[280228]: 2025-12-05 09:57:30.067 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:57:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:57:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:57:30 localhost podman[296518]: 2025-12-05 09:57:30.21146111 +0000 UTC m=+0.089582848 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:57:30 localhost systemd[1]: tmp-crun.d4zF00.mount: Deactivated successfully. Dec 5 04:57:30 localhost podman[296519]: 2025-12-05 09:57:30.265791169 +0000 UTC m=+0.139515976 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 04:57:30 localhost podman[296519]: 2025-12-05 09:57:30.276181801 +0000 UTC m=+0.149906578 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 04:57:30 localhost podman[296518]: 2025-12-05 09:57:30.276457469 +0000 UTC m=+0.154579187 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 04:57:30 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:57:30 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
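The "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pairs here are systemd transient units firing each container's configured healthcheck; the matching podman lines record health_status=healthy when the probe exits 0. The same probe can be triggered by hand; a sketch, with the container ID copied from the log:

    import subprocess

    def run_healthcheck(container_id: str) -> bool:
        # Exit code 0 means healthy; non-zero means the probe failed.
        proc = subprocess.run(["podman", "healthcheck", "run", container_id])
        return proc.returncode == 0

    run_healthcheck(
        "6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857")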
Dec 5 04:57:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:57:32 localhost ceph-mds[283215]: mds.beacon.mds.np0005546419.rweotn missed beacon ack from the monitors Dec 5 04:57:32 localhost ceph-mon[292820]: mon.np0005546419@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:57:32 localhost ceph-mon[292820]: mon.np0005546419@3(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:57:32 localhost ceph-mon[292820]: mon.np0005546419@3(peon) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.755992) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928652756044, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 521, "num_deletes": 252, "total_data_size": 533059, "memory_usage": 543688, "flush_reason": "Manual Compaction"} Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Dec 5 04:57:32 localhost ceph-mon[292820]: Reconfiguring daemon osd.2 on np0005546421.localdomain Dec 5 04:57:32 localhost ceph-mon[292820]: Remove daemons mon.np0005546416 Dec 5 04:57:32 localhost ceph-mon[292820]: Safe to remove mon.np0005546416: new quorum should be ['np0005546418', 'np0005546421', 'np0005546420', 'np0005546419'] (from ['np0005546418', 'np0005546421', 'np0005546420', 'np0005546419']) Dec 5 04:57:32 localhost ceph-mon[292820]: Removing monitor np0005546416 from monmap... 
Dec 5 04:57:32 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon rm", "name": "np0005546416"} : dispatch
Dec 5 04:57:32 localhost ceph-mon[292820]: Removing daemon mon.np0005546416 from np0005546416.localdomain -- ports []
Dec 5 04:57:32 localhost ceph-mon[292820]: mon.np0005546421 calling monitor election
Dec 5 04:57:32 localhost ceph-mon[292820]: mon.np0005546419 calling monitor election
Dec 5 04:57:32 localhost ceph-mon[292820]: mon.np0005546418 calling monitor election
Dec 5 04:57:32 localhost ceph-mon[292820]: mon.np0005546420 calling monitor election
Dec 5 04:57:32 localhost ceph-mon[292820]: mon.np0005546418 is new leader, mons np0005546418,np0005546421,np0005546419 in quorum (ranks 0,1,3)
Dec 5 04:57:32 localhost ceph-mon[292820]: overall HEALTH_OK
Dec 5 04:57:32 localhost ceph-mon[292820]: mon.np0005546418 calling monitor election
Dec 5 04:57:32 localhost ceph-mon[292820]: mon.np0005546421 calling monitor election
Dec 5 04:57:32 localhost ceph-mon[292820]: mon.np0005546418 is new leader, mons np0005546418,np0005546421,np0005546420,np0005546419 in quorum (ranks 0,1,2,3)
Dec 5 04:57:32 localhost ceph-mon[292820]: overall HEALTH_OK
Dec 5 04:57:32 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:32 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928652762776, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 321555, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11982, "largest_seqno": 12498, "table_properties": {"data_size": 318597, "index_size": 877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8660, "raw_average_key_size": 21, "raw_value_size": 312070, "raw_average_value_size": 778, "num_data_blocks": 35, "num_entries": 401, "num_filter_entries": 401, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928640, "oldest_key_time": 1764928640, "file_creation_time": 1764928652, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6844 microseconds, and 1946 cpu microseconds.
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.762832) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 321555 bytes OK
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.762859) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.764471) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.764493) EVENT_LOG_v1 {"time_micros": 1764928652764486, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.764516) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 529808, prev total WAL file size 529808, number of live WAL files 2.
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.765055) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(314KB)], [15(16MB)]
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928652765102, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 17862552, "oldest_snapshot_seqno": -1}
Dec 5 04:57:32 localhost nova_compute[280228]: 2025-12-05 09:57:32.824 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 10017 keys, 15762338 bytes, temperature: kUnknown
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928652869520, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 15762338, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15706379, "index_size": 29810, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25093, "raw_key_size": 269226, "raw_average_key_size": 26, "raw_value_size": 15536172, "raw_average_value_size": 1550, "num_data_blocks": 1119, "num_entries": 10017, "num_filter_entries": 10017, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764928652, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.869862) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 15762338 bytes
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.871539) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.9 rd, 150.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 16.7 +0.0 blob) out(15.0 +0.0 blob), read-write-amplify(104.6) write-amplify(49.0) OK, records in: 10544, records dropped: 527 output_compression: NoCompression
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.871572) EVENT_LOG_v1 {"time_micros": 1764928652871558, "job": 6, "event": "compaction_finished", "compaction_time_micros": 104534, "compaction_time_cpu_micros": 44906, "output_level": 6, "num_output_files": 1, "total_output_size": 15762338, "num_input_records": 10544, "num_output_records": 10017, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928652871772, "job": 6, "event": "table_file_deletion", "file_number": 17}
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928652874359, "job": 6, "event": "table_file_deletion", "file_number": 15}
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.765007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.874469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.874475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.874479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.874482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 04:57:32 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:57:32.874485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 04:57:32 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 5 04:57:32 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 5 04:57:32 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 5 04:57:32 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 5 04:57:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:33 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:33 localhost ceph-mon[292820]: Reconfiguring osd.5 (monmap changed)...
Dec 5 04:57:33 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:33 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 5 04:57:33 localhost ceph-mon[292820]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 5 04:57:34 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 5 04:57:34 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 5 04:57:34 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 5 04:57:34 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 5 04:57:34 localhost ceph-mon[292820]: mon.np0005546419@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:57:35 localhost nova_compute[280228]: 2025-12-05 09:57:35.068 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:35 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:35 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:35 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:35 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 5 04:57:35 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:35 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:57:35 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:57:35 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 5 04:57:35 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 5 04:57:35 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 5 04:57:35 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 5 04:57:35 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 5 04:57:35 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.26908 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546416.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 5 04:57:35 localhost ceph-mgr[286454]: [cephadm INFO root] Removed label mon from host np0005546416.localdomain
Dec 5 04:57:35 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removed label mon from host np0005546416.localdomain
Dec 5 04:57:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:36 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:36 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:36 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 5 04:57:36 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:57:36 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:57:36 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 5 04:57:36 localhost ceph-mon[292820]: Removed label mon from host np0005546416.localdomain
Dec 5 04:57:36 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:36 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005546421 (monmap changed)...
Dec 5 04:57:36 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005546421 (monmap changed)...
Dec 5 04:57:36 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 5 04:57:36 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 5 04:57:36 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.26916 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546416.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 5 04:57:36 localhost ceph-mgr[286454]: [cephadm INFO root] Removed label mgr from host np0005546416.localdomain
Dec 5 04:57:36 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005546416.localdomain
Dec 5 04:57:37 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:37 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:37 localhost ceph-mon[292820]: Reconfiguring mon.np0005546421 (monmap changed)...
Dec 5 04:57:37 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:57:37 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 5 04:57:37 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:37 localhost ceph-mon[292820]: Removed label mgr from host np0005546416.localdomain
Dec 5 04:57:37 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:37 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:37 localhost nova_compute[280228]: 2025-12-05 09:57:37.878 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.44279 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005546416.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 5 04:57:38 localhost ceph-mgr[286454]: [cephadm INFO root] Removed label _admin from host np0005546416.localdomain
Dec 5 04:57:38 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005546416.localdomain
Dec 5 04:57:38 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:38 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Removing np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:38 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removing np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:38 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:38 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:38 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:38 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:38 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:38 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:38 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:38 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:38 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Removing np0005546416.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:38 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removing np0005546416.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:38 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Removing np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:38 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removing np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:39 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:39 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:39 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:39 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:39 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:39 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:39 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:39 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:39 localhost ceph-mon[292820]: mon.np0005546419@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:57:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:39 localhost ceph-mon[292820]: Removed label _admin from host np0005546416.localdomain
Dec 5 04:57:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:39 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:57:39 localhost ceph-mon[292820]: Removing np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:39 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:39 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:39 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:39 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:57:39 localhost ceph-mon[292820]: Removing np0005546416.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:57:39 localhost ceph-mon[292820]: Removing np0005546416.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:57:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 04:57:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 04:57:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 04:57:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 04:57:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 04:57:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 04:57:40 localhost nova_compute[280228]: 2025-12-05 09:57:40.069 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:40 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 763912f6-c1ee-4dbe-aa66-7393f3927821 (Updating mgr deployment (-1 -> 4))
Dec 5 04:57:40 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Removing daemon mgr.np0005546416.kmqcnq from np0005546416.localdomain -- ports [8765]
Dec 5 04:57:40 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removing daemon mgr.np0005546416.kmqcnq from np0005546416.localdomain -- ports [8765]
Dec 5 04:57:41 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:41 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:41 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:41 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:57:41 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:41 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:41 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:41 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:41 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:41 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:41 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:41 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:41 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:42 localhost ceph-mon[292820]: Removing daemon mgr.np0005546416.kmqcnq from np0005546416.localdomain -- ports [8765]
Dec 5 04:57:42 localhost ceph-mgr[286454]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.np0005546416.kmqcnq
Dec 5 04:57:42 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removing key for mgr.np0005546416.kmqcnq
Dec 5 04:57:42 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 763912f6-c1ee-4dbe-aa66-7393f3927821 (Updating mgr deployment (-1 -> 4))
Dec 5 04:57:42 localhost ceph-mgr[286454]: [progress INFO root] Completed event 763912f6-c1ee-4dbe-aa66-7393f3927821 (Updating mgr deployment (-1 -> 4)) in 2 seconds
Dec 5 04:57:42 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 8c579ca2-0694-41e4-9c45-d0a2eb42dd81 (Updating node-proxy deployment (+5 -> 5))
Dec 5 04:57:42 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 8c579ca2-0694-41e4-9c45-d0a2eb42dd81 (Updating node-proxy deployment (+5 -> 5))
Dec 5 04:57:42 localhost ceph-mgr[286454]: [progress INFO root] Completed event 8c579ca2-0694-41e4-9c45-d0a2eb42dd81 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds
Dec 5 04:57:42 localhost nova_compute[280228]: 2025-12-05 09:57:42.917 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:43 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "mgr.np0005546416.kmqcnq"} : dispatch
Dec 5 04:57:43 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "mgr.np0005546416.kmqcnq"} : dispatch
Dec 5 04:57:43 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005546416.kmqcnq"}]': finished
Dec 5 04:57:43 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:43 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:44 localhost ceph-mon[292820]: Removing key for mgr.np0005546416.kmqcnq
Dec 5 04:57:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 04:57:44 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 7848df5f-1257-4ee2-b564-e038b5ff782a (Updating node-proxy deployment (+5 -> 5))
Dec 5 04:57:44 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 7848df5f-1257-4ee2-b564-e038b5ff782a (Updating node-proxy deployment (+5 -> 5))
Dec 5 04:57:44 localhost ceph-mgr[286454]: [progress INFO root] Completed event 7848df5f-1257-4ee2-b564-e038b5ff782a (Updating node-proxy deployment (+5 -> 5)) in 0 seconds
Dec 5 04:57:44 localhost podman[296904]: 2025-12-05 09:57:44.391407036 +0000 UTC m=+0.075929075 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41)
Dec 5 04:57:44 localhost podman[296904]: 2025-12-05 09:57:44.410178325 +0000 UTC m=+0.094700364 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, container_name=openstack_network_exporter, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 5 04:57:44 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 04:57:44 localhost ceph-mon[292820]: mon.np0005546419@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:57:44 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546416 (monmap changed)...
Dec 5 04:57:44 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546416 (monmap changed)...
Dec 5 04:57:44 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546416 on np0005546416.localdomain
Dec 5 04:57:44 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546416 on np0005546416.localdomain
Dec 5 04:57:44 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events
Dec 5 04:57:45 localhost nova_compute[280228]: 2025-12-05 09:57:45.072 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:57:45 localhost podman[296942]: 2025-12-05 09:57:45.186465163 +0000 UTC m=+0.074945343 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 5 04:57:45 localhost podman[296942]: 2025-12-05 09:57:45.197511754 +0000 UTC m=+0.085991934 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 5 04:57:45 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 04:57:45 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:45 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:57:45 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:45 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:45 localhost ceph-mon[292820]: Reconfiguring crash.np0005546416 (monmap changed)...
Dec 5 04:57:45 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546416.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:57:45 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546416 on np0005546416.localdomain
Dec 5 04:57:45 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546416.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:57:45 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:45 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005546418 (monmap changed)...
Dec 5 04:57:45 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005546418 (monmap changed)...
Dec 5 04:57:45 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain
Dec 5 04:57:45 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain
Dec 5 04:57:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:46 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:46 localhost ceph-mon[292820]: Reconfiguring mon.np0005546418 (monmap changed)...
Dec 5 04:57:46 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:57:46 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:46 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain
Dec 5 04:57:46 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 5 04:57:46 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 5 04:57:46 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 5 04:57:46 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 5 04:57:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:47 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546418 (monmap changed)...
Dec 5 04:57:47 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546418 (monmap changed)...
Dec 5 04:57:47 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:47 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 5 04:57:47 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:57:47 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:47 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:57:47 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 5 04:57:47 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:47 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 5 04:57:47 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 5 04:57:47 localhost nova_compute[280228]: 2025-12-05 09:57:47.957 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:48 localhost ceph-mon[292820]: Reconfiguring crash.np0005546418 (monmap changed)...
Dec 5 04:57:48 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:48 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:57:48 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 5 04:57:48 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:57:48 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546419 (monmap changed)...
Dec 5 04:57:48 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546419 (monmap changed)...
Dec 5 04:57:48 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 5 04:57:48 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 5 04:57:49 localhost podman[297012]:
Dec 5 04:57:49 localhost podman[297012]: 2025-12-05 09:57:49.475600764 +0000 UTC m=+0.058947270 container create d1d31be1e0d5b85421832e1b73037d40186fdb04e0e980c8b64bff0014635107 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_clarke, RELEASE=main, GIT_CLEAN=True, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, architecture=x86_64, ceph=True, name=rhceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.41.4, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 5 04:57:49 localhost systemd[1]: Started libpod-conmon-d1d31be1e0d5b85421832e1b73037d40186fdb04e0e980c8b64bff0014635107.scope.
Dec 5 04:57:49 localhost systemd[1]: Started libcrun container.
Dec 5 04:57:49 localhost podman[297012]: 2025-12-05 09:57:49.541819608 +0000 UTC m=+0.125166124 container init d1d31be1e0d5b85421832e1b73037d40186fdb04e0e980c8b64bff0014635107 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_clarke, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, ceph=True, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 04:57:49 localhost podman[297012]: 2025-12-05 09:57:49.447806837 +0000 UTC m=+0.031153373 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:57:49 localhost podman[297012]: 2025-12-05 09:57:49.55513159 +0000 UTC m=+0.138478116 container start d1d31be1e0d5b85421832e1b73037d40186fdb04e0e980c8b64bff0014635107 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_clarke, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_BRANCH=main, ceph=True, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, release=1763362218, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z)
Dec 5 04:57:49 localhost objective_clarke[297027]: 167 167
Dec 5 04:57:49 localhost systemd[1]: libpod-d1d31be1e0d5b85421832e1b73037d40186fdb04e0e980c8b64bff0014635107.scope: Deactivated successfully.
Dec 5 04:57:49 localhost podman[297012]: 2025-12-05 09:57:49.55739573 +0000 UTC m=+0.140742256 container attach d1d31be1e0d5b85421832e1b73037d40186fdb04e0e980c8b64bff0014635107 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_clarke, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.k8s.description=Red Hat Ceph Storage 7)
Dec 5 04:57:49 localhost podman[297012]: 2025-12-05 09:57:49.565707596 +0000 UTC m=+0.149054162 container died d1d31be1e0d5b85421832e1b73037d40186fdb04e0e980c8b64bff0014635107 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_clarke, version=7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, name=rhceph)
Dec 5 04:57:49 localhost podman[297032]: 2025-12-05 09:57:49.676281128 +0000 UTC m=+0.098762659 container remove d1d31be1e0d5b85421832e1b73037d40186fdb04e0e980c8b64bff0014635107 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_clarke, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, version=7, release=1763362218, RELEASE=main, io.buildah.version=1.41.4, GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc.)
Dec 5 04:57:49 localhost systemd[1]: libpod-conmon-d1d31be1e0d5b85421832e1b73037d40186fdb04e0e980c8b64bff0014635107.scope: Deactivated successfully.
Dec 5 04:57:49 localhost ceph-mon[292820]: mon.np0005546419@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:57:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:49 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 5 04:57:49 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 5 04:57:49 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 5 04:57:49 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 5 04:57:49 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.26928 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005546416.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 5 04:57:49 localhost ceph-mgr[286454]: [cephadm INFO root] Added label _no_schedule to host np0005546416.localdomain
Dec 5 04:57:49 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005546416.localdomain
Dec 5 04:57:49 localhost podman[239519]: time="2025-12-05T09:57:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 04:57:49 localhost ceph-mgr[286454]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005546416.localdomain
Dec 5 04:57:49 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005546416.localdomain
Dec 5 04:57:49 localhost podman[239519]: @ - - [05/Dec/2025:09:57:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1"
Dec 5 04:57:49 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:49 localhost ceph-mon[292820]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 5 04:57:49 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:57:49 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:49 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 5 04:57:49 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:57:49 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:49 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:49 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 5 04:57:49 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:49 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:49 localhost podman[239519]: @ - - [05/Dec/2025:09:57:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18723 "" "Go-http-client/1.1"
Dec 5 04:57:50 localhost nova_compute[280228]: 2025-12-05 09:57:50.074 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:50 localhost podman[297100]:
Dec 5 04:57:50 localhost podman[297100]: 2025-12-05 09:57:50.366299063 +0000 UTC m=+0.067384180 container create 53c038d28fa026f556005a974d5bbb173da4d9cc5bbd6a0d7c6d233bc27f67e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_diffie, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, version=7, GIT_BRANCH=main, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, distribution-scope=public, RELEASE=main, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, CEPH_POINT_RELEASE=, release=1763362218)
Dec 5 04:57:50 localhost systemd[1]: Started libpod-conmon-53c038d28fa026f556005a974d5bbb173da4d9cc5bbd6a0d7c6d233bc27f67e5.scope.
Dec 5 04:57:50 localhost systemd[1]: Started libcrun container.
Dec 5 04:57:50 localhost podman[297100]: 2025-12-05 09:57:50.424545871 +0000 UTC m=+0.125631018 container init 53c038d28fa026f556005a974d5bbb173da4d9cc5bbd6a0d7c6d233bc27f67e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_diffie, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, release=1763362218, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vcs-type=git, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4)
Dec 5 04:57:50 localhost podman[297100]: 2025-12-05 09:57:50.434094066 +0000 UTC m=+0.135179183 container start 53c038d28fa026f556005a974d5bbb173da4d9cc5bbd6a0d7c6d233bc27f67e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_diffie, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, ceph=True, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:57:50 localhost podman[297100]: 2025-12-05 09:57:50.434369295 +0000 UTC m=+0.135454562 container attach 53c038d28fa026f556005a974d5bbb173da4d9cc5bbd6a0d7c6d233bc27f67e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_diffie, io.openshift.expose-services=, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:57:50 localhost adoring_diffie[297116]: 167 167
Dec 5 04:57:50 localhost systemd[1]: libpod-53c038d28fa026f556005a974d5bbb173da4d9cc5bbd6a0d7c6d233bc27f67e5.scope: Deactivated successfully.
Dec 5 04:57:50 localhost podman[297100]: 2025-12-05 09:57:50.438219703 +0000 UTC m=+0.139304900 container died 53c038d28fa026f556005a974d5bbb173da4d9cc5bbd6a0d7c6d233bc27f67e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_diffie, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=1763362218, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main)
Dec 5 04:57:50 localhost podman[297100]: 2025-12-05 09:57:50.339389463 +0000 UTC m=+0.040474650 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:57:50 localhost systemd[1]: var-lib-containers-storage-overlay-3cd57475d23afe315be86852f7000baaabddb28ceca7ebcdb7e277b737abee0e-merged.mount: Deactivated successfully.
Dec 5 04:57:50 localhost systemd[1]: var-lib-containers-storage-overlay-ec1cf342f96642bbc004871e0b5d66c4a7d138d4eaf1039a246d704c3de3d38c-merged.mount: Deactivated successfully.
Dec 5 04:57:50 localhost podman[297121]: 2025-12-05 09:57:50.66882385 +0000 UTC m=+0.225879422 container remove 53c038d28fa026f556005a974d5bbb173da4d9cc5bbd6a0d7c6d233bc27f67e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_diffie, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vendor=Red Hat, Inc., architecture=x86_64, release=1763362218, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Dec 5 04:57:50 localhost systemd[1]: libpod-conmon-53c038d28fa026f556005a974d5bbb173da4d9cc5bbd6a0d7c6d233bc27f67e5.scope: Deactivated successfully.
Dec 5 04:57:50 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)...
Dec 5 04:57:50 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)...
Dec 5 04:57:50 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 5 04:57:50 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 5 04:57:50 localhost ceph-mon[292820]: Reconfiguring osd.0 (monmap changed)...
Dec 5 04:57:50 localhost ceph-mon[292820]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 5 04:57:50 localhost ceph-mon[292820]: Added label _no_schedule to host np0005546416.localdomain
Dec 5 04:57:50 localhost ceph-mon[292820]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005546416.localdomain
Dec 5 04:57:50 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:50 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:50 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 5 04:57:51 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.26936 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005546416.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 5 04:57:51 localhost podman[297198]:
Dec 5 04:57:51 localhost podman[297198]: 2025-12-05 09:57:51.536761126 +0000 UTC m=+0.086337475 container create c6e2f1032a23a17e182e2b4ba19f33f546aa09447ca48c5c25126392f1e95e4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_mestorf, architecture=x86_64, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, RELEASE=main, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-type=git)
Dec 5 04:57:51 localhost systemd[1]: Started libpod-conmon-c6e2f1032a23a17e182e2b4ba19f33f546aa09447ca48c5c25126392f1e95e4d.scope.
Dec 5 04:57:51 localhost podman[297198]: 2025-12-05 09:57:51.505904414 +0000 UTC m=+0.055480773 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:57:51 localhost systemd[1]: Started libcrun container.
Dec 5 04:57:51 localhost podman[297198]: 2025-12-05 09:57:51.637564947 +0000 UTC m=+0.187141276 container init c6e2f1032a23a17e182e2b4ba19f33f546aa09447ca48c5c25126392f1e95e4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_mestorf, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, ceph=True, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, architecture=x86_64, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 5 04:57:51 localhost systemd[1]: tmp-crun.yWufd3.mount: Deactivated successfully.
Dec 5 04:57:51 localhost podman[297198]: 2025-12-05 09:57:51.659437323 +0000 UTC m=+0.209013652 container start c6e2f1032a23a17e182e2b4ba19f33f546aa09447ca48c5c25126392f1e95e4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_mestorf, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, release=1763362218, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, version=7, maintainer=Guillaume Abrioux , RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7)
Dec 5 04:57:51 localhost tender_mestorf[297213]: 167 167
Dec 5 04:57:51 localhost podman[297198]: 2025-12-05 09:57:51.661709043 +0000 UTC m=+0.211285422 container attach c6e2f1032a23a17e182e2b4ba19f33f546aa09447ca48c5c25126392f1e95e4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_mestorf, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=1763362218, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, ceph=True, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 04:57:51 localhost systemd[1]: libpod-c6e2f1032a23a17e182e2b4ba19f33f546aa09447ca48c5c25126392f1e95e4d.scope: Deactivated successfully.
Dec 5 04:57:51 localhost podman[297198]: 2025-12-05 09:57:51.665326664 +0000 UTC m=+0.214903043 container died c6e2f1032a23a17e182e2b4ba19f33f546aa09447ca48c5c25126392f1e95e4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_mestorf, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, name=rhceph, vcs-type=git, architecture=x86_64, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7)
Dec 5 04:57:51 localhost podman[297218]: 2025-12-05 09:57:51.771001726 +0000 UTC m=+0.091467254 container remove c6e2f1032a23a17e182e2b4ba19f33f546aa09447ca48c5c25126392f1e95e4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_mestorf, build-date=2025-11-26T19:44:28Z, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vendor=Red Hat, Inc., release=1763362218, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, RELEASE=main)
Dec 5 04:57:51 localhost systemd[1]: libpod-conmon-c6e2f1032a23a17e182e2b4ba19f33f546aa09447ca48c5c25126392f1e95e4d.scope: Deactivated successfully.
Dec 5 04:57:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:51 localhost ceph-mon[292820]: Reconfiguring osd.3 (monmap changed)...
Dec 5 04:57:51 localhost ceph-mon[292820]: Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 5 04:57:51 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 5 04:57:51 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 5 04:57:52 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 5 04:57:52 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 5 04:57:52 localhost systemd[1]: var-lib-containers-storage-overlay-6b205ca90302c15e07f454eef62d0a72fbe64e1bdfb110ada66c366347357674-merged.mount: Deactivated successfully.
Dec 5 04:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 04:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 04:57:52 localhost podman[297294]: 2025-12-05 09:57:52.673565771 +0000 UTC m=+0.096998355 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 04:57:52 localhost podman[297294]: 2025-12-05 09:57:52.710946864 +0000 UTC m=+0.134379498 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 04:57:52 localhost podman[297296]: 2025-12-05 09:57:52.720595342 +0000 UTC m=+0.139287900 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125)
Dec 5 04:57:52 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 04:57:52 localhost podman[297296]: 2025-12-05 09:57:52.734243504 +0000 UTC m=+0.152936102 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=edpm)
Dec 5 04:57:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.44291 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005546416.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 5 04:57:52 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 04:57:52 localhost podman[297322]:
Dec 5 04:57:52 localhost podman[297322]: 2025-12-05 09:57:52.783357999 +0000 UTC m=+0.170281327 container create 7bc1e66b477bcc6af9393d06adcb4d91165c66f5c6f8e9f2b1365811bee68a53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_nobel, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., ceph=True, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.expose-services=, release=1763362218)
Dec 5 04:57:52 localhost ceph-mgr[286454]: [cephadm INFO root] Removed host np0005546416.localdomain
Dec 5 04:57:52 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removed host np0005546416.localdomain
Dec 5 04:57:52 localhost podman[297322]: 2025-12-05 09:57:52.703934298 +0000 UTC m=+0.090857656 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:57:52 localhost systemd[1]: Started libpod-conmon-7bc1e66b477bcc6af9393d06adcb4d91165c66f5c6f8e9f2b1365811bee68a53.scope.
Dec 5 04:57:52 localhost podman[297295]: 2025-12-05 09:57:52.823381444 +0000 UTC m=+0.246323502 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 04:57:52 localhost podman[297295]: 2025-12-05 09:57:52.833595349 +0000 UTC m=+0.256537437 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 5 04:57:52 localhost systemd[1]: Started libcrun container.
Dec 5 04:57:52 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 04:57:52 localhost podman[297322]: 2025-12-05 09:57:52.854332339 +0000 UTC m=+0.241255707 container init 7bc1e66b477bcc6af9393d06adcb4d91165c66f5c6f8e9f2b1365811bee68a53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_nobel, GIT_CLEAN=True, distribution-scope=public, version=7, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:57:52 localhost podman[297322]: 2025-12-05 09:57:52.86568487 +0000 UTC m=+0.252608258 container start 7bc1e66b477bcc6af9393d06adcb4d91165c66f5c6f8e9f2b1365811bee68a53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_nobel, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, name=rhceph, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1763362218, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container)
Dec 5 04:57:52 localhost podman[297322]: 2025-12-05 09:57:52.865894267 +0000 UTC m=+0.252817625 container attach 7bc1e66b477bcc6af9393d06adcb4d91165c66f5c6f8e9f2b1365811bee68a53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_nobel, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, com.redhat.component=rhceph-container, release=1763362218, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:57:52 localhost condescending_nobel[297369]: 167 167
Dec 5 04:57:52 localhost systemd[1]: libpod-7bc1e66b477bcc6af9393d06adcb4d91165c66f5c6f8e9f2b1365811bee68a53.scope: Deactivated successfully.
Dec 5 04:57:52 localhost podman[297322]: 2025-12-05 09:57:52.871476139 +0000 UTC m=+0.258399527 container died 7bc1e66b477bcc6af9393d06adcb4d91165c66f5c6f8e9f2b1365811bee68a53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_nobel, architecture=x86_64, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-type=git, distribution-scope=public, version=7, ceph=True, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Dec 5 04:57:52 localhost nova_compute[280228]: 2025-12-05 09:57:52.960 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:52 localhost podman[297375]: 2025-12-05 09:57:52.969320399 +0000 UTC m=+0.087436841 container remove 7bc1e66b477bcc6af9393d06adcb4d91165c66f5c6f8e9f2b1365811bee68a53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_nobel, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, version=7, io.buildah.version=1.41.4, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, name=rhceph, build-date=2025-11-26T19:44:28Z)
Dec 5 04:57:52 localhost systemd[1]: libpod-conmon-7bc1e66b477bcc6af9393d06adcb4d91165c66f5c6f8e9f2b1365811bee68a53.scope: Deactivated successfully.
Dec 5 04:57:52 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:52 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 5 04:57:52 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:57:52 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:52 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:57:52 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 5 04:57:52 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain"} : dispatch
Dec 5 04:57:52 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:52 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain"} : dispatch
Dec 5 04:57:52 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain"}]': finished
Dec 5 04:57:53 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 5 04:57:53 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 5 04:57:53 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 5 04:57:53 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 5 04:57:53 localhost systemd[1]: var-lib-containers-storage-overlay-b12b31d4e61264bf96ba716090e1775c61b62dc65b7f6a9402e7779aca2e313e-merged.mount: Deactivated successfully.
Dec 5 04:57:53 localhost podman[297445]:
Dec 5 04:57:53 localhost podman[297445]: 2025-12-05 09:57:53.619825664 +0000 UTC m=+0.084109357 container create 76febbc5ec790959aae1407a4e4e48423708e0d5f952fe6b117109bbeba23f6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_mahavira, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, version=7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, io.openshift.expose-services=)
Dec 5 04:57:53 localhost podman[297445]: 2025-12-05 09:57:53.579088306 +0000 UTC m=+0.043372019 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:57:53 localhost systemd[1]: Started libpod-conmon-76febbc5ec790959aae1407a4e4e48423708e0d5f952fe6b117109bbeba23f6b.scope.
Dec 5 04:57:53 localhost systemd[1]: Started libcrun container.
Dec 5 04:57:53 localhost podman[297445]: 2025-12-05 09:57:53.737578378 +0000 UTC m=+0.201862041 container init 76febbc5ec790959aae1407a4e4e48423708e0d5f952fe6b117109bbeba23f6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_mahavira, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , release=1763362218, ceph=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 04:57:53 localhost friendly_mahavira[297460]: 167 167
Dec 5 04:57:53 localhost systemd[1]: libpod-76febbc5ec790959aae1407a4e4e48423708e0d5f952fe6b117109bbeba23f6b.scope: Deactivated successfully.
Dec 5 04:57:53 localhost podman[297445]: 2025-12-05 09:57:53.752598611 +0000 UTC m=+0.216882274 container start 76febbc5ec790959aae1407a4e4e48423708e0d5f952fe6b117109bbeba23f6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_mahavira, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, version=7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 04:57:53 localhost podman[297445]: 2025-12-05 09:57:53.752771347 +0000 UTC m=+0.217055070 container attach 76febbc5ec790959aae1407a4e4e48423708e0d5f952fe6b117109bbeba23f6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_mahavira, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=rhceph-container, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, version=7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 5 04:57:53 localhost podman[297445]: 2025-12-05 09:57:53.754994606 +0000 UTC m=+0.219278289 container died 76febbc5ec790959aae1407a4e4e48423708e0d5f952fe6b117109bbeba23f6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_mahavira, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, version=7, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 5 04:57:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:53 localhost podman[297465]: 2025-12-05 09:57:53.943314127 +0000 UTC m=+0.181543904 container remove 76febbc5ec790959aae1407a4e4e48423708e0d5f952fe6b117109bbeba23f6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_mahavira, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 5 04:57:53 localhost systemd[1]: libpod-conmon-76febbc5ec790959aae1407a4e4e48423708e0d5f952fe6b117109bbeba23f6b.scope: Deactivated successfully.
Dec 5 04:57:54 localhost ceph-mon[292820]: Removed host np0005546416.localdomain
Dec 5 04:57:54 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:54 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:54 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 5 04:57:54 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:57:54 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:57:54 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 5 04:57:54 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005546419 (monmap changed)...
Dec 5 04:57:54 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005546419 (monmap changed)...
Dec 5 04:57:54 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 5 04:57:54 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 5 04:57:54 localhost systemd[1]: var-lib-containers-storage-overlay-303ac298e6c4836b4b2005392de2c836bfa8b9dc8a0688718739826d6caa64af-merged.mount: Deactivated successfully.
Dec 5 04:57:54 localhost podman[297533]:
Dec 5 04:57:54 localhost podman[297533]: 2025-12-05 09:57:54.683091548 +0000 UTC m=+0.067291818 container create a032586dc806f4fee64149fa87e9e53ced780237b2c8ad039c0221f64569a3e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_hertz, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , ceph=True, RELEASE=main, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1763362218, architecture=x86_64, version=7, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, distribution-scope=public)
Dec 5 04:57:54 localhost systemd[1]: Started libpod-conmon-a032586dc806f4fee64149fa87e9e53ced780237b2c8ad039c0221f64569a3e7.scope.
Dec 5 04:57:54 localhost systemd[1]: Started libcrun container.
Dec 5 04:57:54 localhost podman[297533]: 2025-12-05 09:57:54.741763579 +0000 UTC m=+0.125963849 container init a032586dc806f4fee64149fa87e9e53ced780237b2c8ad039c0221f64569a3e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_hertz, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , version=7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, release=1763362218, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-type=git) Dec 5 04:57:54 localhost podman[297533]: 2025-12-05 09:57:54.750607862 +0000 UTC m=+0.134808122 container start a032586dc806f4fee64149fa87e9e53ced780237b2c8ad039c0221f64569a3e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_hertz, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vendor=Red Hat, Inc., release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_CLEAN=True) Dec 5 04:57:54 localhost amazing_hertz[297548]: 167 167 Dec 5 04:57:54 localhost podman[297533]: 2025-12-05 09:57:54.751570952 +0000 UTC m=+0.135771262 container attach a032586dc806f4fee64149fa87e9e53ced780237b2c8ad039c0221f64569a3e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_hertz, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, name=rhceph, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, 
GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , version=7) Dec 5 04:57:54 localhost systemd[1]: libpod-a032586dc806f4fee64149fa87e9e53ced780237b2c8ad039c0221f64569a3e7.scope: Deactivated successfully. Dec 5 04:57:54 localhost podman[297533]: 2025-12-05 09:57:54.753951616 +0000 UTC m=+0.138151926 container died a032586dc806f4fee64149fa87e9e53ced780237b2c8ad039c0221f64569a3e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_hertz, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, version=7, RELEASE=main, build-date=2025-11-26T19:44:28Z, vcs-type=git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
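The create/init/start/attach/died run above (with the remove that follows) is cephadm probing the host with a short-lived rhceph helper container; the lone "167 167" line is that helper's stdout, apparently the ceph uid/gid used in the image. A minimal sketch, assuming the journal has been exported to a plain file (the path below is illustrative), of pulling these podman lifecycle events back out:

import re

# Matches e.g. "Dec 5 04:57:54 localhost podman[297533]: 2025-12-05
# 09:57:54.683091548 +0000 UTC m=+0.067291818 container create <64-hex id> ..."
EVENT_RE = re.compile(
    r'^(?P<stamp>\w+ +\d+ [\d:]+) \S+ podman\[\d+\]: '
    r'[\d-]+ [\d:.]+ \+0000 UTC m=\+[\d.]+ container (?P<event>\w+) (?P<cid>[0-9a-f]{64})'
)

def container_events(path):
    """Yield (syslog timestamp, event type, container id) for each podman event."""
    with open(path) as fh:
        for line in fh:
            m = EVENT_RE.match(line)
            if m:
                yield m.group('stamp'), m.group('event'), m.group('cid')

for stamp, event, cid in container_events('/var/log/messages'):  # illustrative path
    print(stamp, event, cid[:12])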
Dec 5 04:57:54 localhost podman[297533]: 2025-12-05 09:57:54.659907093 +0000 UTC m=+0.044107353 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:57:54 localhost ceph-mon[292820]: mon.np0005546419@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:57:54 localhost podman[297553]: 2025-12-05 09:57:54.846721329 +0000 UTC m=+0.082867169 container remove a032586dc806f4fee64149fa87e9e53ced780237b2c8ad039c0221f64569a3e7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_hertz, architecture=x86_64, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main)
Dec 5 04:57:54 localhost systemd[1]: libpod-conmon-a032586dc806f4fee64149fa87e9e53ced780237b2c8ad039c0221f64569a3e7.scope: Deactivated successfully.
Dec 5 04:57:54 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546420 (monmap changed)...
Dec 5 04:57:54 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546420 (monmap changed)...
Dec 5 04:57:54 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 5 04:57:54 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 5 04:57:55 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:55 localhost ceph-mon[292820]: Reconfiguring mon.np0005546419 (monmap changed)...
Dec 5 04:57:55 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:55 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:57:55 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 5 04:57:55 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:55 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:55 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:57:55 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:57:55 localhost nova_compute[280228]: 2025-12-05 09:57:55.076 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:55 localhost systemd[1]: var-lib-containers-storage-overlay-340c0e740ceab419039b0f624a0b547c007ae7f94444be147ee2aed36ef06243-merged.mount: Deactivated successfully.
Dec 5 04:57:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:55 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 5 04:57:55 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 5 04:57:55 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 5 04:57:55 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 5 04:57:56 localhost ceph-mon[292820]: Reconfiguring crash.np0005546420 (monmap changed)...
Dec 5 04:57:56 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 5 04:57:56 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:56 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 5 04:57:56 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:56 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Dec 5 04:57:56 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Dec 5 04:57:56 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 5 04:57:56 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 5 04:57:57 localhost ceph-mon[292820]: Reconfiguring osd.1 (monmap changed)...
Dec 5 04:57:57 localhost ceph-mon[292820]: Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 5 04:57:57 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:57 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 5 04:57:57 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:57 localhost openstack_network_exporter[241668]: ERROR 09:57:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:57:57 localhost openstack_network_exporter[241668]: ERROR 09:57:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 04:57:57 localhost openstack_network_exporter[241668]: ERROR 09:57:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 04:57:57 localhost openstack_network_exporter[241668]: ERROR 09:57:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 04:57:57 localhost openstack_network_exporter[241668]:
Dec 5 04:57:57 localhost openstack_network_exporter[241668]: ERROR 09:57:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 04:57:57 localhost openstack_network_exporter[241668]:
Dec 5 04:57:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:57 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 5 04:57:57 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 5 04:57:57 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 5 04:57:57 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 5 04:57:57 localhost nova_compute[280228]: 2025-12-05 09:57:57.963 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:57:58 localhost ceph-mon[292820]: Reconfiguring osd.4 (monmap changed)...
Dec 5 04:57:58 localhost ceph-mon[292820]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 5 04:57:58 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:58 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:57:58 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:58 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:57:58 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 5 04:57:58 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 5 04:57:58 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 5 04:57:58 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 5 04:57:59 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 5 04:57:59 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 5 04:57:59 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:59 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:57:59 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:57:59 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:57:59 localhost ceph-mon[292820]: mon.np0005546419@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:57:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:57:59 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005546420 (monmap changed)...
Dec 5 04:57:59 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005546420 (monmap changed)...
Dec 5 04:57:59 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 5 04:57:59 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 5 04:58:00 localhost nova_compute[280228]: 2025-12-05 09:58:00.079 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:58:00 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 5 04:58:00 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 5 04:58:00 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:00 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 04:58:00 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:00 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546421 (monmap changed)...
Dec 5 04:58:00 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546421 (monmap changed)...
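The cmd={"prefix": "auth get-or-create", ...} entries above are the mgr dispatching mon commands as JSON; the same request can be issued from the CLI. A minimal sketch, shelling out to `ceph` via subprocess (mirroring how nova_compute invokes `ceph df` later in this log); the entity and caps are copied from the entry above, everything else is illustrative:

import subprocess

entity = "mgr.np0005546420.aoeylc"
caps = ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]

# `ceph auth get-or-create` takes the entity followed by daemon/capability pairs,
# which is the CLI form of the JSON mon command dispatched above.
out = subprocess.run(
    ["ceph", "auth", "get-or-create", entity, *caps],
    capture_output=True, text=True, check=True,
).stdout
print(out)  # keyring section for the entity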
Dec 5 04:58:00 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 5 04:58:00 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 5 04:58:00 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.44295 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 5 04:58:00 localhost ceph-mgr[286454]: [cephadm INFO root] Saving service mon spec with placement label:mon
Dec 5 04:58:00 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Dec 5 04:58:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 04:58:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 04:58:01 localhost podman[297571]: 2025-12-05 09:58:01.207214625 +0000 UTC m=+0.087364317 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 5 04:58:01 localhost podman[297571]: 2025-12-05 09:58:01.218457932 +0000 UTC m=+0.098607624 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 5 04:58:01 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 04:58:01 localhost ceph-mon[292820]: Reconfiguring mon.np0005546420 (monmap changed)...
Dec 5 04:58:01 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 5 04:58:01 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:01 localhost ceph-mon[292820]: Reconfiguring crash.np0005546421 (monmap changed)...
Dec 5 04:58:01 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:58:01 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:01 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 5 04:58:01 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:58:01 localhost ceph-mon[292820]: Saving service mon spec with placement label:mon
Dec 5 04:58:01 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:01 localhost podman[297570]: 2025-12-05 09:58:01.308882042 +0000 UTC m=+0.190956774 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 5 04:58:01 localhost podman[297570]: 2025-12-05 09:58:01.417861496 +0000 UTC m=+0.299936218 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 04:58:01 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 04:58:01 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Dec 5 04:58:01 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Dec 5 04:58:01 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 5 04:58:01 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 5 04:58:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:58:02 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.44299 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005546420", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 5 04:58:02 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:02 localhost ceph-mon[292820]: Reconfiguring osd.2 (monmap changed)...
Dec 5 04:58:02 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 5 04:58:02 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:02 localhost ceph-mon[292820]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 5 04:58:02 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 3beb3b06-0acc-4c91-8ad6-7d6b311cb82c (Updating node-proxy deployment (+4 -> 4))
Dec 5 04:58:02 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 3beb3b06-0acc-4c91-8ad6-7d6b311cb82c (Updating node-proxy deployment (+4 -> 4))
Dec 5 04:58:02 localhost ceph-mgr[286454]: [progress INFO root] Completed event 3beb3b06-0acc-4c91-8ad6-7d6b311cb82c (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 5 04:58:02 localhost nova_compute[280228]: 2025-12-05 09:58:02.965 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:58:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:58:03 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:03 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:03 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:58:03 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:03 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.26964 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005546420"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 5 04:58:03 localhost ceph-mgr[286454]: [cephadm INFO root] Remove daemons mon.np0005546420
Dec 5 04:58:03 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005546420
Dec 5 04:58:03 localhost ceph-mgr[286454]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005546420: new quorum should be ['np0005546418', 'np0005546421', 'np0005546419'] (from ['np0005546418', 'np0005546421', 'np0005546419'])
Dec 5 04:58:03 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005546420: new quorum should be ['np0005546418', 'np0005546421', 'np0005546419'] (from ['np0005546418', 'np0005546421', 'np0005546419'])
Dec 5 04:58:03 localhost ceph-mgr[286454]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005546420 from monmap...
Dec 5 04:58:03 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removing monitor np0005546420 from monmap...
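The "Safe to remove" line above is cephadm's pre-flight check before shrinking the monmap. A minimal sketch of the underlying arithmetic (an assumption about the logic, not a copy of cephadm's implementation): removal is safe when the monitors still in quorum, minus the victim, form a strict majority of the shrunken monmap.

def safe_to_remove(monmap, quorum, victim):
    """Return (is_safe, expected_new_quorum) after dropping `victim`."""
    new_map = [m for m in monmap if m != victim]
    new_quorum = [m for m in quorum if m != victim]
    # A Paxos quorum needs a strict majority of the monmap it belongs to.
    return len(new_quorum) > len(new_map) // 2, new_quorum

print(safe_to_remove(
    ["np0005546418", "np0005546421", "np0005546419", "np0005546420"],
    ["np0005546418", "np0005546421", "np0005546419", "np0005546420"],
    "np0005546420",
))
# (True, ['np0005546418', 'np0005546421', 'np0005546419'])  -- matches the log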
Dec 5 04:58:03 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005546420 from np0005546420.localdomain -- ports []
Dec 5 04:58:03 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005546420 from np0005546420.localdomain -- ports []
Dec 5 04:58:03 localhost ceph-mon[292820]: mon.np0005546419@3(peon) e11 my rank is now 2 (was 3)
Dec 5 04:58:03 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Dec 5 04:58:03 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Dec 5 04:58:03 localhost ceph-mon[292820]: mon.np0005546419@2(probing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546418"} v 0)
Dec 5 04:58:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546418"} : dispatch
Dec 5 04:58:03 localhost ceph-mon[292820]: mon.np0005546419@2(probing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546419"} v 0)
Dec 5 04:58:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 5 04:58:03 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : mon.np0005546419 calling monitor election
Dec 5 04:58:03 localhost ceph-mon[292820]: paxos.2).electionLogic(50) init, last seen epoch 50
Dec 5 04:58:03 localhost ceph-mon[292820]: mon.np0005546419@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:58:03 localhost ceph-mon[292820]: mon.np0005546419@2(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:58:03 localhost ceph-mon[292820]: mon.np0005546419@2(electing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 5 04:58:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 5 04:58:03 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:58:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:58:03 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:58:03 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 5 04:58:03 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:58:03 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:03 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:03 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:03 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:03 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:03 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:03 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:03 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:58:03.905 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:58:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:58:03.905 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:58:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:58:03.906 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:58:04 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:04 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:04 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:04 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:04 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:04 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:04 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:04 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:04 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:58:04 localhost ceph-mon[292820]: Remove daemons mon.np0005546420
Dec 5 04:58:04 localhost ceph-mon[292820]: Safe to remove mon.np0005546420: new quorum should be ['np0005546418', 'np0005546421', 'np0005546419'] (from ['np0005546418', 'np0005546421', 'np0005546419'])
Dec 5 04:58:04 localhost ceph-mon[292820]: Removing monitor np0005546420 from monmap...
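Each "Updating <host>:/etc/ceph/ceph.conf" pass above is driven by the `config generate-minimal-conf` mon command dispatched in the same entries: the mgr asks the quorum for a minimal client config and redistributes it after the monmap change. A minimal sketch of doing the same by hand (requires admin keyring privileges; overwriting /etc/ceph/ceph.conf directly is shown purely for illustration):

import subprocess

# Same mon command as dispatched in the audit entries above.
minimal = subprocess.run(
    ["ceph", "config", "generate-minimal-conf"],
    capture_output=True, text=True, check=True,
).stdout

# The path cephadm is updating on each host in the log.
with open("/etc/ceph/ceph.conf", "w") as fh:
    fh.write(minimal)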
Dec 5 04:58:04 localhost ceph-mon[292820]: Removing daemon mon.np0005546420 from np0005546420.localdomain -- ports []
Dec 5 04:58:04 localhost ceph-mon[292820]: mon.np0005546419 calling monitor election
Dec 5 04:58:04 localhost ceph-mon[292820]: mon.np0005546418 calling monitor election
Dec 5 04:58:04 localhost ceph-mon[292820]: mon.np0005546421 calling monitor election
Dec 5 04:58:04 localhost ceph-mon[292820]: mon.np0005546418 is new leader, mons np0005546418,np0005546421,np0005546419 in quorum (ranks 0,1,2)
Dec 5 04:58:04 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:58:04 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:04 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:04 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:04 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:04 localhost ceph-mon[292820]: overall HEALTH_OK
Dec 5 04:58:04 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events
Dec 5 04:58:04 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 04:58:05 localhost nova_compute[280228]: 2025-12-05 09:58:05.081 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 04:58:05 localhost nova_compute[280228]: 2025-12-05 09:58:05.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 04:58:05 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev bf8da9a6-0fa2-4875-b0d4-a40d4a3c8fb8 (Updating node-proxy deployment (+4 -> 4))
Dec 5 04:58:05 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev bf8da9a6-0fa2-4875-b0d4-a40d4a3c8fb8 (Updating node-proxy deployment (+4 -> 4))
Dec 5 04:58:05 localhost ceph-mgr[286454]: [progress INFO root] Completed event bf8da9a6-0fa2-4875-b0d4-a40d4a3c8fb8 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 04:58:05 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 04:58:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:58:05 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:05 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:05 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:05 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:05 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:05 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:05 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:05 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:05 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:05 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:05 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:05 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:05 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:05 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:05 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 5 04:58:05 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 5 04:58:05 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 5 04:58:05 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 5 04:58:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:58:05 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:58:05 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 5 04:58:05 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 5 04:58:06 localhost nova_compute[280228]: 2025-12-05 09:58:06.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:58:06 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 5 04:58:06 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:58:06 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 5 04:58:06 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546418.garyvl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:58:06 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 5 04:58:07 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 5 04:58:07 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546418 (monmap changed)...
Dec 5 04:58:07 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546418 (monmap changed)...
Dec 5 04:58:07 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 5 04:58:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:58:07 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:58:07 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:58:07 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 5 04:58:07 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 5 04:58:07 localhost nova_compute[280228]: 2025-12-05 09:58:07.502 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:58:07 localhost nova_compute[280228]: 2025-12-05 09:58:07.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:58:07 localhost nova_compute[280228]: 2025-12-05 09:58:07.523 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:58:07 localhost nova_compute[280228]: 2025-12-05 09:58:07.523 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:58:07 localhost nova_compute[280228]: 2025-12-05 09:58:07.524 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:58:07 localhost nova_compute[280228]: 2025-12-05 09:58:07.524 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 5 04:58:07 localhost nova_compute[280228]: 2025-12-05 09:58:07.524 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 04:58:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:58:07 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 04:58:07 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2722350363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 04:58:07 localhost nova_compute[280228]: 2025-12-05 09:58:07.968 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 04:58:07 localhost nova_compute[280228]: 2025-12-05 09:58:07.969 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:58:07 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain.devices.0}] v 0)
Dec 5 04:58:08 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546418.localdomain}] v 0)
Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.020 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.020 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 04:58:08 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546419 (monmap changed)...
Dec 5 04:58:08 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546419 (monmap changed)...
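The `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` subprocess above is nova's resource tracker sizing the Ceph-backed storage. A minimal sketch of the same call and the cluster-wide totals it can read back; the exact "stats" key layout is an assumption about the installed Ceph release:

import json
import subprocess

# The exact command nova_compute logs above.
raw = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout

# Cluster-wide totals, in bytes (assumed schema: top-level "stats" object).
stats = json.loads(raw)["stats"]
print(stats["total_bytes"], stats["total_avail_bytes"])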
Dec 5 04:58:08 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 5 04:58:08 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:58:08 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:58:08 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:58:08 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 5 04:58:08 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 5 04:58:08 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:08 localhost ceph-mon[292820]: Reconfiguring crash.np0005546418 (monmap changed)...
Dec 5 04:58:08 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:58:08 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:08 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 5 04:58:08 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:58:08 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:08 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:58:08 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:08 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.253 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.254 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11742MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.255 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.256 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.323 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.324 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.324 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.369 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 04:58:08 localhost podman[298066]:
Dec 5 04:58:08 localhost podman[298066]: 2025-12-05 09:58:08.681674911 +0000 UTC m=+0.090491103 container create ea32add3771793952df78fbdb4f5f9ae218e8dffdd52ac93184bb3d46ebb06b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_pascal, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, release=1763362218, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux , version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main)
Dec 5 04:58:08 localhost systemd[1]: Started libpod-conmon-ea32add3771793952df78fbdb4f5f9ae218e8dffdd52ac93184bb3d46ebb06b2.scope.
Dec 5 04:58:08 localhost systemd[1]: Started libcrun container.
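The "Final resource view" entry above packs the tracker's headline numbers into key=value pairs. A minimal sketch of pulling them back out; the regex is tailored to the exact layout nova logs here and is otherwise an assumption:

import re

line = ("Final resource view: name=np0005546419.localdomain phys_ram=15738MB "
        "used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 "
        "used_vcpus=1 pci_stats=[]")

# Each field is "<key>=<value>" separated by whitespace.
fields = dict(re.findall(r'(\w+)=(\S+)', line))
print(fields["phys_ram"], fields["used_vcpus"])  # -> 15738MB 1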
Dec 5 04:58:08 localhost podman[298066]: 2025-12-05 09:58:08.647731844 +0000 UTC m=+0.056548056 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:58:08 localhost podman[298066]: 2025-12-05 09:58:08.75227929 +0000 UTC m=+0.161095482 container init ea32add3771793952df78fbdb4f5f9ae218e8dffdd52ac93184bb3d46ebb06b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_pascal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, architecture=x86_64, maintainer=Guillaume Abrioux , name=rhceph, release=1763362218, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:58:08 localhost systemd[1]: tmp-crun.5K3PVa.mount: Deactivated successfully. Dec 5 04:58:08 localhost podman[298066]: 2025-12-05 09:58:08.766924833 +0000 UTC m=+0.175741015 container start ea32add3771793952df78fbdb4f5f9ae218e8dffdd52ac93184bb3d46ebb06b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_pascal, GIT_BRANCH=main, version=7, architecture=x86_64, distribution-scope=public, release=1763362218, io.buildah.version=1.41.4, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 5 04:58:08 localhost podman[298066]: 2025-12-05 09:58:08.767437838 +0000 UTC m=+0.176254020 container attach ea32add3771793952df78fbdb4f5f9ae218e8dffdd52ac93184bb3d46ebb06b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_pascal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, architecture=x86_64, distribution-scope=public, 
url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux ) Dec 5 04:58:08 localhost suspicious_pascal[298081]: 167 167 Dec 5 04:58:08 localhost systemd[1]: libpod-ea32add3771793952df78fbdb4f5f9ae218e8dffdd52ac93184bb3d46ebb06b2.scope: Deactivated successfully. Dec 5 04:58:08 localhost podman[298066]: 2025-12-05 09:58:08.771014318 +0000 UTC m=+0.179830560 container died ea32add3771793952df78fbdb4f5f9ae218e8dffdd52ac93184bb3d46ebb06b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_pascal, distribution-scope=public, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , version=7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main) Dec 5 04:58:08 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 04:58:08 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1198660387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.849 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422 Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.859 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180 Dec 5 04:58:08 localhost podman[298086]: 2025-12-05 09:58:08.866520606 +0000 UTC m=+0.085098777 container remove ea32add3771793952df78fbdb4f5f9ae218e8dffdd52ac93184bb3d46ebb06b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_pascal, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, RELEASE=main, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, release=1763362218, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=) Dec 5 04:58:08 localhost systemd[1]: libpod-conmon-ea32add3771793952df78fbdb4f5f9ae218e8dffdd52ac93184bb3d46ebb06b2.scope: Deactivated successfully.
Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.884 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940 Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.886 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995 Dec 5 04:58:08 localhost nova_compute[280228]: 2025-12-05 09:58:08.886 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423 Dec 5 04:58:08 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 04:58:08 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 04:58:08 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Dec 5 04:58:08 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Dec 5 04:58:08 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) Dec 5 04:58:08 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 5 04:58:08 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:58:08 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:58:08 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005546419.localdomain Dec 5 04:58:08 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005546419.localdomain Dec 5 04:58:09 localhost ceph-mon[292820]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 5 04:58:09 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain Dec 5 04:58:09 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:09 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 5 04:58:09 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:09 localhost podman[298159]: Dec 5 04:58:09 localhost podman[298159]: 2025-12-05 09:58:09.630339949 +0000 UTC m=+0.078548675 container create de97ab494f9e2cb730376403895010040543a0c104e363f83896de969b1d0275 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_ride, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, release=1763362218, RELEASE=main, maintainer=Guillaume Abrioux ) Dec 5 04:58:09 localhost systemd[1]: Started libpod-conmon-de97ab494f9e2cb730376403895010040543a0c104e363f83896de969b1d0275.scope. Dec 5 04:58:09 localhost systemd[1]: var-lib-containers-storage-overlay-623fefbac9cf85bc83a409ca7f480d3e2e6b3b6ca37e360695e966d47cbabee8-merged.mount: Deactivated successfully. Dec 5 04:58:09 localhost podman[298159]: 2025-12-05 09:58:09.598117804 +0000 UTC m=+0.046326540 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:58:09 localhost systemd[1]: Started libcrun container. 
Dec 5 04:58:09 localhost podman[298159]: 2025-12-05 09:58:09.723508124 +0000 UTC m=+0.171716830 container init de97ab494f9e2cb730376403895010040543a0c104e363f83896de969b1d0275 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_ride, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., release=1763362218, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:58:09 localhost podman[298159]: 2025-12-05 09:58:09.735392841 +0000 UTC m=+0.183601547 container start de97ab494f9e2cb730376403895010040543a0c104e363f83896de969b1d0275 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_ride, io.buildah.version=1.41.4, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=) Dec 5 04:58:09 localhost podman[298159]: 2025-12-05 09:58:09.735606168 +0000 UTC m=+0.183814874 container attach de97ab494f9e2cb730376403895010040543a0c104e363f83896de969b1d0275 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_ride, release=1763362218, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, 
com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-type=git, version=7, RELEASE=main, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 5 04:58:09 localhost admiring_ride[298174]: 167 167 Dec 5 04:58:09 localhost systemd[1]: libpod-de97ab494f9e2cb730376403895010040543a0c104e363f83896de969b1d0275.scope: Deactivated successfully. Dec 5 04:58:09 localhost podman[298159]: 2025-12-05 09:58:09.739370434 +0000 UTC m=+0.187579170 container died de97ab494f9e2cb730376403895010040543a0c104e363f83896de969b1d0275 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_ride, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, RELEASE=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4) Dec 5 04:58:09 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:58:09 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_09:58:09 Dec 5 04:58:09 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 04:58:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:58:09 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 04:58:09 localhost ceph-mgr[286454]: [balancer INFO root] pools ['images', 'manila_metadata', '.mgr', 'backups', 'vms', 'volumes', 'manila_data'] Dec 5 04:58:09 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 04:58:09 localhost podman[298179]: 2025-12-05 09:58:09.83839051 +0000 UTC m=+0.089960427 container remove de97ab494f9e2cb730376403895010040543a0c104e363f83896de969b1d0275 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_ride, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux , version=7, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat 
Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, RELEASE=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, distribution-scope=public) Dec 5 04:58:09 localhost systemd[1]: libpod-conmon-de97ab494f9e2cb730376403895010040543a0c104e363f83896de969b1d0275.scope: Deactivated successfully. Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 04:58:09 localhost nova_compute[280228]: 2025-12-05 09:58:09.887 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210 Dec 5 04:58:09 localhost nova_compute[280228]: 2025-12-05 09:58:09.888 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858 Dec 5 04:58:09 localhost nova_compute[280228]: 2025-12-05 09:58:09.889 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862 Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target
0.0 quantized to 32 (current 32) Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 04:58:09 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.1810441094360693e-06 of space, bias 4.0, pg target 0.001741927228736274 quantized to 16 (current 16) Dec 5 04:58:09 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 04:58:09 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 04:58:09 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 04:58:09 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 04:58:09 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 04:58:09 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 04:58:09 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 04:58:09 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 04:58:09 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 04:58:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 04:58:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 04:58:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 04:58:09 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 04:58:09 localhost nova_compute[280228]: 2025-12-05 09:58:09.955 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312 Dec 5 04:58:09 localhost nova_compute[280228]: 2025-12-05 09:58:09.955 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315 Dec 5 04:58:09 localhost nova_compute[280228]: 2025-12-05 09:58:09.956 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004 Dec 5 04:58:09 localhost nova_compute[280228]: 2025-12-05 09:58:09.956 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105 Dec 5 04:58:09 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 04:58:10 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 04:58:10 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 04:58:10 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 04:58:10 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules:
backups, start_after= Dec 5 04:58:10 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 04:58:10 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 04:58:10 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Dec 5 04:58:10 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Dec 5 04:58:10 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0) Dec 5 04:58:10 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 5 04:58:10 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:58:10 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:58:10 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005546419.localdomain Dec 5 04:58:10 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005546419.localdomain Dec 5 04:58:10 localhost nova_compute[280228]: 2025-12-05 09:58:10.158 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Dec 5 04:58:10 localhost ceph-mon[292820]: Reconfiguring osd.0 (monmap changed)...
Dec 5 04:58:10 localhost ceph-mon[292820]: Reconfiguring daemon osd.0 on np0005546419.localdomain Dec 5 04:58:10 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:10 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:10 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 5 04:58:10 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:10 localhost podman[298256]: Dec 5 04:58:10 localhost podman[298256]: 2025-12-05 09:58:10.653776144 +0000 UTC m=+0.080289058 container create ac3f70bc56f5ddaeed2da7bad44e7c53067628cb4a09b87d727df7bcad48088c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_hellman, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, architecture=x86_64, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, ceph=True, io.buildah.version=1.41.4, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:58:10 localhost nova_compute[280228]: 2025-12-05 09:58:10.681 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116 Dec 5 04:58:10 localhost systemd[1]: Started libpod-conmon-ac3f70bc56f5ddaeed2da7bad44e7c53067628cb4a09b87d727df7bcad48088c.scope.
Dec 5 04:58:10 localhost systemd[1]: var-lib-containers-storage-overlay-ef59733c3e04ca312fc26d96deb31c444581a1f8c8b5f01ef54ca338b6d4b5eb-merged.mount: Deactivated successfully. Dec 5 04:58:10 localhost nova_compute[280228]: 2025-12-05 09:58:10.701 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333 Dec 5 04:58:10 localhost nova_compute[280228]: 2025-12-05 09:58:10.702 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929 Dec 5 04:58:10 localhost nova_compute[280228]: 2025-12-05 09:58:10.703 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210 Dec 5 04:58:10 localhost nova_compute[280228]: 2025-12-05 09:58:10.703 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210 Dec 5 04:58:10 localhost nova_compute[280228]: 2025-12-05 09:58:10.704 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210 Dec 5 04:58:10 localhost systemd[1]: Started libcrun container. Dec 5 04:58:10 localhost podman[298256]: 2025-12-05 09:58:10.617910807 +0000 UTC m=+0.044423751 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:58:10 localhost podman[298256]: 2025-12-05 09:58:10.720548825 +0000 UTC m=+0.147061739 container init ac3f70bc56f5ddaeed2da7bad44e7c53067628cb4a09b87d727df7bcad48088c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_hellman, GIT_CLEAN=True, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc.) Dec 5 04:58:10 localhost systemd[1]: tmp-crun.VGrGSE.mount: Deactivated successfully.
Dec 5 04:58:10 localhost podman[298256]: 2025-12-05 09:58:10.73430959 +0000 UTC m=+0.160822504 container start ac3f70bc56f5ddaeed2da7bad44e7c53067628cb4a09b87d727df7bcad48088c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_hellman, GIT_BRANCH=main, RELEASE=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, version=7, name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:58:10 localhost podman[298256]: 2025-12-05 09:58:10.734621819 +0000 UTC m=+0.161134743 container attach ac3f70bc56f5ddaeed2da7bad44e7c53067628cb4a09b87d727df7bcad48088c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_hellman, build-date=2025-11-26T19:44:28Z, version=7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, vendor=Red Hat, Inc., ceph=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public) Dec 5 04:58:10 localhost silly_hellman[298271]: 167 167 Dec 5 04:58:10 localhost systemd[1]: libpod-ac3f70bc56f5ddaeed2da7bad44e7c53067628cb4a09b87d727df7bcad48088c.scope: Deactivated successfully. 
Dec 5 04:58:10 localhost podman[298256]: 2025-12-05 09:58:10.739115868 +0000 UTC m=+0.165628852 container died ac3f70bc56f5ddaeed2da7bad44e7c53067628cb4a09b87d727df7bcad48088c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_hellman, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, name=rhceph, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 04:58:10 localhost podman[298276]: 2025-12-05 09:58:10.84998456 +0000 UTC m=+0.098762019 container remove ac3f70bc56f5ddaeed2da7bad44e7c53067628cb4a09b87d727df7bcad48088c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_hellman, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, ceph=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-type=git, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_CLEAN=True) Dec 5 04:58:10 localhost systemd[1]: libpod-conmon-ac3f70bc56f5ddaeed2da7bad44e7c53067628cb4a09b87d727df7bcad48088c.scope: Deactivated successfully. Dec 5 04:58:11 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 04:58:11 localhost systemd[1]: var-lib-containers-storage-overlay-d3c11dff4e3206f98a207e4bee98b86ab9bbd8d1f730b3bc4e8fac90d0864af7-merged.mount: Deactivated successfully. Dec 5 04:58:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:58:11 localhost ceph-mon[292820]: Reconfiguring osd.3 (monmap changed)... 
Dec 5 04:58:11 localhost ceph-mon[292820]: Reconfiguring daemon osd.3 on np0005546419.localdomain Dec 5 04:58:11 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 04:58:11 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)... Dec 5 04:58:11 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)... Dec 5 04:58:11 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Dec 5 04:58:11 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:58:11 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:58:11 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:58:11 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain Dec 5 04:58:11 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain Dec 5 04:58:12 localhost nova_compute[280228]: 2025-12-05 09:58:12.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210 Dec 5 04:58:12 localhost nova_compute[280228]: 2025-12-05 09:58:12.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping...
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477 Dec 5 04:58:12 localhost podman[298351]: Dec 5 04:58:12 localhost podman[298351]: 2025-12-05 09:58:12.581310191 +0000 UTC m=+0.077878693 container create 6300a1729aa770f15d997f1ba726f6c6b5339654a5c10643412f03cc0a062853 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_hugle, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-type=git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, version=7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 5 04:58:12 localhost systemd[1]: Started libpod-conmon-6300a1729aa770f15d997f1ba726f6c6b5339654a5c10643412f03cc0a062853.scope. Dec 5 04:58:12 localhost systemd[1]: Started libcrun container. Dec 5 04:58:12 localhost podman[298351]: 2025-12-05 09:58:12.648439733 +0000 UTC m=+0.145008235 container init 6300a1729aa770f15d997f1ba726f6c6b5339654a5c10643412f03cc0a062853 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_hugle, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.expose-services=, RELEASE=main, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:58:12 localhost podman[298351]: 2025-12-05 09:58:12.550126439 +0000 UTC m=+0.046694971 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:58:12 localhost systemd[1]: tmp-crun.PukRv0.mount: Deactivated successfully. Dec 5 04:58:12 localhost priceless_hugle[298366]: 167 167 Dec 5 04:58:12 localhost systemd[1]: libpod-6300a1729aa770f15d997f1ba726f6c6b5339654a5c10643412f03cc0a062853.scope: Deactivated successfully.
Dec 5 04:58:12 localhost podman[298351]: 2025-12-05 09:58:12.665395606 +0000 UTC m=+0.161964108 container start 6300a1729aa770f15d997f1ba726f6c6b5339654a5c10643412f03cc0a062853 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_hugle, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, release=1763362218, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.41.4, GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container) Dec 5 04:58:12 localhost podman[298351]: 2025-12-05 09:58:12.665684845 +0000 UTC m=+0.162253397 container attach 6300a1729aa770f15d997f1ba726f6c6b5339654a5c10643412f03cc0a062853 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_hugle, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux , release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z) Dec 5 04:58:12 localhost podman[298351]: 2025-12-05 09:58:12.668498412 +0000 UTC m=+0.165066964 container died 6300a1729aa770f15d997f1ba726f6c6b5339654a5c10643412f03cc0a062853 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_hugle, CEPH_POINT_RELEASE=, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, 
io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., version=7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vcs-type=git) Dec 5 04:58:12 localhost systemd[1]: var-lib-containers-storage-overlay-af5aa08dedb19f0b5a505fbf446207fb3cddf85b5826c7cc864ae05ed61f7b11-merged.mount: Deactivated successfully. Dec 5 04:58:12 localhost podman[298371]: 2025-12-05 09:58:12.772487412 +0000 UTC m=+0.100116881 container remove 6300a1729aa770f15d997f1ba726f6c6b5339654a5c10643412f03cc0a062853 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_hugle, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , distribution-scope=public, RELEASE=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-11-26T19:44:28Z, architecture=x86_64) Dec 5 04:58:12 localhost systemd[1]: libpod-conmon-6300a1729aa770f15d997f1ba726f6c6b5339654a5c10643412f03cc0a062853.scope: Deactivated successfully. 
Dec 5 04:58:12 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 04:58:12 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.949 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.950 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.955 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cce7ee35-97fc-496f-864f-c91f9934a260', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:58:12.950792', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'e97b0502-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.125096169, 'message_signature': 'f3fc506bf9411088db1a95cbf3178677a7af8f201f7b839a04b84a277a9d3281'}]}, 'timestamp': '2025-12-05 09:58:12.956789', '_unique_id': '21e12d43099c4e3dacad51f4977d3a9b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.958 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.959 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.959 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
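Every failure in this capture bottoms out in the same frame, self.sock.connect(sa) raising ConnectionRefusedError [Errno 111]: nothing is accepting TCP connections at the broker endpoint, so each notification attempt dies before AMQP negotiation even starts. A quick check from the affected host is a bare TCP probe; a minimal sketch, assuming the broker host and the conventional AMQP port 5672 (neither appears in this capture, so substitute the values from the agent's transport_url). The Payload continuation of the ERROR record above follows the sketch.

    import socket

    # Hypothetical endpoint: take host/port from the transport_url in the
    # agent config; 5672 is the conventional AMQP port, not confirmed here.
    BROKER = ('np0005546419.localdomain', 5672)

    try:
        # Exercises the same syscall path the tracebacks show failing
        # inside amqp/transport.py (_connect -> sock.connect).
        with socket.create_connection(BROKER, timeout=3):
            print('broker TCP endpoint is reachable')
    except ConnectionRefusedError as exc:   # [Errno 111], as logged above
        print(f'connection refused -> broker down or not listening: {exc}')
    except OSError as exc:                  # DNS failure, timeout, ...
        print(f'other socket-level failure: {exc}')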
Payload={'message_id': 'd68a8206-52e4-4894-916a-d0e507c952e2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:58:12.959923', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'e97b94ea-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.125096169, 'message_signature': '1128e3806af44ffa26bd0d01232b70ada1a36dee04ec16c6ed5b1f4269fed5d6'}]}, 'timestamp': '2025-12-05 09:58:12.960481', '_unique_id': '59729c6531ec4d00a411d1be8e79cf34'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:58:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.961 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:12.962 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 04:58:13 localhost nova_compute[280228]: 2025-12-05 09:58:13.014 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263 Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.018 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.018 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
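The paired tracebacks, separated by 'The above exception was the direct cause of the following exception', are ordinary Python exception chaining: kombu/connection.py:450 re-raises the socket error with raise ConnectionError(str(exc)) from exc, and, as this capture itself shows, that ConnectionError name is bound in kombu to kombu.exceptions.OperationalError, which is why the final type differs from the builtin. A self-contained illustration of the mechanism (generic Python, not kombu code); the Payload continuation of the ERROR record above follows the sketch.

    # Mimics the chaining pattern visible at kombu/connection.py:450.
    def connect():
        raise ConnectionRefusedError(111, 'Connection refused')

    def ensure_connection():
        try:
            connect()
        except ConnectionRefusedError as exc:
            # Low-level error re-raised as a library-level error; 'from exc'
            # sets __cause__, producing the "direct cause" banner when printed.
            raise ConnectionError(str(exc)) from exc

    try:
        ensure_connection()
    except ConnectionError as exc:
        assert isinstance(exc.__cause__, ConnectionRefusedError)
        assert str(exc) == '[Errno 111] Connection refused'
        # traceback.print_exc() here would render both tracebacks joined by
        # 'The above exception was the direct cause of the following exception:'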
Payload={'message_id': 'd3cad40e-2644-4574-9658-508a90e354ac', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:58:12.962823', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e9847e8e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.137142251, 'message_signature': '9a8d60dea77d9af23b12e56e23d910151f081569d0bf1177ad23d3b6733c8d34'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:58:12.962823', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e98494aa-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.137142251, 'message_signature': 'ea4d298f0a33006812b842d4182e6a2003a05be6bca3fc893b7fc365952d0c72'}]}, 'timestamp': '2025-12-05 09:58:13.019469', '_unique_id': '06225887ae6e408daa5f9e680bc08863'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.020 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.022 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.022 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
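Per the fields logged above, samples carry counter_type 'cumulative' (network.incoming.packets, the packets.error counters) or 'gauge' (disk.device.capacity). A consumer turns cumulative counters into rates by differencing successive samples for the same resource_id; a minimal sketch using only the timestamp/counter_volume fields shown in these payloads (the second sample is invented for illustration). The Payload continuation of the ERROR record above follows the sketch.

    from datetime import datetime

    # Successive cumulative samples for one resource_id; field names as in
    # the logged payloads. s0 matches the logged sample; s1 is made up.
    s0 = {'counter_volume': 60, 'timestamp': '2025-12-05T09:58:12.959923'}
    s1 = {'counter_volume': 96, 'timestamp': '2025-12-05T09:58:42.959923'}

    def rate(prev: dict, cur: dict) -> float:
        """Units per second between two cumulative samples."""
        dt = (datetime.fromisoformat(cur['timestamp'])
              - datetime.fromisoformat(prev['timestamp'])).total_seconds()
        return (cur['counter_volume'] - prev['counter_volume']) / dt

    print(f"{rate(s0, s1):.2f} packets/s")   # 36 packets over 30 s -> 1.20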
Payload={'message_id': 'a61e2d6d-f13c-4bd6-8d9e-6f61edf6f5e4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:58:13.022494', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'e98525aa-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.125096169, 'message_signature': 'f4b110eb5785a278685f79175d3110a4f186d77eeb99cedfac5498a104418f8a'}]}, 'timestamp': '2025-12-05 09:58:13.023224', '_unique_id': '7329abb3da2f4cee81f9335282ba32cc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.024 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.026 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 04:58:13 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)... Dec 5 04:58:13 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)... Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.056 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.057 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
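Note that the Payload= bodies in this capture are Python reprs (single quotes, None, nested dicts), not JSON, so json.loads will reject them; ast.literal_eval parses them safely because they contain only literals. A sketch for recovering samples from one captured envelope, with the envelope text abbreviated here (in practice you would slice the full Payload={...} span out of the journal line); the Payload continuation of the ERROR record above follows the sketch.

    import ast

    # Abbreviated stand-in for a captured 'Payload={...}' body; real ones
    # are far larger. Being a Python repr, it needs literal_eval, not JSON.
    payload_text = (
        "{'event_type': 'telemetry.polling', 'priority': 'SAMPLE', "
        "'payload': {'samples': [{'counter_name': 'disk.device.write.latency', "
        "'counter_unit': 'ns', 'counter_volume': 3720587262, "
        "'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda'}]}}")

    envelope = ast.literal_eval(payload_text)   # literals only, no eval()
    for sample in envelope['payload']['samples']:
        print(sample['resource_id'], sample['counter_name'],
              sample['counter_volume'], sample['counter_unit'])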
Payload={'message_id': '97bd68b2-34b3-4092-a8db-77e8a23e8999', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:58:13.026475', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e98a6696-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.200830746, 'message_signature': 'ac2027e5e35b742344814d1a7ca6f3dfb729b1da5b331b9797da6cd3ee7647b9'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:58:13.026475', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e98a7f8c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.200830746, 'message_signature': 'f678daaa6d8358c3620aa8fbbb73d7f13d788efcaa6f63f0a80aeec36c39e308'}]}, 'timestamp': '2025-12-05 09:58:13.058218', '_unique_id': '652b7ce6002f4363834852eef2cb03bd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.059 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.060 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.061 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.061 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.063 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '09320e72-f068-467c-8770-c10b2dfbc94d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:58:13.061099', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e98b0a4c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.200830746, 'message_signature': 'b3bb847dcc9ca6a90bd9c57957939c4cf44400cae0f7c0ab0418654b739e46d0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:58:13.061099', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e98b23ce-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.200830746, 'message_signature': 'd87554df44e35f248825592d0882b2f700917693ef46997d2e24439984331622'}]}, 'timestamp': '2025-12-05 09:58:13.062496', '_unique_id': '34be97568ede4361b90b8e1658f8736d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.065 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.065 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.067 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '17dae957-0b66-4b30-a0d2-c9a3bf78a10a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:58:13.065760', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'e98bbfb4-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.125096169, 'message_signature': 'fd6008c4852becfed9671b07c7b9df98904c1ea999dd9b5cd2fceb5b322a39d2'}]}, 'timestamp': '2025-12-05 09:58:13.066511', '_unique_id': 'a9b12b1e207e4e518f2c02edb91978bd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.069 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.069 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.070 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.072 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '319bce9f-a0f1-45aa-9a68-dc914bba053b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:58:13.069746', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e98c5ac8-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.200830746, 'message_signature': 'de7a0d8ff40bbbfe08f5843958ae60e8ee41752e906536b5a0551f8a77523d79'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:58:13.069746', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e98c74ae-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.200830746, 'message_signature': '99908dc880eb134526a96b4ce4099f98802651368ce5b6e6d3a165d9499529de'}]}, 'timestamp': '2025-12-05 09:58:13.071072', '_unique_id': '119507366e9a453abbfdf0fb4540b976'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.074 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.074 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 04:58:13 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 5 04:58:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:58:13 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:13 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:13 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 5 04:58:13 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:58:13 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 5 04:58:13 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:58:13 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.090 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 5 04:58:13 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.092 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5c5e2599-221e-4d70-a029-f3b3d220c564', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:58:13.074638', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'e98f866c-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.264781, 'message_signature': '66a5d12b08f6339791506210702f25606dc646eae3b35623df972bfc3ceca8f0'}]}, 'timestamp': '2025-12-05 09:58:13.091201', '_unique_id': 'd6cb5452aed74686b9025cb43b32a381'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.093 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.094 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:58:13 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd1fbafa8-1ed0-4487-8a7a-20b9ac8fe56d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:58:13.094146', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'e99015f0-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.125096169, 'message_signature': '0646fee3252966f73192d7d9adb03ecbc4552aee8ff2ec7eef405a73008a9c6a'}]}, 'timestamp': '2025-12-05 09:58:13.094899', '_unique_id': '57438c6c22fb4e62864b127e61b2ac80'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:58:13 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:58:13 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.096 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.097 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.098 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.100 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2cec602c-26e2-4de6-8d9f-76e9d525c7ee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:58:13.098070', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'e990aeb6-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.125096169, 'message_signature': 'dfd38e849d3ee8edcb0808f12d0be7198edbd7a266dcd81c4ac47d6dda85c519'}]}, 'timestamp': '2025-12-05 09:58:13.098819', '_unique_id': 'c74c5ea2643d4bd3a74fa78af71684cf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.101 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.102 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.102 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.104 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '62be00ac-13b2-4a89-bec9-8ae3686eb3d6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:58:13.102117', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e9914c9a-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.137142251, 'message_signature': '9c6a46c6c75ad808fd3c01eb5bdae4d0653915f0c6f441929054acfd942cf6bf'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:58:13.102117', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e9916536-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.137142251, 'message_signature': 'dcc5b27e06c81fb91ca4258afe944bae6585f14df96a6693c94fab757b3d353a'}]}, 'timestamp': '2025-12-05 09:58:13.103486', '_unique_id': '43cab07f75d340eba57d54f06e691bc4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.106 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.106 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.107 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.109 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '28a1fb18-a350-478f-9724-6d214b355682', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:58:13.106695', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e991fdf2-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.200830746, 'message_signature': '1082cb38b3a210d36f58d488add3af64e7d823fa1697df9758d41a4ffc9b6da6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:58:13.106695', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e992177e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.200830746, 'message_signature': '19b532bdaae7896ab0a5782077291c9a1e98639f9c6e1d39c6cc6c3a7ed914b9'}]}, 'timestamp': '2025-12-05 09:58:13.108010', '_unique_id': '6432283593e641c89849c24903ea596a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.110 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.111 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.111 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '11e83472-5d41-43ad-85c9-2257b0150915', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:58:13.111188', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e992af22-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.200830746, 'message_signature': 'c748bee5544ff6544f5aff49b87908c28bd89e0d0bfd89658f7561e5fee6b465'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:58:13.111188', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e992c71e-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.200830746, 'message_signature': 'a498156bd73aa75eea8e653d6f664425ce06e358f3f56837a928c08b89ddb0e9'}]}, 'timestamp': '2025-12-05 09:58:13.112539', '_unique_id': '1c94e9429e4244c6871fc6d1baa11fa0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.113 12 ERROR oslo_messaging.notify.messaging
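The chained traceback above is the complete failure path for every sample in this burst: amqp's socket connect fails with ECONNREFUSED, kombu re-raises it as OperationalError, and oslo.messaging surfaces it from the notifier. A minimal sketch that reproduces the same chain against a broker that is not listening; the URL and timeout here are illustrative, not values taken from this host's configuration:

    import kombu

    # Assumption: nothing is listening on this host/port, so the socket
    # connect fails with ECONNREFUSED exactly as in the log above.
    conn = kombu.Connection("amqp://guest:guest@localhost:5672//",
                            connect_timeout=2)
    try:
        # Same entry point as impl_rabbit.py:957 in the traceback.
        conn.ensure_connection(max_retries=1)
    except kombu.exceptions.OperationalError as exc:
        # kombu's _reraise_as_library_errors wraps the socket error,
        # yielding "[Errno 111] Connection refused".
        print("broker unreachable:", exc)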
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.115 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.116 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 13050000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.118 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5be5be2b-b17d-4838-aec7-9ee10570ce42', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 13050000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T09:58:13.116108', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'e9936eee-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.264781, 'message_signature': '04f28dd801b3d845403762186405648b7b4bb5b0948ff021c56d6b027e42550f'}]}, 'timestamp': '2025-12-05 09:58:13.116814', '_unique_id': '22cd370fdc764322bd09220206e6f307'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.119 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.119 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.119 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
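Each failed notification logs its full payload, so the samples survive in the journal even though they never reached the broker. A small sketch of how the fields these records carry could be recovered; the variable names are hypothetical, and the payload string here is a trimmed stand-in for one of the Payload={...} dicts logged above (the logged dicts contain only Python literals, so ast.literal_eval can parse them):

    import ast

    # Hypothetical: `logged` would be the text after "Payload=" up to the
    # closing brace, extracted from one of the ERROR records above.
    logged = "{'event_type': 'telemetry.polling', 'payload': {'samples': []}}"
    payload = ast.literal_eval(logged)

    assert payload["event_type"] == "telemetry.polling"
    for sample in payload["payload"]["samples"]:
        # counter_name / counter_volume / resource_id mirror the DEBUG
        # _stats_to_sample records, e.g. disk.device.write.bytes / 397312.
        print(sample["counter_name"], sample["counter_volume"],
              sample["resource_id"])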
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.120 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '19bc1acd-21b0-4e62-9939-72c6d9ee085c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:58:13.119144', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e993df82-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.200830746, 'message_signature': '7d56f931138c26e000ef7b3d41528d547ddcf9a7afbc503d2b51e4cae5aa85c1'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:58:13.119144', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e993ee64-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.200830746, 'message_signature': '50a6f5d5f52d9df805a4aff7f6f7ebfbab5abe52386d2dedccb19ae343225494'}]}, 'timestamp': '2025-12-05 09:58:13.119945', '_unique_id': 'd174cd6d2fd84465ab2ed44b246d8b3d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.121 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.121 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.122 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.122 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
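The publisher side that produces these records is the standard oslo.messaging notifier: "notifications" in "Could not send notification to notifications" is the topic, "ceilometer.polling" the publisher_id, and SAMPLE the priority. A hedged sketch of an equivalent notifier; the transport URL is a placeholder, not this deployment's real one, which would come from ceilometer.conf:

    from oslo_config import cfg
    import oslo_messaging

    # Placeholder URL; the real transport_url lives in ceilometer.conf.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url="rabbit://guest:guest@controller:5672/")
    notifier = oslo_messaging.Notifier(
        transport, publisher_id="ceilometer.polling",
        driver="messagingv2", topics=["notifications"])

    # Notifier.sample() emits priority SAMPLE, matching the records above;
    # with the broker down this is where the OperationalError surfaces.
    notifier.sample({}, "telemetry.polling", {"samples": []})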
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.123 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ec4a8d25-14a5-4b03-ad9a-ed425ded9c7b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:58:13.122196', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'e9945746-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.125096169, 'message_signature': 'e65392091e8b02d0b48a05c6d2c83beb8c10d9d1039af89fba2960fe494aa83a'}]}, 'timestamp': '2025-12-05 09:58:13.122654', '_unique_id': 'cbb27d92e7d147e38b5418547e7aa1d3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.124 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.124 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.125 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9d988429-d626-415b-bbf3-2e31b9ecfc31', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:58:13.124618', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'e994b470-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.125096169, 'message_signature': 'd52feee572e4d27ed05cca709e5c07ee3f0f6ab4690faa8e561c7e97e7c0e0e5'}]}, 'timestamp': '2025-12-05 09:58:13.125044', '_unique_id': 'bc63ea828d464cb29b2bcbd42209931c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.126 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.126 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.126 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Payload={'message_id': 'cf8eaace-cf20-4664-9000-f68c9435b07c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T09:58:13.126617', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e994ffde-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.137142251, 'message_signature': '2d6428c4ffe38cc5bcb1b000bb44c53324be4bdebd9a09d6fe5195bb88f1652a'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T09:58:13.126617', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e9950d26-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.137142251, 'message_signature': '2cc4ef0a2d8008fb5e8fb10ae7c6a4fdfa15a2f0abd6a974285532dea5941ef5'}]}, 'timestamp': '2025-12-05 09:58:13.127269', '_unique_id': '925c4fab633d47a98d4052d3e2257382'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.127 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.128 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.128 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c15194ea-658d-4afa-8d6a-96eb06d899d2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:58:13.128602', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'e9954d90-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.125096169, 'message_signature': '3894b8f654751423bb6283c76bcbf8e86eb9467b405f890c3f09f7c156e20a48'}]}, 'timestamp': '2025-12-05 09:58:13.128889', '_unique_id': 'bb82871a2f534e52bab7e498cf954d8e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:58:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:58:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.129 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.130 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.130 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b20fa7f9-047b-4851-82fe-5f798ef6824a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T09:58:13.130182', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'e9958c38-d1c0-11f0-8ba6-fa163e982365', 'monotonic_time': 11808.125096169, 'message_signature': '280e2d0d2856bb8ba30d68e17cc6f236c8559474d0d00179546c93ea7e00e35c'}]}, 'timestamp': '2025-12-05 09:58:13.130492', '_unique_id': '0a66709921df4f298342e0a538db65ef'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging yield Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 04:58:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 04:58:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 ERROR oslo_messaging.notify.messaging Dec 5 04:58:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 09:58:13.131 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 04:58:13 localhost podman[298441]: Dec 5 04:58:13 localhost podman[298441]: 2025-12-05 09:58:13.661599212 +0000 UTC m=+0.079431882 container create 68e786b30848874f13c50f7268599fe7ec610470baa237ff0c2704c9062ace27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_shockley, name=rhceph, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:58:13 localhost systemd[1]: Started libpod-conmon-68e786b30848874f13c50f7268599fe7ec610470baa237ff0c2704c9062ace27.scope. Dec 5 04:58:13 localhost systemd[1]: Started libcrun container. 
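All of the ceilometer tracebacks above bottom out in the same two-step failure: the raw socket connect to the message broker fails with ConnectionRefusedError ([Errno 111], i.e. the TCP port actively refused because nothing is listening there), and kombu's _reraise_as_library_errors context manager re-raises it as kombu.exceptions.OperationalError, which is the exception oslo.messaging finally logs. A refused connection points at the broker itself being down or not yet listening, unlike a timeout, which would suggest a routing or firewall problem. A minimal reproduction sketch in Python; the broker host and URL are assumptions, since the configured transport_url never appears in this log:

    import socket
    import kombu

    BROKER_HOST = "rabbitmq.ctlplane.example"  # assumption: transport_url is not shown in this log
    BROKER_PORT = 5672

    # Raw TCP probe: errno 111 (refused) means the host answered and rejected the
    # connection, i.e. no broker listening; a timeout would instead point at
    # routing or firewalling.
    try:
        socket.create_connection((BROKER_HOST, BROKER_PORT), timeout=5).close()
        print("TCP connect OK")
    except ConnectionRefusedError as exc:
        print(f"actively refused, broker down or not listening: {exc}")
    except OSError as exc:
        print(f"unreachable or timed out: {exc}")

    # The same failure as oslo.messaging sees it: kombu re-raises the socket
    # error as kombu.exceptions.OperationalError inside _reraise_as_library_errors.
    conn = kombu.Connection(f"amqp://guest:guest@{BROKER_HOST}:{BROKER_PORT}//")
    try:
        conn.ensure_connection(max_retries=1)
    except kombu.exceptions.OperationalError as exc:
        print(f"wrapped as OperationalError: {exc}")
    finally:
        conn.release()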
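Note also that every dropped notification is logged together with its complete Payload=… dict, so the lost samples remain recoverable from the log itself. The dicts are Python reprs (single quotes, bare None), not JSON, so ast.literal_eval parses them where json.loads would fail. A sketch under that assumption, with the payload abbreviated to the fields actually read:

    import ast

    # Abbreviated copy of one dropped sample from a Payload=... entry above; in
    # practice this string is the text between "Payload=" and the trailing
    # ": kombu.exceptions.OperationalError ..." of the ERROR line.
    payload_repr = (
        "{'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', "
        "'payload': {'samples': [{'counter_name': 'network.incoming.bytes', "
        "'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, "
        "'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', "
        "'user_name': None}]}}"
    )

    payload = ast.literal_eval(payload_repr)  # tolerates single quotes and None, unlike json.loads
    for sample in payload["payload"]["samples"]:
        print(sample["counter_name"], sample["counter_volume"],
              sample["counter_unit"], sample["resource_id"])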
Dec 5 04:58:13 localhost podman[298441]: 2025-12-05 09:58:13.726715671 +0000 UTC m=+0.144548341 container init 68e786b30848874f13c50f7268599fe7ec610470baa237ff0c2704c9062ace27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_shockley, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, release=1763362218, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 04:58:13 localhost podman[298441]: 2025-12-05 09:58:13.627077457 +0000 UTC m=+0.044910147 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:58:13 localhost podman[298441]: 2025-12-05 09:58:13.736093151 +0000 UTC m=+0.153925821 container start 68e786b30848874f13c50f7268599fe7ec610470baa237ff0c2704c9062ace27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_shockley, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, release=1763362218, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, architecture=x86_64, io.buildah.version=1.41.4, GIT_CLEAN=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., name=rhceph, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container)
Dec 5 04:58:13 localhost podman[298441]: 2025-12-05 09:58:13.736429801 +0000 UTC m=+0.154262571 container attach 68e786b30848874f13c50f7268599fe7ec610470baa237ff0c2704c9062ace27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_shockley, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git)
Dec 5 04:58:13 localhost dreamy_shockley[298457]: 167 167
Dec 5 04:58:13 localhost systemd[1]: libpod-68e786b30848874f13c50f7268599fe7ec610470baa237ff0c2704c9062ace27.scope: Deactivated successfully.
Dec 5 04:58:13 localhost podman[298441]: 2025-12-05 09:58:13.73930183 +0000 UTC m=+0.157134500 container died 68e786b30848874f13c50f7268599fe7ec610470baa237ff0c2704c9062ace27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_shockley, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1763362218, io.buildah.version=1.41.4, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True)
Dec 5 04:58:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:58:13 localhost systemd[1]: var-lib-containers-storage-overlay-62c49bd904b97a4eb04d231756d5df592996dee8e8d19dcc680115750a958724-merged.mount: Deactivated successfully.
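The podman entries above show one container (68e786b3…, auto-named dreamy_shockley) go create → init → start → attach → died within roughly 80 ms, emitting only "167 167" before the remove event that follows below. That pattern reads as a deliberate one-shot helper run from the rhceph image (cephadm probing the image on this host), not a crashing service. A sketch that reconstructs such lifecycles from lines like these by grouping podman events per container ID; the regex is an assumption tuned to the format of these specific lines:

    import re
    from collections import defaultdict

    # Matches e.g. "podman[298441]: 2025-12-05 09:58:13.661599212 +0000 UTC
    # m=+0.079431882 container create 68e786b3..."; health_status and
    # exec_died events deliberately do not match.
    EVENT_RE = re.compile(
        r"podman\[\d+\]: (?P<ts>\S+ \S+) \+0000 UTC m=\+\S+ container "
        r"(?P<event>create|init|start|attach|died|remove) (?P<cid>[0-9a-f]{64})"
    )

    def lifecycles(lines):
        """Group podman container events by container ID, preserving log order."""
        events = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                events[m.group("cid")].append((m.group("ts"), m.group("event")))
        return events

Run over this stretch of the log, a short-lived helper shows up as the full create → init → start → attach → died → remove chain on a single ID, all within well under a second.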
Dec 5 04:58:13 localhost podman[298462]: 2025-12-05 09:58:13.863482962 +0000 UTC m=+0.112706779 container remove 68e786b30848874f13c50f7268599fe7ec610470baa237ff0c2704c9062ace27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_shockley, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, CEPH_POINT_RELEASE=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main)
Dec 5 04:58:13 localhost systemd[1]: libpod-conmon-68e786b30848874f13c50f7268599fe7ec610470baa237ff0c2704c9062ace27.scope: Deactivated successfully.
Dec 5 04:58:13 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 04:58:13 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 04:58:13 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546420 (monmap changed)...
Dec 5 04:58:13 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546420 (monmap changed)...
Dec 5 04:58:13 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 5 04:58:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:58:13 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:58:13 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:58:13 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 5 04:58:13 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 5 04:58:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:14 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 5 04:58:14 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:58:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:58:14 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 5 04:58:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:14 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:58:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:14 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:58:14 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:58:14 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 04:58:14 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 04:58:14 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 5 04:58:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 5 04:58:14 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 5 04:58:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 5 04:58:14 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:58:14 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:58:14 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 5 04:58:14 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 5 04:58:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 04:58:15 localhost ceph-mon[292820]: Reconfiguring crash.np0005546420 (monmap changed)...
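The mon_command bodies that the peon logs here are the wire form of ordinary ceph CLI invocations issued by the cephadm mgr module as it reconfigures daemons after the monmap change. A sketch of the equivalent CLI calls for two of the commands above, driven via subprocess; availability of the ceph client and its keyring on the host running this is assumed:

    import subprocess

    # CLI equivalent of the mon_command {"prefix": "auth get-or-create",
    # "entity": "client.crash.np0005546420.localdomain", "caps": [...]} above:
    # fetch-or-mint a keyring for the crash daemon with "profile crash" caps
    # on both mon and mgr.
    keyring = subprocess.run(
        ["ceph", "auth", "get-or-create", "client.crash.np0005546420.localdomain",
         "mon", "profile crash", "mgr", "profile crash"],
        check=True, capture_output=True, text=True,
    ).stdout

    # CLI equivalent of {"prefix": "config generate-minimal-conf"}: the minimal
    # ceph.conf that cephadm ships to each daemon it reconfigures.
    minimal_conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(minimal_conf)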
Dec 5 04:58:15 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 5 04:58:15 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:15 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 5 04:58:15 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:15 localhost nova_compute[280228]: 2025-12-05 09:58:15.157 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:58:15 localhost podman[298478]: 2025-12-05 09:58:15.21263823 +0000 UTC m=+0.095463067 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
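The health_status=healthy entry above is a podman healthcheck run: systemd starts the transient "/usr/bin/podman healthcheck run <id>" unit, podman executes the test command from the container's healthcheck config ('/openstack/healthcheck …' per the config_data above), and the exec_died event just below is that test process exiting. A sketch for polling the stored result via podman inspect; note the JSON field has appeared as State.Healthcheck in older podman releases and State.Health in newer ones, so both are tried here:

    import json
    import subprocess

    def health_status(container: str) -> str:
        """Return the most recently recorded healthcheck status for a container."""
        raw = subprocess.run(
            ["podman", "inspect", container],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(raw)[0]["State"]
        # Field name varies across podman versions.
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(health_status("openstack_network_exporter"))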
Dec 5 04:58:15 localhost podman[298478]: 2025-12-05 09:58:15.228637763 +0000 UTC m=+0.111462600 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container) Dec 5 04:58:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:58:15 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:58:15 localhost systemd[1]: tmp-crun.J0pMZa.mount: Deactivated successfully. 
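Each healthcheck run is a transient systemd unit, which gives the four-step pattern seen above: `Started /usr/bin/podman healthcheck run <id>`, a `health_status` event carrying the verdict, an `exec_died` event, then `Deactivated successfully`. A rough sketch for correlating container IDs with verdicts from journal text; the sample lines are abbreviated stand-ins for the entries above:

```python
import re

# Abbreviated stand-ins for the journal lines above ("..." marks elisions).
sample = """\
Dec 5 04:58:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d...
Dec 5 04:58:15 localhost podman[298478]: ... container health_status d2ba0be8d82d... (image=..., health_status=healthy, ...)
Dec 5 04:58:15 localhost systemd[1]: d2ba0be8d82d...service: Deactivated successfully.
"""

# health_status events carry the verdict as a health_status=<value> label.
EVENT = re.compile(r"container health_status ([0-9a-f]{12,64}).*?health_status=(\w+)")

for line in sample.splitlines():
    m = EVENT.search(line)
    if m:
        print(f"container {m.group(1)[:12]} -> {m.group(2)}")
```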
Dec 5 04:58:15 localhost podman[298498]: 2025-12-05 09:58:15.349971328 +0000 UTC m=+0.093388493 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 5 04:58:15 localhost podman[298498]: 2025-12-05 09:58:15.36558893 +0000 UTC m=+0.109006095 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd) Dec 5 04:58:15 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:58:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:58:15 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:58:16 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:58:16 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Dec 5 04:58:16 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... Dec 5 04:58:16 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Dec 5 04:58:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 5 04:58:16 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:58:16 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:58:16 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005546420.localdomain Dec 5 04:58:16 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005546420.localdomain Dec 5 04:58:16 localhost ceph-mon[292820]: Reconfiguring osd.1 (monmap changed)... 
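The recurring `pgmap vNN` DBG lines are the mgr's periodic cluster digest; in this window all 177 placement groups are active+clean with 41 GiB of 42 GiB available. A small parser sketch that flags any snapshot where some PGs are not active+clean, using the v36 line above:

```python
import re

line = ("Dec 5 04:58:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : "
        "pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, "
        "41 GiB / 42 GiB avail")

m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) active\+clean", line)
version, total, clean = map(int, m.groups())
# Any PGs in other states (degraded, peering, ...) would make total != clean.
assert total == clean, f"pgmap v{version}: {total - clean} pgs not active+clean"
print(f"pgmap v{version}: {clean}/{total} pgs active+clean")
```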
Dec 5 04:58:16 localhost ceph-mon[292820]: Reconfiguring daemon osd.1 on np0005546420.localdomain Dec 5 04:58:16 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:16 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:16 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 5 04:58:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.26994 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005546420.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch Dec 5 04:58:16 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Dec 5 04:58:17 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Dec 5 04:58:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:58:17 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:58:17 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:58:17 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005546420 on np0005546420.localdomain Dec 5 04:58:17 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005546420 on np0005546420.localdomain Dec 5 04:58:17 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:58:17 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:58:17 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)... Dec 5 04:58:17 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)... 
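The `client.admin` audit entry above is the orchestrator call that drives the rest of this sequence: a new mon placed on np0005546420, pinned to 172.18.0.104, after which cephadm persists the updated spec under `mgr/cephadm/spec.mon` (the `config-key set` that follows) and deploys `mon.np0005546420`. The CLI equivalent, as a sketch assuming an admin host:

```python
import subprocess

# Equivalent of the audited "orch daemon add" call: ask the orchestrator to
# place a new mon on np0005546420 at its storage-network address.
subprocess.run(
    ["ceph", "orch", "daemon", "add", "mon",
     "np0005546420.localdomain:172.18.0.104"],
    check=True,
)
```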
Dec 5 04:58:17 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Dec 5 04:58:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:58:17 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:58:17 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:58:17 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain Dec 5 04:58:17 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain Dec 5 04:58:17 localhost ceph-mon[292820]: Reconfiguring osd.4 (monmap changed)... Dec 5 04:58:17 localhost ceph-mon[292820]: Reconfiguring daemon osd.4 on np0005546420.localdomain Dec 5 04:58:17 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:58:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:17 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:58:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:17 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:58:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:58:18 localhost nova_compute[280228]: 2025-12-05 09:58:18.077 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:58:18 localhost ceph-mon[292820]: Deploying daemon mon.np0005546420 on np0005546420.localdomain Dec 5 04:58:18 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)... 
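Audit lines carry the dispatched command as a `cmd={...}` JSON payload, with `caps` flattened into alternating daemon/capspec pairs. A sketch that recovers a structured view from one of the mds entries above; the line is re-typed here rather than read from the journal:

```python
import json
import re

audit = ("log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' "
         "entity='mgr.np0005546419.zhsnqq' "
         'cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", '
         '"caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} '
         ": dispatch")

# The cmd={...} payload is JSON; caps alternate daemon/capspec, so pair them up.
payload = json.loads(re.search(r"cmd=(\{.*\}) : dispatch", audit).group(1))
caps = dict(zip(payload["caps"][::2], payload["caps"][1::2]))
print(payload["entity"], caps)
# -> mds.mds.np0005546420.eqhasr {'mon': 'profile mds',
#    'osd': 'allow rw tag cephfs *=*', 'mds': 'allow'}
```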
Dec 5 04:58:18 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain Dec 5 04:58:19 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:58:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:58:19 localhost podman[239519]: time="2025-12-05T09:58:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:58:19 localhost podman[239519]: @ - - [05/Dec/2025:09:58:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1" Dec 5 04:58:19 localhost podman[239519]: @ - - [05/Dec/2025:09:58:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18727 "" "Go-http-client/1.1" Dec 5 04:58:19 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:58:19 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:58:19 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:19 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:20 localhost nova_compute[280228]: 2025-12-05 09:58:20.160 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:58:20 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:58:20 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:58:20 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546420.aoeylc (monmap changed)... Dec 5 04:58:20 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546420.aoeylc (monmap changed)... 
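The `GET /v4.9.3/libpod/containers/json?...` entries above are a client polling the podman service's REST API over its unix socket. A stdlib-only sketch of the same query; the socket path is the default rootful location and an assumption for this host, and the field names follow the libpod list-containers schema:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a unix socket (the podman API the log lines above are hitting)."""
    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

# Assumed default rootful socket; rootless podman uses a per-user runtime dir.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
for ctr in json.loads(conn.getresponse().read()):
    print(ctr["Id"][:12], ctr["State"], ",".join(ctr["Names"]))
```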
Dec 5 04:58:20 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 5 04:58:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:58:20 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 5 04:58:20 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 5 04:58:20 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:58:20 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:58:20 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 5 04:58:20 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 5 04:58:20 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 5 04:58:20 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 5 04:58:20 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 5 04:58:20 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 5 04:58:20 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 5 04:58:20 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:20 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:20 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:20 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:58:20 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:20 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:58:21 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 04:58:21 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 04:58:21 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005546421 (monmap changed)...
Dec 5 04:58:21 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005546421 (monmap changed)...
Dec 5 04:58:21 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 5 04:58:21 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:58:21 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:58:21 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:58:21 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 5 04:58:21 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 5 04:58:21 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 5 04:58:21 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 5 04:58:21 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 5 04:58:21 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 5 04:58:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:58:22 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
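The repeating `mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory` and `handle_open ignoring open ... not ready for session` messages are expected while the freshly deployed mon is still starting and has not yet registered with the cluster; they should stop once it joins quorum. A polling sketch, assuming the `ceph` CLI and admin credentials on the host:

```python
import json
import subprocess
import time

def mon_metadata(mon_id: str):
    # Returns None while the mon has not registered metadata yet, which is
    # exactly what produces the repeating ENOENT messages in the mgr log.
    proc = subprocess.run(["ceph", "mon", "metadata", mon_id],
                          capture_output=True, text=True)
    return json.loads(proc.stdout) if proc.returncode == 0 else None

for _ in range(30):                      # roughly a minute of polling
    md = mon_metadata("np0005546420")
    if md is not None:
        print("mon registered:", md.get("addrs"))
        break
    time.sleep(2)
else:
    raise SystemExit("mon.np0005546420 never returned metadata")
```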
Dec 5 04:58:22 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain Dec 5 04:58:22 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:22 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:58:22 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:22 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:58:22 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 04:58:22 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 04:58:22 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:22 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:22 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:22 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:22 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:22 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Dec 5 04:58:22 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... Dec 5 04:58:22 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Dec 5 04:58:22 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 5 04:58:22 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:58:22 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:58:22 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005546421.localdomain Dec 5 04:58:22 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005546421.localdomain Dec 5 04:58:23 localhost ceph-mon[292820]: Reconfiguring crash.np0005546421 (monmap changed)... 
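A pattern worth noting in the audit channel: read-only dispatches (`config generate-minimal-conf`, `mon metadata`, `mgr services`) are logged at [DBG], while key-material reads and mutations (`auth get`, `auth get-or-create`) are logged at [INF]. A small tally sketch over re-typed audit lines:

```python
import re
from collections import Counter

lines = [
    'log_channel(audit) log [INF] : from=\'mgr.26485 172.18.0.106:0/1839705183\' '
    'entity=\'mgr.np0005546419.zhsnqq\' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch',
    'log_channel(audit) log [DBG] : from=\'mgr.26485 172.18.0.106:0/1839705183\' '
    'entity=\'mgr.np0005546419.zhsnqq\' cmd={"prefix": "config generate-minimal-conf"} : dispatch',
]

# Capture the audit level and the command prefix from each line.
AUDIT = re.compile(r'log \[(\w+)\].*?"prefix": "([^"]+)"')
tally = Counter(AUDIT.search(l).groups() for l in lines)
for (level, prefix), n in tally.items():
    print(f"{level:3} {prefix:30} x{n}")
```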
Dec 5 04:58:23 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain Dec 5 04:58:23 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:23 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 5 04:58:23 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:23 localhost nova_compute[280228]: 2025-12-05 09:58:23.079 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:58:23 localhost podman[298519]: 2025-12-05 09:58:23.207667462 +0000 UTC m=+0.080136915 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 04:58:23 localhost podman[298519]: 2025-12-05 09:58:23.215615776 +0000 UTC m=+0.088085189 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 04:58:23 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
Dec 5 04:58:23 localhost podman[298520]: 2025-12-05 09:58:23.258003495 +0000 UTC m=+0.127836096 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 04:58:23 localhost podman[298520]: 2025-12-05 09:58:23.268641993 +0000 UTC m=+0.138474624 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 5 04:58:23 localhost systemd[1]: tmp-crun.GiLUDg.mount: Deactivated successfully.
Dec 5 04:58:23 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 04:58:23 localhost podman[298521]: 2025-12-05 09:58:23.289817567 +0000 UTC m=+0.155423068 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 5 04:58:23 localhost podman[298521]: 2025-12-05 09:58:23.323122155 +0000 UTC m=+0.188727706 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 5 04:58:23 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 04:58:23 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 5 04:58:23 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 5 04:58:23 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 5 04:58:23 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 5 04:58:23 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 04:58:23 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 04:58:23 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 5 04:58:23 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 5 04:58:23 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 5 04:58:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 5 04:58:23 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:58:23 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:58:23 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 5 04:58:23 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 5 04:58:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:58:24 localhost ceph-mon[292820]: Reconfiguring osd.2 (monmap changed)...
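Each `Reconfiguring osd.N (monmap changed)...` pass above reduces to two read-only mon commands: fetch the daemon's keyring and regenerate the minimal conf naming the changed mon set, which cephadm then pushes into the daemon's data directory. A sketch of the same two calls for osd.5, assuming admin credentials on the host:

```python
import subprocess

# The two commands cephadm dispatches for every osd reconfigure above.
keyring = subprocess.run(["ceph", "auth", "get", "osd.5"],
                         check=True, capture_output=True, text=True).stdout
minimal_conf = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                              check=True, capture_output=True, text=True).stdout
# The minimal conf is a short [global] stanza (fsid, mon_host) that lets the
# daemon find the new monitor set.
print(minimal_conf)
```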
Dec 5 04:58:24 localhost ceph-mon[292820]: Reconfiguring daemon osd.2 on np0005546421.localdomain Dec 5 04:58:24 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:24 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 5 04:58:24 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:24 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:24 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:24 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:24 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:24 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:24 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:24 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 04:58:24 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 04:58:24 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:58:24 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)... Dec 5 04:58:24 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)... 
Dec 5 04:58:24 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 5 04:58:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:58:24 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:58:24 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:58:24 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 5 04:58:24 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 5 04:58:25 localhost ceph-mon[292820]: Reconfiguring osd.5 (monmap changed)...
Dec 5 04:58:25 localhost ceph-mon[292820]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 5 04:58:25 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:25 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:25 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:58:25 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:58:25 localhost nova_compute[280228]: 2025-12-05 09:58:25.163 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:58:25 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 5 04:58:25 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 5 04:58:25 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 5 04:58:25 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 5 04:58:25 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 04:58:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:58:25 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 04:58:25 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 5 04:58:25 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
Dec 5 04:58:25 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 5 04:58:25 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:58:25 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 5 04:58:25 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr services"} : dispatch
Dec 5 04:58:25 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 04:58:25 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 04:58:25 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 5 04:58:25 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 5 04:58:26 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 5 04:58:26 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain Dec 5 04:58:26 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:26 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:26 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:58:26 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:58:26 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:26 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:26 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:26 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:26 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:26 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 04:58:26 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 04:58:27 localhost openstack_network_exporter[241668]: ERROR 09:58:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:58:27 localhost openstack_network_exporter[241668]: ERROR 09:58:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:58:27 localhost openstack_network_exporter[241668]: ERROR 09:58:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:58:27 localhost openstack_network_exporter[241668]: ERROR 09:58:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:58:27 localhost openstack_network_exporter[241668]: Dec 5 04:58:27 localhost openstack_network_exporter[241668]: ERROR 09:58:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:58:27 localhost openstack_network_exporter[241668]: Dec 5 04:58:27 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)... 
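The openstack_network_exporter errors above mean the exporter found no appctl control sockets for ovn-northd or a local ovsdb-server; on a compute node that runs only ovn-controller this is expected noise rather than a failure. A quick check sketch; the socket paths are common defaults and an assumption for this deployment:

```python
import glob

# The exporter looks for *.ctl control sockets like these; absence of the
# ovn-northd socket is normal on a node that does not run ovn-northd.
for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                "/var/run/openvswitch/ovsdb-server.*.ctl"):
    hits = glob.glob(pattern)
    print(f"{pattern}: {hits if hits else 'no control socket'}")
```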
Dec 5 04:58:27 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain Dec 5 04:58:27 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:27 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:27 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:27 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:27 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:27 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:58:28 localhost nova_compute[280228]: 2025-12-05 09:58:28.081 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:58:28 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:28 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:28 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:28 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:28 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:28 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:28 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:58:28 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:58:29 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:29 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:29 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 04:58:29 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 04:58:29 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:29 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) 
Dec 5 04:58:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:58:29 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 5 04:58:29 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 5 04:58:29 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 5 04:58:29 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 04:58:29 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev a2d90a6c-c2ab-4160-8820-8cb1ba5a01f7 (Updating node-proxy deployment (+4 -> 4))
Dec 5 04:58:29 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev a2d90a6c-c2ab-4160-8820-8cb1ba5a01f7 (Updating node-proxy deployment (+4 -> 4))
Dec 5 04:58:29 localhost ceph-mgr[286454]: [progress INFO root] Completed event a2d90a6c-c2ab-4160-8820-8cb1ba5a01f7 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 5 04:58:29 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 04:58:29 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 04:58:29 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:58:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail
Dec 5 04:58:29 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events
Dec 5 04:58:29 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 04:58:30 localhost nova_compute[280228]: 2025-12-05 09:58:30.170 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:58:30 localhost ceph-mon[292820]: from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:58:30 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:30 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq'
Dec 5 04:58:30 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect)
Dec 5 04:58:30 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 5 04:58:30 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 5 04:58:30 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory
Dec 5 04:58:30 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:30.920829) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928710920898, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2471, "num_deletes": 258, "total_data_size": 4499239, "memory_usage": 4568176, "flush_reason": "Manual Compaction"}
Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928710938240, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 2527595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12503, "largest_seqno": 14969, "table_properties": {"data_size": 2517500, "index_size": 6088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 25917, "raw_average_key_size": 22, "raw_value_size": 2495220, "raw_average_value_size": 2125, "num_data_blocks": 267, "num_entries": 1174, "num_filter_entries": 1174, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928652, "oldest_key_time": 1764928652, "file_creation_time": 1764928710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 17512 microseconds, and 6833 cpu microseconds.
Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:30.938345) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 2527595 bytes OK Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:30.938377) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:30.940076) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:30.940098) EVENT_LOG_v1 {"time_micros": 1764928710940091, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:30.940123) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 4487240, prev total WAL file size 4487564, number of live WAL files 2. Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:30.941157) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353131' seq:72057594037927935, type:22 .. '6C6F676D0033373634' seq:0, type:0; will stop at (end) Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(2468KB)], [18(15MB)] Dec 5 04:58:30 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928710941204, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18289933, "oldest_snapshot_seqno": -1} Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 10638 keys, 18140518 bytes, temperature: kUnknown Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928711271319, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 18140518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18077489, "index_size": 35344, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26629, "raw_key_size": 284784, "raw_average_key_size": 26, "raw_value_size": 17893517, "raw_average_value_size": 1682, "num_data_blocks": 1356, "num_entries": 10638, "num_filter_entries": 10638, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764928710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:31.272344) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 18140518 bytes Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:31.319854) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 55.4 rd, 54.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 15.0 +0.0 blob) out(17.3 +0.0 blob), read-write-amplify(14.4) write-amplify(7.2) OK, records in: 11191, records dropped: 553 output_compression: NoCompression Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:31.319909) EVENT_LOG_v1 {"time_micros": 1764928711319887, "job": 8, "event": "compaction_finished", "compaction_time_micros": 330214, "compaction_time_cpu_micros": 37644, "output_level": 6, "num_output_files": 1, "total_output_size": 18140518, "num_input_records": 11191, "num_output_records": 10638, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928711320993, "job": 8, "event": "table_file_deletion", "file_number": 20} Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928711323661, "job": 8, "event": "table_file_deletion", "file_number": 18} Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:30.941089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:31.323811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:31.323821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:31.323825) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:31.323829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:58:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:31.323833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:58:31 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:31 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:31 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:31 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:58:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:58:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:58:32 localhost podman[298663]: 2025-12-05 09:58:32.202400056 +0000 UTC m=+0.077209693 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:58:32 localhost podman[298663]: 2025-12-05 09:58:32.236787948 +0000 UTC m=+0.111597595 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 04:58:32 localhost systemd[1]: tmp-crun.vBSvQR.mount: Deactivated successfully. Dec 5 04:58:32 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:58:32 localhost podman[298662]: 2025-12-05 09:58:32.266095322 +0000 UTC m=+0.144065547 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 04:58:32 localhost podman[298662]: 2025-12-05 09:58:32.300854525 +0000 UTC m=+0.178824700 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125) Dec 5 04:58:32 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:58:32 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:32 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:32 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:32 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:32 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:32 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:33 localhost nova_compute[280228]: 2025-12-05 09:58:33.082 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:58:33 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:33 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:33 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:33 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:58:34 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:34 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:34 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:34 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:34 localhost ceph-mon[292820]: 
mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:34 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:58:35 localhost nova_compute[280228]: 2025-12-05 09:58:35.170 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:58:35 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:35 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:35 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:35 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:58:36 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:36 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:36 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:36 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:36 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:37 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:37 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:37 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:37 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:58:38 localhost nova_compute[280228]: 2025-12-05 09:58:38.086 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:58:38 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:38 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:38 localhost 
ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:38 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:38 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:38 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:39 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.27006 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch Dec 5 04:58:39 localhost ceph-mgr[286454]: [cephadm INFO root] Reconfig service osd.default_drive_group Dec 5 04:58:39 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 04:58:39 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:39 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:39 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 04:58:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:39 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 04:58:39 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:58:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 540 MiB used, 41 GiB / 42 GiB avail Dec 5 04:58:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 04:58:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 04:58:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 04:58:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Dec 5 04:58:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 5 04:58:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 5 04:58:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Dec 5 04:58:39 localhost ceph-mgr[286454]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 5 04:58:40 localhost nova_compute[280228]: 2025-12-05 09:58:40.176 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:58:40 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mon.np0005546420 172.18.0.107:0/2212586883; not ready for session (expect reconnect) Dec 5 04:58:40 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0) Dec 5 04:58:40 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.26485 172.18.0.106:0/1839705183' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch Dec 5 04:58:40 localhost ceph-mgr[286454]: mgr finish mon failed to return metadata for mon.np0005546420: (2) No such file or directory Dec 5 04:58:40 localhost ceph-mon[292820]: Reconfig service osd.default_drive_group Dec 5 04:58:40 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:40 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:40 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:40 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:40 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. 
Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.643772) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928720643855, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 532, "num_deletes": 252, "total_data_size": 1657523, "memory_usage": 1667568, "flush_reason": "Manual Compaction"} Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928720653741, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 1073438, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14974, "largest_seqno": 15501, "table_properties": {"data_size": 1070507, "index_size": 914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7627, "raw_average_key_size": 20, "raw_value_size": 1064407, "raw_average_value_size": 2892, "num_data_blocks": 36, "num_entries": 368, "num_filter_entries": 368, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928710, "oldest_key_time": 1764928710, "file_creation_time": 1764928720, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 10018 microseconds, and 5036 cpu microseconds. Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.653798) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 1073438 bytes OK Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.653826) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.655391) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.655417) EVENT_LOG_v1 {"time_micros": 1764928720655409, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.655447) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 1654315, prev total WAL file size 1654315, number of live WAL files 2. Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.656233) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end) Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(1048KB)], [21(17MB)] Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928720656320, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 19213956, "oldest_snapshot_seqno": -1} Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10479 keys, 16079779 bytes, temperature: kUnknown Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928720763063, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 16079779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16019950, "index_size": 32528, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26245, "raw_key_size": 281949, "raw_average_key_size": 26, "raw_value_size": 15840837, "raw_average_value_size": 1511, "num_data_blocks": 1234, "num_entries": 10479, "num_filter_entries": 10479, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764928720, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.763479) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 16079779 bytes Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.765199) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.8 rd, 150.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 17.3 +0.0 blob) out(15.3 +0.0 blob), read-write-amplify(32.9) write-amplify(15.0) OK, records in: 11006, records dropped: 527 output_compression: NoCompression Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.765231) EVENT_LOG_v1 {"time_micros": 1764928720765217, "job": 10, "event": "compaction_finished", "compaction_time_micros": 106863, "compaction_time_cpu_micros": 45302, "output_level": 6, "num_output_files": 1, "total_output_size": 16079779, "num_input_records": 11006, "num_output_records": 10479, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928720765612, "job": 10, "event": "table_file_deletion", "file_number": 23} Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928720768358, "job": 10, "event": "table_file_deletion", "file_number": 21} Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.656108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.768419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.768427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.768430) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.768434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:58:40 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:58:40.768437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:58:41 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:58:41 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:58:41 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e89 e89: 6 total, 6 up, 6 in Dec 5 04:58:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:41.295+0000 7f7d4fe78640 -1 mgr handle_mgr_map I was active but no longer am Dec 5 04:58:41 localhost ceph-mgr[286454]: mgr handle_mgr_map I was active but no longer am Dec 5 04:58:41 localhost ceph-mgr[286454]: mgr respawn e: '/usr/bin/ceph-mgr' Dec 5 04:58:41 localhost systemd[1]: session-69.scope: Deactivated successfully. Dec 5 04:58:41 localhost systemd[1]: session-69.scope: Consumed 17.735s CPU time. Dec 5 04:58:41 localhost systemd-logind[760]: Session 69 logged out. Waiting for processes to exit. Dec 5 04:58:41 localhost systemd-logind[760]: Removed session 69. Dec 5 04:58:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: ignoring --setuser ceph since I am not root Dec 5 04:58:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: ignoring --setgroup ceph since I am not root Dec 5 04:58:41 localhost ceph-mgr[286454]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Dec 5 04:58:41 localhost ceph-mgr[286454]: pidfile_write: ignore empty --pid-file Dec 5 04:58:41 localhost ceph-mgr[286454]: mgr[py] Loading python module 'alerts' Dec 5 04:58:41 localhost ceph-mgr[286454]: mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 5 04:58:41 localhost ceph-mgr[286454]: mgr[py] Loading python module 'balancer' Dec 5 04:58:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:41.493+0000 7f9a0179e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 5 04:58:41 localhost ceph-mgr[286454]: mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 5 04:58:41 localhost ceph-mgr[286454]: mgr[py] Loading python module 'cephadm' Dec 5 04:58:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:41.563+0000 7f9a0179e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 5 04:58:41 localhost sshd[298732]: main: sshd: ssh-rsa algorithm is disabled Dec 5 04:58:41 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:41 localhost ceph-mon[292820]: from='client.? 172.18.0.200:0/2749176520' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 5 04:58:41 localhost ceph-mon[292820]: Activating manager daemon np0005546420.aoeylc Dec 5 04:58:41 localhost ceph-mon[292820]: from='client.? 
172.18.0.200:0/2749176520' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 5 04:58:41 localhost ceph-mon[292820]: from='mgr.26485 ' entity='mgr.np0005546419.zhsnqq' Dec 5 04:58:41 localhost ceph-mon[292820]: Manager daemon np0005546420.aoeylc is now available Dec 5 04:58:41 localhost ceph-mon[292820]: removing stray HostCache host record np0005546416.localdomain.devices.0 Dec 5 04:58:41 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain.devices.0"} : dispatch Dec 5 04:58:41 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain.devices.0"}]': finished Dec 5 04:58:41 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain.devices.0"} : dispatch Dec 5 04:58:41 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546416.localdomain.devices.0"}]': finished Dec 5 04:58:41 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546420.aoeylc/mirror_snapshot_schedule"} : dispatch Dec 5 04:58:41 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546420.aoeylc/trash_purge_schedule"} : dispatch Dec 5 04:58:41 localhost systemd-logind[760]: New session 70 of user ceph-admin. Dec 5 04:58:41 localhost systemd[1]: Started Session 70 of User ceph-admin. 
Dec 5 04:58:42 localhost ceph-mgr[286454]: mgr[py] Loading python module 'crash' Dec 5 04:58:42 localhost ceph-mgr[286454]: mgr[py] Module crash has missing NOTIFY_TYPES member Dec 5 04:58:42 localhost ceph-mgr[286454]: mgr[py] Loading python module 'dashboard' Dec 5 04:58:42 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:42.216+0000 7f9a0179e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Dec 5 04:58:42 localhost podman[298848]: 2025-12-05 09:58:42.632336195 +0000 UTC m=+0.064385718 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 5 04:58:42 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:42 localhost ceph-mgr[286454]: mgr[py] Loading python module 'devicehealth' Dec 5 04:58:42 localhost podman[298848]: 2025-12-05 09:58:42.763011188 +0000 UTC m=+0.195060761 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.41.4, RELEASE=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.component=rhceph-container) Dec 5 04:58:42 localhost ceph-mgr[286454]: mgr[py] Module devicehealth has missing 
NOTIFY_TYPES member
Dec 5 04:58:42 localhost ceph-mgr[286454]: mgr[py] Loading python module 'diskprediction_local'
Dec 5 04:58:42 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:42.792+0000 7f9a0179e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 5 04:58:42 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 5 04:58:42 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 5 04:58:42 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: from numpy import show_config as show_numpy_config
Dec 5 04:58:42 localhost ceph-mgr[286454]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 5 04:58:42 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:42.939+0000 7f9a0179e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 5 04:58:42 localhost ceph-mgr[286454]: mgr[py] Loading python module 'influx'
Dec 5 04:58:43 localhost ceph-mgr[286454]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 5 04:58:43 localhost ceph-mgr[286454]: mgr[py] Loading python module 'insights'
Dec 5 04:58:43 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:43.000+0000 7f9a0179e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 5 04:58:43 localhost ceph-mgr[286454]: mgr[py] Loading python module 'iostat'
Dec 5 04:58:43 localhost nova_compute[280228]: 2025-12-05 09:58:43.090 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:58:43 localhost ceph-mgr[286454]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 5 04:58:43 localhost ceph-mgr[286454]: mgr[py] Loading python module 'k8sevents'
Dec 5 04:58:43 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:43.123+0000 7f9a0179e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 5 04:58:43 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:43 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:43 localhost ceph-mgr[286454]: mgr[py] Loading python module 'localpool'
Dec 5 04:58:43 localhost ceph-mgr[286454]: mgr[py] Loading python module 'mds_autoscaler'
Dec 5 04:58:43 localhost ceph-mgr[286454]: mgr[py] Loading python module 'mirroring'
Dec 5 04:58:43 localhost ceph-mgr[286454]: mgr[py] Loading python module 'nfs'
Dec 5 04:58:43 localhost ceph-mgr[286454]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 5 04:58:43 localhost ceph-mgr[286454]: mgr[py] Loading python module 'orchestrator'
Dec 5 04:58:43 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:43.897+0000 7f9a0179e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Loading python module 'osd_perf_query'
Dec 5 04:58:44 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:44.043+0000 7f9a0179e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Loading python module 'osd_support'
Dec 5 04:58:44 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:44.107+0000 7f9a0179e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Loading python module 'pg_autoscaler'
Dec 5 04:58:44 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:44.163+0000 7f9a0179e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:44.229+0000 7f9a0179e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Loading python module 'progress'
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Loading python module 'prometheus'
Dec 5 04:58:44 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:44.290+0000 7f9a0179e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mon[292820]: [05/Dec/2025:09:58:42] ENGINE Bus STARTING
Dec 5 04:58:44 localhost ceph-mon[292820]: [05/Dec/2025:09:58:42] ENGINE Serving on http://172.18.0.107:8765
Dec 5 04:58:44 localhost ceph-mon[292820]: [05/Dec/2025:09:58:42] ENGINE Serving on https://172.18.0.107:7150
Dec 5 04:58:44 localhost ceph-mon[292820]: [05/Dec/2025:09:58:42] ENGINE Bus STARTED
Dec 5 04:58:44 localhost ceph-mon[292820]: [05/Dec/2025:09:58:42] ENGINE Client ('172.18.0.107', 59856) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 5 04:58:44 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:44 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:44 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:44 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:44 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:44 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Loading python module 'rbd_support'
Dec 5 04:58:44 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:44.587+0000 7f9a0179e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Loading python module 'restful'
Dec 5 04:58:44 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:44.668+0000 7f9a0179e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 5 04:58:44 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Loading python module 'rgw'
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 5 04:58:44 localhost ceph-mgr[286454]: mgr[py] Loading python module 'rook'
Dec 5 04:58:44 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:44.991+0000 7f9a0179e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 5 04:58:45 localhost nova_compute[280228]: 2025-12-05 09:58:45.192 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:58:45 localhost ceph-mgr[286454]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 5 04:58:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:45.420+0000 7f9a0179e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 5 04:58:45 localhost ceph-mgr[286454]: mgr[py] Loading python module 'selftest'
Dec 5 04:58:45 localhost ceph-mgr[286454]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 5 04:58:45 localhost ceph-mgr[286454]: mgr[py] Loading python module 'snap_schedule'
Dec 5 04:58:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:45.482+0000 7f9a0179e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 5 04:58:45 localhost ceph-mgr[286454]: mgr[py] Loading python module 'stats'
Dec 5 04:58:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 04:58:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
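The repeated "has missing NOTIFY_TYPES member" lines above come from the mgr module loader: recent Ceph releases expect each Python mgr module to declare which cluster notifications it consumes, and modules that predate that convention trigger this warning once at load time; it is noisy but does not stop the module from loading here. The following is a minimal sketch of that convention only, assuming the upstream mgr_module interface and its NotifyType enum; the Example class and its body are hypothetical and not taken from any module in this log:

    # Hypothetical ceph-mgr module sketch; assumes mgr_module.py exposes
    # MgrModule and the NotifyType enum, per the upstream convention.
    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring the notification types the module handles is what the
        # loader looks for; a module without this attribute produces the
        # "Module ... has missing NOTIFY_TYPES member" line seen above.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            # Only the declared types are expected to be delivered here.
            if notify_type == NotifyType.osd_map:
                self.log.debug("osdmap notification: %s", notify_id)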
Dec 5 04:58:45 localhost ceph-mgr[286454]: mgr[py] Loading python module 'status'
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd/host:np0005546418", "name": "osd_memory_target"} : dispatch
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:45 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 5 04:58:45 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:45 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:45 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:45 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:45 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 5 04:58:45 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:45 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
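The osd_memory_target failures above are internally consistent: the autotuner arrives at 877246668 bytes per OSD, which is the "836.6M" it reports, but that value sits below the option's hard floor of 939524096 bytes (exactly 896 MiB), so the set is rejected and the per-OSD overrides are removed instead (the "config rm ... osd_memory_target" dispatches). A quick check of the quoted numbers, as a small Python sketch:

    # Recomputing the values quoted in the osd_memory_target messages.
    proposed = 877_246_668   # bytes the autotuner tried to apply
    minimum = 939_524_096    # floor quoted in the error message
    print(round(proposed / 2**20, 1))   # 836.6 -> the "836.6M" in the log
    print(minimum / 2**20)              # 896.0 MiB, the hard minimum
    print(proposed < minimum)           # True -> "below minimum"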
Dec 5 04:58:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:58:45 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:45 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:58:45 localhost ceph-mgr[286454]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 5 04:58:45 localhost ceph-mgr[286454]: mgr[py] Loading python module 'telegraf'
Dec 5 04:58:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:45.672+0000 7f9a0179e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 5 04:58:45 localhost podman[299110]: 2025-12-05 09:58:45.712413473 +0000 UTC m=+0.110125969 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 5 04:58:45 localhost podman[299110]: 2025-12-05 09:58:45.726295152 +0000 UTC m=+0.124007598 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 5 04:58:45 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 04:58:45 localhost ceph-mgr[286454]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 5 04:58:45 localhost ceph-mgr[286454]: mgr[py] Loading python module 'telemetry'
Dec 5 04:58:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:45.739+0000 7f9a0179e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 5 04:58:45 localhost ceph-mgr[286454]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 5 04:58:45 localhost ceph-mgr[286454]: mgr[py] Loading python module 'test_orchestrator'
Dec 5 04:58:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:45.876+0000 7f9a0179e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 5 04:58:45 localhost podman[299108]: 2025-12-05 09:58:45.909781014 +0000 UTC m=+0.305738687 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 5 04:58:45 localhost podman[299108]: 2025-12-05 09:58:45.981805477 +0000 UTC m=+0.377763170 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 04:58:46 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 04:58:46 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:46.029+0000 7f9a0179e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 5 04:58:46 localhost ceph-mgr[286454]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 5 04:58:46 localhost ceph-mgr[286454]: mgr[py] Loading python module 'volumes'
Dec 5 04:58:46 localhost ceph-mgr[286454]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 5 04:58:46 localhost ceph-mgr[286454]: mgr[py] Loading python module 'zabbix'
Dec 5 04:58:46 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:46.225+0000 7f9a0179e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 5 04:58:46 localhost ceph-mgr[286454]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 5 04:58:46 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T09:58:46.287+0000 7f9a0179e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 5 04:58:46 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x5574796f51e0 mon_map magic: 0 from mon.2 v2:172.18.0.103:3300/0
Dec 5 04:58:46 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.107:6810/1562955515
Dec 5 04:58:46 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:46 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:46 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:46 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:58:46 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 5 04:58:46 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 5 04:58:47 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:58:47 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:58:47 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:58:47 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 04:58:47 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:58:47 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:58:48 localhost nova_compute[280228]: 2025-12-05 09:58:48.092 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:58:48 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:58:48 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 04:58:48 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:48 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:48 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:48 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:48 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:48 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:48 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:48 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:48 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:48 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 5 04:58:48 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 5 04:58:49 localhost podman[299840]:
Dec 5 04:58:49 localhost podman[299840]: 2025-12-05 09:58:49.385002087 +0000 UTC m=+0.078839804 container create ed9f686730a41f8cbde81f876600a1b60cedf79ddca78b5a3942718e483348dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_jepsen, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, vcs-type=git, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph)
Dec 5 04:58:49 localhost systemd[1]: Started libpod-conmon-ed9f686730a41f8cbde81f876600a1b60cedf79ddca78b5a3942718e483348dd.scope.
Dec 5 04:58:49 localhost podman[299840]: 2025-12-05 09:58:49.352374919 +0000 UTC m=+0.046212676 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:58:49 localhost systemd[1]: Started libcrun container.
Dec 5 04:58:49 localhost podman[299840]: 2025-12-05 09:58:49.482047732 +0000 UTC m=+0.175885439 container init ed9f686730a41f8cbde81f876600a1b60cedf79ddca78b5a3942718e483348dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_wilson, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, release=1763362218)
Dec 5 04:58:49 localhost podman[299840]: 2025-12-05 09:58:49.494201827 +0000 UTC m=+0.188039534 container start ed9f686730a41f8cbde81f876600a1b60cedf79ddca78b5a3942718e483348dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_jepsen, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, release=1763362218, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container)
Dec 5 04:58:49 localhost podman[299840]: 2025-12-05 09:58:49.49464912 +0000 UTC m=+0.188486878 container attach ed9f686730a41f8cbde81f876600a1b60cedf79ddca78b5a3942718e483348dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_jepsen, GIT_BRANCH=main, ceph=True, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_CLEAN=True, version=7, distribution-scope=public, vendor=Red Hat, Inc., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:58:49 localhost systemd[1]: libpod-ed9f686730a41f8cbde81f876600a1b60cedf79ddca78b5a3942718e483348dd.scope: Deactivated successfully.
Dec 5 04:58:49 localhost flamboyant_jepsen[299855]: 167 167
Dec 5 04:58:49 localhost podman[299840]: 2025-12-05 09:58:49.499875242 +0000 UTC m=+0.193712989 container died ed9f686730a41f8cbde81f876600a1b60cedf79ddca78b5a3942718e483348dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_jepsen, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.buildah.version=1.41.4, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., name=rhceph, release=1763362218, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git)
Dec 5 04:58:49 localhost podman[299860]: 2025-12-05 09:58:49.611090215 +0000 UTC m=+0.098591795 container remove ed9f686730a41f8cbde81f876600a1b60cedf79ddca78b5a3942718e483348dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_jepsen, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , release=1763362218, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, version=7)
Dec 5 04:58:49 localhost systemd[1]: libpod-conmon-ed9f686730a41f8cbde81f876600a1b60cedf79ddca78b5a3942718e483348dd.scope: Deactivated successfully.
Dec 5 04:58:49 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 5 04:58:49 localhost ceph-mon[292820]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 5 04:58:49 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:58:49 localhost podman[239519]: time="2025-12-05T09:58:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 04:58:49 localhost podman[239519]: @ - - [05/Dec/2025:09:58:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1"
Dec 5 04:58:49 localhost podman[239519]: @ - - [05/Dec/2025:09:58:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18730 "" "Go-http-client/1.1"
Dec 5 04:58:50 localhost nova_compute[280228]: 2025-12-05 09:58:50.246 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:58:50 localhost systemd[1]: var-lib-containers-storage-overlay-f1e967dc54186c45ca5e129aa35fcd871ab47f9bb23c2b489b2c30594d15f594-merged.mount: Deactivated successfully.
Dec 5 04:58:50 localhost podman[299937]:
Dec 5 04:58:50 localhost podman[299937]: 2025-12-05 09:58:50.451503671 +0000 UTC m=+0.076403589 container create 14c5f6af8a2c6d3a8a9a7f1049dbfdf50beb9cbcf9972e9882c9686e7f7019b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_curie, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1763362218, com.redhat.component=rhceph-container, name=rhceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z)
Dec 5 04:58:50 localhost systemd[1]: Started libpod-conmon-14c5f6af8a2c6d3a8a9a7f1049dbfdf50beb9cbcf9972e9882c9686e7f7019b4.scope.
Dec 5 04:58:50 localhost systemd[1]: Started libcrun container.
Dec 5 04:58:50 localhost podman[299937]: 2025-12-05 09:58:50.419346488 +0000 UTC m=+0.044246396 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:58:50 localhost podman[299937]: 2025-12-05 09:58:50.538871418 +0000 UTC m=+0.163771336 container init 14c5f6af8a2c6d3a8a9a7f1049dbfdf50beb9cbcf9972e9882c9686e7f7019b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_curie, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, io.buildah.version=1.41.4, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, name=rhceph, distribution-scope=public, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 5 04:58:50 localhost podman[299937]: 2025-12-05 09:58:50.549358632 +0000 UTC m=+0.174258560 container start 14c5f6af8a2c6d3a8a9a7f1049dbfdf50beb9cbcf9972e9882c9686e7f7019b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_curie, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, ceph=True, release=1763362218, vcs-type=git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64)
Dec 5 04:58:50 localhost podman[299937]: 2025-12-05 09:58:50.549778514 +0000 UTC m=+0.174678482 container attach 14c5f6af8a2c6d3a8a9a7f1049dbfdf50beb9cbcf9972e9882c9686e7f7019b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_curie, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, RELEASE=main, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True)
Dec 5 04:58:50 localhost goofy_curie[299953]: 167 167
Dec 5 04:58:50 localhost systemd[1]: libpod-14c5f6af8a2c6d3a8a9a7f1049dbfdf50beb9cbcf9972e9882c9686e7f7019b4.scope: Deactivated successfully.
Dec 5 04:58:50 localhost podman[299937]: 2025-12-05 09:58:50.553749816 +0000 UTC m=+0.178649784 container died 14c5f6af8a2c6d3a8a9a7f1049dbfdf50beb9cbcf9972e9882c9686e7f7019b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_curie, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux )
Dec 5 04:58:50 localhost podman[299958]: 2025-12-05 09:58:50.65431898 +0000 UTC m=+0.091625599 container remove 14c5f6af8a2c6d3a8a9a7f1049dbfdf50beb9cbcf9972e9882c9686e7f7019b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_curie, build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, release=1763362218, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 04:58:50 localhost systemd[1]: libpod-conmon-14c5f6af8a2c6d3a8a9a7f1049dbfdf50beb9cbcf9972e9882c9686e7f7019b4.scope: Deactivated successfully.
Dec 5 04:58:50 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:50 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:50 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:50 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:50 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 5 04:58:50 localhost ceph-mon[292820]: Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 5 04:58:50 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 5 04:58:51 localhost systemd[1]: var-lib-containers-storage-overlay-bef78f49071716bc5c8d52ac7b97ae4db7ab8139754f0eb2bced6228b6043946-merged.mount: Deactivated successfully.
Dec 5 04:58:51 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:51 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:51 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:51 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:51 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 5 04:58:51 localhost ceph-mon[292820]: Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 5 04:58:51 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:52 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:52 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:52 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:52 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:52 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 5 04:58:52 localhost ceph-mon[292820]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 5 04:58:52 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 5 04:58:52 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 5 04:58:53 localhost nova_compute[280228]: 2025-12-05 09:58:53.116 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:58:53 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:53 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:53 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:53 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc'
Dec 5 04:58:53 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 5 04:58:53 localhost ceph-mon[292820]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 5 04:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 04:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 04:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 04:58:54 localhost podman[299984]: 2025-12-05 09:58:54.232872231 +0000 UTC m=+0.103579377 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 5 04:58:54 localhost podman[299982]: 2025-12-05 09:58:54.279929244 +0000 UTC m=+0.151275430 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 04:58:54 localhost podman[299982]: 2025-12-05 09:58:54.293831663 +0000 UTC m=+0.165177869 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 04:58:54 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 04:58:54 localhost podman[299983]: 2025-12-05 09:58:54.377843246 +0000 UTC m=+0.248534402 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 5 04:58:54 localhost podman[299983]: 2025-12-05 09:58:54.429027895 +0000 UTC m=+0.299719041 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Dec 5 04:58:54 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 04:58:54 localhost podman[299984]: 2025-12-05 09:58:54.450978353 +0000 UTC m=+0.321685539 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 04:58:54 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 04:58:54 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:54 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:54 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:54 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:54 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 5 04:58:54 localhost ceph-mon[292820]: Reconfiguring daemon osd.5 on np0005546421.localdomain Dec 5 04:58:54 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:58:54 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:54 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:55 localhost nova_compute[280228]: 2025-12-05 09:58:55.249 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:58:56 localhost ceph-mon[292820]: Saving service mon spec with placement label:mon Dec 5 04:58:56 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:56 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:56 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:56 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:56 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:56 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:58:56 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:56 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:57 localhost openstack_network_exporter[241668]: ERROR 09:58:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:58:57 localhost openstack_network_exporter[241668]: ERROR 09:58:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:58:57 localhost openstack_network_exporter[241668]: ERROR 09:58:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:58:57 localhost openstack_network_exporter[241668]: ERROR 09:58:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:58:57 localhost openstack_network_exporter[241668]: Dec 5 04:58:57 localhost openstack_network_exporter[241668]: ERROR 09:58:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:58:57 localhost 
openstack_network_exporter[241668]: Dec 5 04:58:57 localhost ceph-mon[292820]: Reconfiguring mon.np0005546421 (monmap changed)... Dec 5 04:58:57 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain Dec 5 04:58:57 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:57 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:57 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 04:58:57 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:57 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:57 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:58:58 localhost nova_compute[280228]: 2025-12-05 09:58:58.119 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:58:58 localhost ceph-mon[292820]: Reconfiguring mon.np0005546418 (monmap changed)... Dec 5 04:58:58 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546418 on np0005546418.localdomain Dec 5 04:58:58 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:58 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:58 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:58:58 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Dec 5 04:58:58 localhost podman[300113]: Dec 5 04:58:58 localhost podman[300113]: 2025-12-05 09:58:58.958190044 +0000 UTC m=+0.109225152 container create 2b99fb4bae36c217cfd5383c7173a992dd7c970f0f594bb895247a28b0d808df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_wilson, release=1763362218, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_BRANCH=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 04:58:58 localhost systemd[293789]: Starting Mark boot as successful... 
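The cmd={...} : dispatch entries above are the mon's audit trail of mgr commands, and the payload is plain JSON, so it can be pulled straight out of a syslog extract. A small sketch under that assumption; the regex only matches the exact format shown here:

    import json
    import re

    # Matches audit payloads like cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
    CMD = re.compile(r"cmd=(\{.*?\}) : dispatch")

    def dispatched(lines):
        for line in lines:
            for raw in CMD.findall(line):
                yield json.loads(raw)

    # Counting the results shows the mgr repeatedly re-fetching keyrings
    # ("auth get" for osd.5, mon., client.admin) while daemons are reconfigured.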
Dec 5 04:58:58 localhost podman[300113]: 2025-12-05 09:58:58.885545772 +0000 UTC m=+0.036581290 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:58:59 localhost systemd[1]: Started libpod-conmon-2b99fb4bae36c217cfd5383c7173a992dd7c970f0f594bb895247a28b0d808df.scope. Dec 5 04:58:59 localhost systemd[293789]: Finished Mark boot as successful. Dec 5 04:58:59 localhost systemd[1]: Started libcrun container. Dec 5 04:58:59 localhost podman[300113]: 2025-12-05 09:58:59.057823279 +0000 UTC m=+0.208858387 container init 2b99fb4bae36c217cfd5383c7173a992dd7c970f0f594bb895247a28b0d808df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_wilson, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc.) Dec 5 04:58:59 localhost laughing_wilson[300129]: 167 167 Dec 5 04:58:59 localhost systemd[1]: libpod-2b99fb4bae36c217cfd5383c7173a992dd7c970f0f594bb895247a28b0d808df.scope: Deactivated successfully. 
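The bare "167 167" printed by laughing_wilson is the whole point of this short-lived container: cephadm runs the image once to discover the ceph UID/GID pair (167 in rhceph builds) before reconfiguring daemons. A sketch reproducing that probe by hand; the stat invocation is an assumption about what cephadm runs inside the image, not something taken from this log:

    import subprocess

    # Hypothetical reproduction of the uid/gid probe; assumes the image is
    # pullable and that /var/lib/ceph inside it is owned by the ceph user.
    out = subprocess.check_output([
        "podman", "run", "--rm",
        "registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
        "stat", "-c", "%u %g", "/var/lib/ceph",
    ])
    uid, gid = map(int, out.split())
    print(uid, gid)  # expected: 167 167, matching the log line above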
Dec 5 04:58:59 localhost podman[300113]: 2025-12-05 09:58:59.182755754 +0000 UTC m=+0.333790912 container start 2b99fb4bae36c217cfd5383c7173a992dd7c970f0f594bb895247a28b0d808df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_wilson, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, io.buildah.version=1.41.4, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True) Dec 5 04:58:59 localhost podman[300113]: 2025-12-05 09:58:59.183439106 +0000 UTC m=+0.334474454 container attach 2b99fb4bae36c217cfd5383c7173a992dd7c970f0f594bb895247a28b0d808df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_wilson, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, ceph=True, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:58:59 localhost podman[300113]: 2025-12-05 09:58:59.189207644 +0000 UTC m=+0.340242852 container died 2b99fb4bae36c217cfd5383c7173a992dd7c970f0f594bb895247a28b0d808df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_wilson, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.tags=rhceph ceph, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, RELEASE=main, io.buildah.version=1.41.4) Dec 5 04:58:59 localhost podman[300134]: 2025-12-05 09:58:59.280106319 +0000 UTC m=+0.188392485 container remove 2b99fb4bae36c217cfd5383c7173a992dd7c970f0f594bb895247a28b0d808df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_wilson, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, architecture=x86_64, distribution-scope=public, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, io.openshift.expose-services=) Dec 5 04:58:59 localhost systemd[1]: libpod-conmon-2b99fb4bae36c217cfd5383c7173a992dd7c970f0f594bb895247a28b0d808df.scope: Deactivated successfully. Dec 5 04:58:59 localhost ceph-mon[292820]: Reconfiguring mon.np0005546419 (monmap changed)... Dec 5 04:58:59 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain Dec 5 04:58:59 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:59 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:58:59 localhost ceph-mon[292820]: mon.np0005546419@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:58:59 localhost systemd[1]: var-lib-containers-storage-overlay-cc668879c2ff44be38d718975be895a267f390d94e79d35223571ecec147914f-merged.mount: Deactivated successfully. 
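Taken together, the create/init/start/attach/died/remove events above describe a single throwaway container living for about a second. Pairing create and remove events by container ID makes such probes easy to spot in a long extract; a sketch matching the podman timestamp format shown in these lines:

    import re
    from datetime import datetime

    EVENT = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+ \+0000 UTC "
        r"m=\+\S+ container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    def lifetimes(lines):
        created = {}
        for line in lines:
            m = EVENT.search(line)
            if not m:
                continue
            ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S")
            if m["event"] == "create":
                created[m["cid"]] = ts
            elif m["event"] == "remove" and m["cid"] in created:
                yield m["cid"][:12], (ts - created.pop(m["cid"])).total_seconds()

    # For 2b99fb4bae36 above: created 09:58:58, removed 09:58:59 -> ~1s.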
Dec 5 04:59:00 localhost nova_compute[280228]: 2025-12-05 09:59:00.252 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:00 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x5574796f51e0 mon_map magic: 0 from mon.2 v2:172.18.0.103:3300/0 Dec 5 04:59:00 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : mon.np0005546419 calling monitor election Dec 5 04:59:00 localhost ceph-mon[292820]: paxos.2).electionLogic(52) init, last seen epoch 52 Dec 5 04:59:00 localhost ceph-mon[292820]: mon.np0005546419@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:59:00 localhost ceph-mon[292820]: mon.np0005546419@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:59:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:59:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:59:03 localhost nova_compute[280228]: 2025-12-05 09:59:03.121 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:03 localhost systemd[1]: tmp-crun.40UEGR.mount: Deactivated successfully. Dec 5 04:59:03 localhost podman[300149]: 2025-12-05 09:59:03.196071564 +0000 UTC m=+0.081955380 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:59:03 localhost podman[300149]: 2025-12-05 09:59:03.204593287 +0000 UTC m=+0.090477103 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': 
['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 04:59:03 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:59:03 localhost ceph-mon[292820]: mon.np0005546419@2(electing) e12 handle_auth_request failed to assign global_id Dec 5 04:59:03 localhost podman[300148]: 2025-12-05 09:59:03.210428927 +0000 UTC m=+0.093593629 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 5 04:59:03 localhost podman[300148]: 2025-12-05 09:59:03.291514799 +0000 UTC m=+0.174679451 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 5 04:59:03 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:59:03 localhost ceph-mon[292820]: mon.np0005546419@2(electing) e12 handle_auth_request failed to assign global_id Dec 5 04:59:03 localhost nova_compute[280228]: 2025-12-05 09:59:03.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:59:03 localhost ceph-mon[292820]: mon.np0005546419@2(electing) e12 handle_auth_request failed to assign global_id Dec 5 04:59:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:59:03.906 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:59:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:59:03.906 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:59:03 localhost ovn_metadata_agent[158815]: 2025-12-05 09:59:03.907 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:59:04 localhost ceph-mds[283215]: mds.beacon.mds.np0005546419.rweotn missed beacon ack from the monitors Dec 5 04:59:04 localhost ceph-mon[292820]: mon.np0005546419@2(electing) e12 handle_auth_request failed to assign global_id Dec 5 04:59:05 localhost nova_compute[280228]: 2025-12-05 09:59:05.255 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:59:05 localhost nova_compute[280228]: 2025-12-05 09:59:05.520 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:59:05 localhost nova_compute[280228]: 2025-12-05 09:59:05.521 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances with 
incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 5 04:59:05 localhost ceph-mon[292820]: mon.np0005546419 calling monitor election Dec 5 04:59:05 localhost ceph-mon[292820]: mon.np0005546418 calling monitor election Dec 5 04:59:05 localhost ceph-mon[292820]: mon.np0005546421 calling monitor election Dec 5 04:59:05 localhost ceph-mon[292820]: mon.np0005546418 is new leader, mons np0005546418,np0005546421,np0005546419 in quorum (ranks 0,1,2) Dec 5 04:59:05 localhost ceph-mon[292820]: Health check failed: 1/4 mons down, quorum np0005546418,np0005546421,np0005546419 (MON_DOWN) Dec 5 04:59:05 localhost ceph-mon[292820]: Health detail: HEALTH_WARN 1/4 mons down, quorum np0005546418,np0005546421,np0005546419 Dec 5 04:59:05 localhost ceph-mon[292820]: [WRN] MON_DOWN: 1/4 mons down, quorum np0005546418,np0005546421,np0005546419 Dec 5 04:59:05 localhost ceph-mon[292820]: mon.np0005546420 (rank 3) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum) Dec 5 04:59:05 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:05 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : mon.np0005546419 calling monitor election Dec 5 04:59:05 localhost ceph-mon[292820]: paxos.2).electionLogic(54) init, last seen epoch 54 Dec 5 04:59:05 localhost ceph-mon[292820]: mon.np0005546419@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:59:05 localhost ceph-mon[292820]: mon.np0005546419@2(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:59:05 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:59:06 localhost ceph-mon[292820]: mon.np0005546420 calling monitor election Dec 5 04:59:06 localhost ceph-mon[292820]: mon.np0005546420 calling monitor election Dec 5 04:59:06 localhost ceph-mon[292820]: mon.np0005546419 calling monitor election Dec 5 04:59:06 localhost ceph-mon[292820]: mon.np0005546418 calling monitor election Dec 5 04:59:06 localhost ceph-mon[292820]: mon.np0005546421 calling monitor election Dec 5 04:59:06 localhost ceph-mon[292820]: mon.np0005546418 is new leader, mons np0005546418,np0005546421,np0005546419,np0005546420 in quorum (ranks 0,1,2,3) Dec 5 04:59:06 localhost ceph-mon[292820]: Health check cleared: MON_DOWN (was: 1/4 mons down, quorum np0005546418,np0005546421,np0005546419) Dec 5 04:59:06 localhost ceph-mon[292820]: Cluster is now healthy Dec 5 04:59:06 localhost ceph-mon[292820]: overall HEALTH_OK Dec 5 04:59:07 localhost nova_compute[280228]: 2025-12-05 09:59:07.518 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:59:08 localhost nova_compute[280228]: 2025-12-05 09:59:08.160 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:08 localhost nova_compute[280228]: 2025-12-05 09:59:08.502 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:59:08 localhost nova_compute[280228]: 2025-12-05 09:59:08.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:59:08 localhost nova_compute[280228]: 2025-12-05 09:59:08.506 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 04:59:08 localhost nova_compute[280228]: 2025-12-05 09:59:08.506 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 04:59:08 localhost nova_compute[280228]: 2025-12-05 09:59:08.643 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 04:59:08 localhost nova_compute[280228]: 2025-12-05 09:59:08.644 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 04:59:08 localhost nova_compute[280228]: 2025-12-05 09:59:08.644 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 04:59:08 localhost nova_compute[280228]: 2025-12-05 09:59:08.644 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.000 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, 
"qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.016 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.017 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.018 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.018 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.035 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.035 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.036 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.036 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.037 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:59:09 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e12 handle_command mon_command({"prefix": "df", 
"format": "json"} v 0) Dec 5 04:59:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1348487981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.491 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 04:59:09 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x5574796f4f20 mon_map magic: 0 from mon.2 v2:172.18.0.103:3300/0 Dec 5 04:59:09 localhost ceph-mon[292820]: mon.np0005546419@2(peon) e13 my rank is now 1 (was 2) Dec 5 04:59:09 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 5 04:59:09 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.557 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.557 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 04:59:09 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Dec 5 04:59:09 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Dec 5 04:59:09 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x5574796f51e0 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.737 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.738 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11751MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.739 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.739 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.970 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.971 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 04:59:09 localhost nova_compute[280228]: 2025-12-05 09:59:09.972 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 04:59:10 localhost nova_compute[280228]: 2025-12-05 09:59:10.023 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing inventories for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 5 04:59:10 localhost ceph-mon[292820]: mon.np0005546419@1(probing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:10 localhost nova_compute[280228]: 2025-12-05 09:59:10.080 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating ProviderTree inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 5 04:59:10 localhost nova_compute[280228]: 2025-12-05 09:59:10.081 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 5 04:59:10 localhost nova_compute[280228]: 2025-12-05 09:59:10.095 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing aggregate associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 5 04:59:10 localhost nova_compute[280228]: 2025-12-05 09:59:10.119 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing trait associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, traits: 
COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 5 04:59:10 localhost nova_compute[280228]: 2025-12-05 09:59:10.152 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 04:59:10 localhost nova_compute[280228]: 2025-12-05 09:59:10.258 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:10 localhost ceph-mon[292820]: mon.np0005546419@1(probing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:10 localhost ceph-mon[292820]: mon.np0005546419@1(probing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:10 localhost ceph-mon[292820]: mon.np0005546419@1(probing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:10 localhost ceph-mon[292820]: mon.np0005546419@1(probing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:10 localhost ceph-mon[292820]: mon.np0005546419@1(probing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:11 localhost ceph-mon[292820]: mon.np0005546419@1(probing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:11 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : mon.np0005546419 calling monitor election Dec 5 04:59:11 localhost ceph-mon[292820]: paxos.1).electionLogic(56) init, last seen epoch 56 Dec 5 04:59:11 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:59:11 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:13 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:13 localhost nova_compute[280228]: 2025-12-05 09:59:13.165 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:13 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:13 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:13 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:13 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:13 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:13 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:14 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:14 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:14 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:14 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:14 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:15 localhost nova_compute[280228]: 2025-12-05 09:59:15.262 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:59:15 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id Dec 5 04:59:15 localhost podman[300512]: 2025-12-05 09:59:15.967377792 +0000 UTC m=+0.100076780 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 5 04:59:16 localhost podman[300512]: 2025-12-05 09:59:16.008076377 +0000 UTC m=+0.140775335 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, vcs-type=git, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm) Dec 5 04:59:16 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 04:59:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 04:59:16 localhost systemd[1]: tmp-crun.lEDdNk.mount: Deactivated successfully. 
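The run of "handle_auth_request failed to assign global_id" entries above is the mon refusing auth while it is probing/electing, so anything that needs a mon session stalls until quorum returns. That is measurable from the oslo processutils pairs in this log: the ceph df issued at 09:59:09 returned in 0.454s, while the one issued at 09:59:10 does not return until 09:59:16 (6.529s, below), spanning the election. A sketch pulling those latencies out of the log format shown here:

    import re

    # Matches lines like: CMD "ceph df --format=json ..." returned: 0 in 6.529s
    RETURNED = re.compile(
        r'CMD "(?P<cmd>[^"]+)" returned: (?P<rc>\d+) in (?P<secs>[\d.]+)s'
    )

    def latencies(lines):
        for line in lines:
            m = RETURNED.search(line)
            if m:
                yield m["cmd"], int(m["rc"]), float(m["secs"])

    # Sorting by the seconds field surfaces the election-stalled ceph df run.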
Dec 5 04:59:16 localhost podman[300567]: 2025-12-05 09:59:16.143061124 +0000 UTC m=+0.096519980 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 5 04:59:16 localhost podman[300567]: 2025-12-05 09:59:16.181572282 +0000 UTC m=+0.135031208 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 5 04:59:16 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 04:59:16 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id
Dec 5 04:59:16 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id
Dec 5 04:59:16 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 handle_auth_request failed to assign global_id
Dec 5 04:59:16 localhost ceph-mon[292820]: paxos.1).electionLogic(57) init, last seen epoch 57, mid-election, bumping
Dec 5 04:59:16 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:59:16 localhost ceph-mon[292820]: mon.np0005546419@1(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:59:16 localhost ceph-mds[283215]: mds.beacon.mds.np0005546419.rweotn missed beacon ack from the monitors
Dec 5 04:59:16 localhost ceph-mon[292820]: Remove daemons mon.np0005546418
Dec 5 04:59:16 localhost ceph-mon[292820]: Safe to remove mon.np0005546418: new quorum should be ['np0005546421', 'np0005546419', 'np0005546420'] (from ['np0005546421', 'np0005546419', 'np0005546420'])
Dec 5 04:59:16 localhost ceph-mon[292820]: Removing monitor np0005546418 from monmap...
Dec 5 04:59:16 localhost ceph-mon[292820]: Removing daemon mon.np0005546418 from np0005546418.localdomain -- ports []
Dec 5 04:59:16 localhost ceph-mon[292820]: mon.np0005546420 calling monitor election
Dec 5 04:59:16 localhost ceph-mon[292820]: mon.np0005546421 calling monitor election
Dec 5 04:59:16 localhost ceph-mon[292820]: mon.np0005546421 is new leader, mons np0005546421,np0005546420 in quorum (ranks 0,2)
Dec 5 04:59:16 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 04:59:16 localhost ceph-mon[292820]: Health check failed: 1/3 mons down, quorum np0005546421,np0005546420 (MON_DOWN)
Dec 5 04:59:16 localhost ceph-mon[292820]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005546421,np0005546420
Dec 5 04:59:16 localhost ceph-mon[292820]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005546421,np0005546420
Dec 5 04:59:16 localhost ceph-mon[292820]: mon.np0005546419 (rank 1) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum)
Dec 5 04:59:16 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/etc/ceph/ceph.conf
Dec 5 04:59:16 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 04:59:16 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 04:59:16 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 04:59:16 localhost ceph-mon[292820]: Removed label mon from host np0005546418.localdomain
Dec 5 04:59:16 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:16 localhost ceph-mon[292820]: Updating np0005546418.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:59:16 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:59:16 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:59:16 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 04:59:16 localhost ceph-mon[292820]: mon.np0005546419@1(peon) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 5 04:59:16 localhost nova_compute[280228]: 2025-12-05 09:59:16.682 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 04:59:16 localhost nova_compute[280228]: 2025-12-05 09:59:16.689 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 5 04:59:16 localhost nova_compute[280228]: 2025-12-05 09:59:16.748 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 5 04:59:16 localhost nova_compute[280228]: 2025-12-05 09:59:16.751 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 5 04:59:16 localhost nova_compute[280228]: 2025-12-05 09:59:16.751 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 7.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 04:59:16 localhost nova_compute[280228]: 2025-12-05 09:59:16.752 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:59:16 localhost nova_compute[280228]: 2025-12-05 09:59:16.752 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 5 04:59:16 localhost nova_compute[280228]: 2025-12-05 09:59:16.765 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 5 04:59:17 localhost ceph-mon[292820]: mon.np0005546419 calling monitor election
Dec 5 04:59:17 localhost ceph-mon[292820]: Removed label mgr from host np0005546418.localdomain
Dec 5 04:59:17 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546418.garyvl (monmap changed)...
Dec 5 04:59:17 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546418.garyvl on np0005546418.localdomain
Dec 5 04:59:17 localhost ceph-mon[292820]: mon.np0005546421 calling monitor election
Dec 5 04:59:17 localhost ceph-mon[292820]: mon.np0005546421 is new leader, mons np0005546421,np0005546419,np0005546420 in quorum (ranks 0,1,2)
Dec 5 04:59:17 localhost ceph-mon[292820]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005546421,np0005546420)
Dec 5 04:59:17 localhost ceph-mon[292820]: Cluster is now healthy
Dec 5 04:59:17 localhost ceph-mon[292820]: overall HEALTH_OK
Dec 5 04:59:17 localhost nova_compute[280228]: 2025-12-05 09:59:17.762 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:59:17 localhost nova_compute[280228]: 2025-12-05 09:59:17.897 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:59:17 localhost nova_compute[280228]: 2025-12-05 09:59:17.897 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:59:17 localhost nova_compute[280228]: 2025-12-05 09:59:17.898 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:59:17 localhost nova_compute[280228]: 2025-12-05 09:59:17.898 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 04:59:17 localhost nova_compute[280228]: 2025-12-05 09:59:17.899 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 5 04:59:18 localhost nova_compute[280228]: 2025-12-05 09:59:18.168 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:59:18 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:18 localhost ceph-mon[292820]: Reconfiguring crash.np0005546418 (monmap changed)...
Dec 5 04:59:18 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:59:18 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546418 on np0005546418.localdomain
Dec 5 04:59:18 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:18 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546418.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:59:18 localhost ceph-mon[292820]: Removed label _admin from host np0005546418.localdomain
Dec 5 04:59:18 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:18 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:18 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:59:18 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:18 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 04:59:18 localhost podman[300670]:
Dec 5 04:59:18 localhost podman[300670]: 2025-12-05 09:59:18.946648578 +0000 UTC m=+0.058064972 container create 836ba545ff9d443e5342a0b640f7f8e60c57de50cbc1e74b710db041cc045010 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_goodall, ceph=True, com.redhat.component=rhceph-container, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 5 04:59:18 localhost systemd[1]: Started libpod-conmon-836ba545ff9d443e5342a0b640f7f8e60c57de50cbc1e74b710db041cc045010.scope.
Dec 5 04:59:18 localhost systemd[1]: Started libcrun container.
Dec 5 04:59:19 localhost podman[300670]: 2025-12-05 09:59:19.005622128 +0000 UTC m=+0.117038532 container init 836ba545ff9d443e5342a0b640f7f8e60c57de50cbc1e74b710db041cc045010 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_goodall, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.41.4, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux )
Dec 5 04:59:19 localhost podman[300670]: 2025-12-05 09:59:19.019162556 +0000 UTC m=+0.130578950 container start 836ba545ff9d443e5342a0b640f7f8e60c57de50cbc1e74b710db041cc045010 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_goodall, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, release=1763362218, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph)
Dec 5 04:59:19 localhost podman[300670]: 2025-12-05 09:59:18.919866602 +0000 UTC m=+0.031282986 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:59:19 localhost podman[300670]: 2025-12-05 09:59:19.019739715 +0000 UTC m=+0.131156149 container attach 836ba545ff9d443e5342a0b640f7f8e60c57de50cbc1e74b710db041cc045010 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_goodall, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.41.4, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, release=1763362218, com.redhat.component=rhceph-container, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:59:19 localhost flamboyant_goodall[300684]: 167 167
Dec 5 04:59:19 localhost systemd[1]: libpod-836ba545ff9d443e5342a0b640f7f8e60c57de50cbc1e74b710db041cc045010.scope: Deactivated successfully.
Dec 5 04:59:19 localhost podman[300670]: 2025-12-05 09:59:19.023864751 +0000 UTC m=+0.135281175 container died 836ba545ff9d443e5342a0b640f7f8e60c57de50cbc1e74b710db041cc045010 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_goodall, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, CEPH_POINT_RELEASE=, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64)
Dec 5 04:59:19 localhost podman[300689]: 2025-12-05 09:59:19.099825566 +0000 UTC m=+0.070899309 container remove 836ba545ff9d443e5342a0b640f7f8e60c57de50cbc1e74b710db041cc045010 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_goodall, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 5 04:59:19 localhost systemd[1]: libpod-conmon-836ba545ff9d443e5342a0b640f7f8e60c57de50cbc1e74b710db041cc045010.scope: Deactivated successfully.
Dec 5 04:59:19 localhost ceph-mon[292820]: Reconfiguring crash.np0005546419 (monmap changed)...
Dec 5 04:59:19 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain
Dec 5 04:59:19 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:19 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 5 04:59:19 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:19 localhost podman[300759]:
Dec 5 04:59:19 localhost ceph-mon[292820]: mon.np0005546419@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 04:59:19 localhost podman[300759]: 2025-12-05 09:59:19.769887645 +0000 UTC m=+0.029096019 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:59:19 localhost podman[239519]: time="2025-12-05T09:59:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 04:59:19 localhost podman[300759]: 2025-12-05 09:59:19.884015557 +0000 UTC m=+0.143223951 container create 2407b6e24159c4b310858543362c91838edbac445434b9b3fa973a39b35c20f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_sammet, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 04:59:19 localhost systemd[1]: Started libpod-conmon-2407b6e24159c4b310858543362c91838edbac445434b9b3fa973a39b35c20f5.scope.
Dec 5 04:59:19 localhost systemd[1]: Started libcrun container.
Dec 5 04:59:19 localhost systemd[1]: var-lib-containers-storage-overlay-155e9034d6004821818a71898642b270370b45212a3f773132666a1767345780-merged.mount: Deactivated successfully.
Dec 5 04:59:19 localhost podman[300759]: 2025-12-05 09:59:19.960331912 +0000 UTC m=+0.219540256 container init 2407b6e24159c4b310858543362c91838edbac445434b9b3fa973a39b35c20f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_sammet, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-type=git, vendor=Red Hat, Inc.)
Dec 5 04:59:19 localhost wonderful_sammet[300775]: 167 167
Dec 5 04:59:19 localhost systemd[1]: libpod-2407b6e24159c4b310858543362c91838edbac445434b9b3fa973a39b35c20f5.scope: Deactivated successfully.
Dec 5 04:59:19 localhost podman[300759]: 2025-12-05 09:59:19.990311698 +0000 UTC m=+0.249520042 container start 2407b6e24159c4b310858543362c91838edbac445434b9b3fa973a39b35c20f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_sammet, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, com.redhat.component=rhceph-container, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, CEPH_POINT_RELEASE=)
Dec 5 04:59:19 localhost podman[300759]: 2025-12-05 09:59:19.990733061 +0000 UTC m=+0.249941405 container attach 2407b6e24159c4b310858543362c91838edbac445434b9b3fa973a39b35c20f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_sammet, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:59:19 localhost podman[300759]: 2025-12-05 09:59:19.993803176 +0000 UTC m=+0.253011550 container died 2407b6e24159c4b310858543362c91838edbac445434b9b3fa973a39b35c20f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_sammet, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, name=rhceph, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, release=1763362218)
Dec 5 04:59:20 localhost podman[239519]: @ - - [05/Dec/2025:09:59:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156072 "" "Go-http-client/1.1"
Dec 5 04:59:20 localhost systemd[1]: var-lib-containers-storage-overlay-e3da2ec95bec413bd2c037e8bb0f70e0bb78a46346beaa017140d345dc587c3f-merged.mount: Deactivated successfully.
Dec 5 04:59:20 localhost podman[300780]: 2025-12-05 09:59:20.071422721 +0000 UTC m=+0.087892994 container remove 2407b6e24159c4b310858543362c91838edbac445434b9b3fa973a39b35c20f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_sammet, vcs-type=git, RELEASE=main, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=1763362218, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.)
Dec 5 04:59:20 localhost systemd[1]: libpod-conmon-2407b6e24159c4b310858543362c91838edbac445434b9b3fa973a39b35c20f5.scope: Deactivated successfully.
Dec 5 04:59:20 localhost podman[239519]: @ - - [05/Dec/2025:09:59:20 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18734 "" "Go-http-client/1.1"
Dec 5 04:59:20 localhost nova_compute[280228]: 2025-12-05 09:59:20.288 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:59:20 localhost ceph-mon[292820]: Reconfiguring osd.0 (monmap changed)...
Dec 5 04:59:20 localhost ceph-mon[292820]: Reconfiguring daemon osd.0 on np0005546419.localdomain
Dec 5 04:59:20 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:20 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 5 04:59:20 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:20 localhost podman[300856]:
Dec 5 04:59:20 localhost podman[300856]: 2025-12-05 09:59:20.874550338 +0000 UTC m=+0.080640340 container create a544379f486cded2095a39178ea7fb4aa3b43c12f0ffb46df485f6dcf124fdf8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_allen, CEPH_POINT_RELEASE=, vcs-type=git, version=7, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, ceph=True, architecture=x86_64, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4)
Dec 5 04:59:20 localhost systemd[1]: Started libpod-conmon-a544379f486cded2095a39178ea7fb4aa3b43c12f0ffb46df485f6dcf124fdf8.scope.
Dec 5 04:59:20 localhost systemd[1]: Started libcrun container.
Dec 5 04:59:20 localhost podman[300856]: 2025-12-05 09:59:20.842363884 +0000 UTC m=+0.048453926 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:59:20 localhost podman[300856]: 2025-12-05 09:59:20.94234986 +0000 UTC m=+0.148439852 container init a544379f486cded2095a39178ea7fb4aa3b43c12f0ffb46df485f6dcf124fdf8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_allen, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, release=1763362218, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, version=7, name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True)
Dec 5 04:59:20 localhost strange_allen[300871]: 167 167
Dec 5 04:59:20 localhost podman[300856]: 2025-12-05 09:59:20.953645648 +0000 UTC m=+0.159735670 container start a544379f486cded2095a39178ea7fb4aa3b43c12f0ffb46df485f6dcf124fdf8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_allen, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, ceph=True, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-type=git, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc.)
Dec 5 04:59:20 localhost podman[300856]: 2025-12-05 09:59:20.954409392 +0000 UTC m=+0.160499434 container attach a544379f486cded2095a39178ea7fb4aa3b43c12f0ffb46df485f6dcf124fdf8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_allen, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, RELEASE=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, name=rhceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64)
Dec 5 04:59:20 localhost systemd[1]: libpod-a544379f486cded2095a39178ea7fb4aa3b43c12f0ffb46df485f6dcf124fdf8.scope: Deactivated successfully.
Dec 5 04:59:20 localhost podman[300856]: 2025-12-05 09:59:20.956964801 +0000 UTC m=+0.163054833 container died a544379f486cded2095a39178ea7fb4aa3b43c12f0ffb46df485f6dcf124fdf8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_allen, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, release=1763362218, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vcs-type=git, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:59:21 localhost systemd[1]: var-lib-containers-storage-overlay-b811c777e057d93034be14dd4c5e87a5072593f74578c5dcf31997fe10210c0e-merged.mount: Deactivated successfully.
Dec 5 04:59:21 localhost podman[300876]: 2025-12-05 09:59:21.050488237 +0000 UTC m=+0.087892994 container remove a544379f486cded2095a39178ea7fb4aa3b43c12f0ffb46df485f6dcf124fdf8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_allen, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, name=rhceph, version=7, GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 5 04:59:21 localhost systemd[1]: libpod-conmon-a544379f486cded2095a39178ea7fb4aa3b43c12f0ffb46df485f6dcf124fdf8.scope: Deactivated successfully.
Dec 5 04:59:21 localhost ceph-mon[292820]: Reconfiguring osd.3 (monmap changed)...
Dec 5 04:59:21 localhost ceph-mon[292820]: Reconfiguring daemon osd.3 on np0005546419.localdomain
Dec 5 04:59:21 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:21 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:21 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:59:21 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:21 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 04:59:21 localhost podman[300952]:
Dec 5 04:59:21 localhost podman[300952]: 2025-12-05 09:59:21.85611399 +0000 UTC m=+0.080297919 container create 005ffee1c60c50bee5c12414965539be5acf160a8134d9a1d3531cbca2042d7d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_haibt, vendor=Red Hat, Inc., version=7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 5 04:59:21 localhost systemd[1]: Started libpod-conmon-005ffee1c60c50bee5c12414965539be5acf160a8134d9a1d3531cbca2042d7d.scope.
Dec 5 04:59:21 localhost podman[300952]: 2025-12-05 09:59:21.823779902 +0000 UTC m=+0.047963911 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:59:21 localhost systemd[1]: Started libcrun container.
Dec 5 04:59:21 localhost podman[300952]: 2025-12-05 09:59:21.93808319 +0000 UTC m=+0.162267109 container init 005ffee1c60c50bee5c12414965539be5acf160a8134d9a1d3531cbca2042d7d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_haibt, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, RELEASE=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph)
Dec 5 04:59:21 localhost podman[300952]: 2025-12-05 09:59:21.949088079 +0000 UTC m=+0.173271988 container start 005ffee1c60c50bee5c12414965539be5acf160a8134d9a1d3531cbca2042d7d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_haibt, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, com.redhat.component=rhceph-container, release=1763362218, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, CEPH_POINT_RELEASE=)
Dec 5 04:59:21 localhost podman[300952]: 2025-12-05 09:59:21.949276715 +0000 UTC m=+0.173460614 container attach 005ffee1c60c50bee5c12414965539be5acf160a8134d9a1d3531cbca2042d7d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_haibt, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_BRANCH=main, RELEASE=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, ceph=True, distribution-scope=public, io.openshift.expose-services=, description=Red Hat Ceph Storage 7)
Dec 5 04:59:21 localhost objective_haibt[300967]: 167 167
Dec 5 04:59:21 localhost systemd[1]: libpod-005ffee1c60c50bee5c12414965539be5acf160a8134d9a1d3531cbca2042d7d.scope: Deactivated successfully.
Dec 5 04:59:21 localhost podman[300952]: 2025-12-05 09:59:21.952504045 +0000 UTC m=+0.176688044 container died 005ffee1c60c50bee5c12414965539be5acf160a8134d9a1d3531cbca2042d7d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_haibt, com.redhat.component=rhceph-container, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, architecture=x86_64, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 5 04:59:22 localhost systemd[1]: var-lib-containers-storage-overlay-0358839a3e2ae8b9702b1a155a186c5b64d476cbe99cb7ca5bd0ede49199c0de-merged.mount: Deactivated successfully.
Dec 5 04:59:22 localhost podman[300973]: 2025-12-05 09:59:22.04080541 +0000 UTC m=+0.078811623 container remove 005ffee1c60c50bee5c12414965539be5acf160a8134d9a1d3531cbca2042d7d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_haibt, maintainer=Guillaume Abrioux , distribution-scope=public, architecture=x86_64, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=1763362218, version=7, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7)
Dec 5 04:59:22 localhost systemd[1]: libpod-conmon-005ffee1c60c50bee5c12414965539be5acf160a8134d9a1d3531cbca2042d7d.scope: Deactivated successfully.
Dec 5 04:59:22 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)...
Dec 5 04:59:22 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain
Dec 5 04:59:22 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:22 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:59:22 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 04:59:22 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 04:59:22 localhost podman[301042]:
Dec 5 04:59:22 localhost podman[301042]: 2025-12-05 09:59:22.796509363 +0000 UTC m=+0.076857593 container create 8649f7a7f86385b30f022e5c88655027b3c62459613090361cfe1500f736a67d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_pascal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.41.4, RELEASE=main, com.redhat.component=rhceph-container, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 04:59:22 localhost systemd[1]: Started libpod-conmon-8649f7a7f86385b30f022e5c88655027b3c62459613090361cfe1500f736a67d.scope.
Dec 5 04:59:22 localhost systemd[1]: Started libcrun container.
Dec 5 04:59:22 localhost podman[301042]: 2025-12-05 09:59:22.757168709 +0000 UTC m=+0.037516979 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 04:59:22 localhost podman[301042]: 2025-12-05 09:59:22.860194408 +0000 UTC m=+0.140542628 container init 8649f7a7f86385b30f022e5c88655027b3c62459613090361cfe1500f736a67d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_pascal, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, RELEASE=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 5 04:59:22 localhost podman[301042]: 2025-12-05 09:59:22.870900798 +0000 UTC m=+0.151249078 container start 8649f7a7f86385b30f022e5c88655027b3c62459613090361cfe1500f736a67d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_pascal, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, RELEASE=main)
Dec 5 04:59:22 localhost podman[301042]: 2025-12-05 09:59:22.871243859 +0000 UTC m=+0.151592089 container attach 8649f7a7f86385b30f022e5c88655027b3c62459613090361cfe1500f736a67d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_pascal, io.buildah.version=1.41.4, release=1763362218, name=rhceph, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-type=git, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_BRANCH=main, maintainer=Guillaume Abrioux )
Dec 5 04:59:22 localhost nifty_pascal[301057]: 167 167
Dec 5 04:59:22 localhost systemd[1]: libpod-8649f7a7f86385b30f022e5c88655027b3c62459613090361cfe1500f736a67d.scope: Deactivated successfully.
Dec 5 04:59:22 localhost podman[301042]: 2025-12-05 09:59:22.875560343 +0000 UTC m=+0.155908573 container died 8649f7a7f86385b30f022e5c88655027b3c62459613090361cfe1500f736a67d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_pascal, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vendor=Red Hat, Inc., ceph=True, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.expose-services=, release=1763362218)
Dec 5 04:59:22 localhost systemd[1]: var-lib-containers-storage-overlay-5d28135a942f072cd2a529140e742c37236a077635d623f60bfd30b866c54873-merged.mount: Deactivated successfully.
Dec 5 04:59:22 localhost podman[301062]: 2025-12-05 09:59:22.959346138 +0000 UTC m=+0.075385477 container remove 8649f7a7f86385b30f022e5c88655027b3c62459613090361cfe1500f736a67d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_pascal, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, version=7, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 5 04:59:22 localhost systemd[1]: libpod-conmon-8649f7a7f86385b30f022e5c88655027b3c62459613090361cfe1500f736a67d.scope: Deactivated successfully.
Dec 5 04:59:23 localhost nova_compute[280228]: 2025-12-05 09:59:23.172 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 04:59:23 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 5 04:59:23 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain Dec 5 04:59:23 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:23 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:59:23 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:23 localhost podman[301132]: Dec 5 04:59:23 localhost podman[301132]: 2025-12-05 09:59:23.701869274 +0000 UTC m=+0.084706345 container create c767a76269a5cb5577281ce652f35d1c64dcd22186f3e1ded2298c3fd29d0bc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_kilby, RELEASE=main, release=1763362218, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, version=7, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 04:59:23 localhost systemd[1]: Started libpod-conmon-c767a76269a5cb5577281ce652f35d1c64dcd22186f3e1ded2298c3fd29d0bc1.scope. Dec 5 04:59:23 localhost systemd[1]: Started libcrun container. 
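Annotation: for every container, systemd starts a libpod-conmon-<id>.scope (the conmon monitor process) and a libpod-<id>.scope (the container itself, via libcrun). Both unit names embed the full 64-hex container ID, so the create/init/start/attach/died/remove events can be grouped per container. A sketch:

    import re

    SCOPE = re.compile(r"libpod-(?:conmon-)?(?P<cid>[0-9a-f]{64})\.scope")

    def events_by_container(lines):
        # Group journal lines by the container ID embedded in libpod scope
        # unit names, e.g. libpod-conmon-8649f7a7....scope above.
        groups: dict[str, list[str]] = {}
        for line in lines:
            m = SCOPE.search(line)
            if m:
                groups.setdefault(m.group("cid"), []).append(line)
        return groups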
Dec 5 04:59:23 localhost podman[301132]: 2025-12-05 09:59:23.671472476 +0000 UTC m=+0.054309557 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:59:23 localhost podman[301132]: 2025-12-05 09:59:23.782008817 +0000 UTC m=+0.164845888 container init c767a76269a5cb5577281ce652f35d1c64dcd22186f3e1ded2298c3fd29d0bc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_kilby, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, version=7, release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 5 04:59:23 localhost podman[301132]: 2025-12-05 09:59:23.79439122 +0000 UTC m=+0.177228291 container start c767a76269a5cb5577281ce652f35d1c64dcd22186f3e1ded2298c3fd29d0bc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_kilby, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, architecture=x86_64, ceph=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_CLEAN=True, release=1763362218, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=) Dec 5 04:59:23 localhost podman[301132]: 2025-12-05 09:59:23.794690499 +0000 UTC m=+0.177527590 container attach c767a76269a5cb5577281ce652f35d1c64dcd22186f3e1ded2298c3fd29d0bc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_kilby, vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux ) Dec 5 04:59:23 localhost hardcore_kilby[301146]: 167 167 Dec 5 04:59:23 localhost systemd[1]: libpod-c767a76269a5cb5577281ce652f35d1c64dcd22186f3e1ded2298c3fd29d0bc1.scope: Deactivated successfully. Dec 5 04:59:23 localhost podman[301132]: 2025-12-05 09:59:23.799134546 +0000 UTC m=+0.181971617 container died c767a76269a5cb5577281ce652f35d1c64dcd22186f3e1ded2298c3fd29d0bc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_kilby, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, name=rhceph) Dec 5 04:59:23 localhost podman[301151]: 2025-12-05 09:59:23.888015219 +0000 UTC m=+0.080536486 container remove c767a76269a5cb5577281ce652f35d1c64dcd22186f3e1ded2298c3fd29d0bc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_kilby, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git) Dec 5 04:59:23 localhost systemd[1]: 
libpod-conmon-c767a76269a5cb5577281ce652f35d1c64dcd22186f3e1ded2298c3fd29d0bc1.scope: Deactivated successfully. Dec 5 04:59:23 localhost systemd[1]: var-lib-containers-storage-overlay-5f39f15e47f51738e438ae333cdaf00bb2134c2d5f657cd747d648a8a5b28ad3-merged.mount: Deactivated successfully. Dec 5 04:59:24 localhost ceph-mon[292820]: Reconfiguring mon.np0005546419 (monmap changed)... Dec 5 04:59:24 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain Dec 5 04:59:24 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:24 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:24 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:24 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:24 localhost ceph-mon[292820]: mon.np0005546419@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:59:25 localhost systemd[1]: tmp-crun.vnshS0.mount: Deactivated successfully. 
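Annotation: the get-or-create dispatches above carry a flat caps array, entity type followed by capability string, and grant only the restricted "profile crash" on mon and mgr, so the per-host crash collector key can do nothing beyond posting crash reports. Creating the same key from the CLI would look roughly like this sketch (ceph auth get-or-create is the real subcommand; equivalence to the orchestrator's internal call is an assumption):

    import subprocess

    def ensure_crash_key(host_fqdn: str) -> str:
        # Mirrors the mon_command above: create, or fetch if it already
        # exists, client.crash.<host> with crash-profile caps only.
        return subprocess.run(
            ["ceph", "auth", "get-or-create", f"client.crash.{host_fqdn}",
             "mon", "profile crash", "mgr", "profile crash"],
            check=True, capture_output=True, text=True,
        ).stdout

    # Usage (hypothetical): print(ensure_crash_key("np0005546420.localdomain"))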
Dec 5 04:59:25 localhost podman[301167]: 2025-12-05 09:59:25.22528992 +0000 UTC m=+0.096170430 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 04:59:25 localhost podman[301167]: 2025-12-05 09:59:25.236789255 +0000 UTC m=+0.107669755 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:59:25 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
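Annotation: the health_status=healthy events come from the transient systemd units named after the container ID ("Started /usr/bin/podman healthcheck run <id>" above); each run execs the configured test inside the container and records the result, hence the paired exec_died event. The same state can be read back on demand:

    import json, subprocess

    def health_status(container: str) -> str:
        # Read the health state podman tracks for a container that has a
        # configured healthcheck, e.g. "healthy" for podman_exporter above.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health.Status}}",
             container],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)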
Dec 5 04:59:25 localhost nova_compute[280228]: 2025-12-05 09:59:25.292 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:25 localhost podman[301169]: 2025-12-05 09:59:25.341234398 +0000 UTC m=+0.209567739 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute) Dec 5 04:59:25 localhost podman[301169]: 2025-12-05 09:59:25.352472605 +0000 UTC m=+0.220805976 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 5 04:59:25 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:59:25 localhost podman[301168]: 2025-12-05 09:59:25.428271164 +0000 UTC m=+0.299060931 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 04:59:25 localhost podman[301168]: 2025-12-05 09:59:25.458961041 +0000 UTC m=+0.329750798 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:59:25 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:59:25 localhost ceph-mon[292820]: Reconfiguring crash.np0005546420 (monmap changed)... Dec 5 04:59:25 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain Dec 5 04:59:25 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:25 localhost ceph-mon[292820]: Reconfiguring osd.1 (monmap changed)... Dec 5 04:59:25 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 5 04:59:25 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:25 localhost ceph-mon[292820]: Reconfiguring daemon osd.1 on np0005546420.localdomain Dec 5 04:59:26 localhost systemd[1]: tmp-crun.ME4Hur.mount: Deactivated successfully. Dec 5 04:59:27 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:27 localhost ceph-mon[292820]: Reconfiguring osd.4 (monmap changed)... Dec 5 04:59:27 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 5 04:59:27 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:27 localhost ceph-mon[292820]: Reconfiguring daemon osd.4 on np0005546420.localdomain Dec 5 04:59:27 localhost openstack_network_exporter[241668]: ERROR 09:59:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:59:27 localhost openstack_network_exporter[241668]: ERROR 09:59:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:59:27 localhost openstack_network_exporter[241668]: ERROR 09:59:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:59:27 localhost openstack_network_exporter[241668]: ERROR 09:59:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:59:27 localhost openstack_network_exporter[241668]: Dec 5 04:59:27 localhost openstack_network_exporter[241668]: ERROR 09:59:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:59:27 localhost openstack_network_exporter[241668]: Dec 5 04:59:28 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:28 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)... 
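Annotation: the openstack_network_exporter errors in this block are polling noise rather than a fault: the exporter probes ovn-northd and ovsdb-server control sockets, and this compute node runs neither, so every scrape logs "no control socket files found". A pre-check sketch; the socket paths and naming are assumptions and the exporter's real lookup may differ:

    import glob

    def have_ctl_socket(daemon: str, run_dirs=("/var/run/ovn", "/run/openvswitch")):
        # Return True only if a <daemon>*.ctl control socket exists, so a
        # collector could skip daemons that are not deployed on this node.
        return any(glob.glob(f"{d}/{daemon}*.ctl") for d in run_dirs)

    # e.g. have_ctl_socket("ovn-northd") -> False on this compute node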
Dec 5 04:59:28 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:59:28 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:28 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain Dec 5 04:59:28 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:59:28 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:28 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:59:28 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:28 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:59:28 localhost nova_compute[280228]: 2025-12-05 09:59:28.175 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:29 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)... Dec 5 04:59:29 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain Dec 5 04:59:29 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:29 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:59:29 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:29 localhost ceph-mon[292820]: mon.np0005546419@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:59:30 localhost nova_compute[280228]: 2025-12-05 09:59:30.296 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:30 localhost ceph-mon[292820]: Reconfiguring mon.np0005546420 (monmap changed)... 
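Annotation: the paired "Reconfiguring <daemon> (monmap changed)..." / "Reconfiguring daemon <daemon> on <host>" lines repeat once per daemon as cephadm sweeps the whole inventory after a monmap change. Tallying the sweep from the journal, as a sketch:

    import re

    RECONF = re.compile(r"Reconfiguring daemon (?P<daemon>\S+) on (?P<host>\S+)")

    def reconfigured(lines):
        # Map host -> list of daemons redeployed during the sweep above.
        seen = {}
        for line in lines:
            m = RECONF.search(line)
            if m:
                seen.setdefault(m.group("host"), []).append(m.group("daemon"))
        return seen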
Dec 5 04:59:30 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain Dec 5 04:59:30 localhost ceph-mon[292820]: Added label _no_schedule to host np0005546418.localdomain Dec 5 04:59:30 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:30 localhost ceph-mon[292820]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005546418.localdomain Dec 5 04:59:30 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:30 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:30 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:30 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:30 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.828207) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928770828282, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2144, "num_deletes": 260, "total_data_size": 6359058, "memory_usage": 6701984, "flush_reason": "Manual Compaction"} Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928770850955, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 3669469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15507, "largest_seqno": 17645, "table_properties": {"data_size": 3660808, "index_size": 4974, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 23285, "raw_average_key_size": 22, "raw_value_size": 3641323, "raw_average_value_size": 3481, "num_data_blocks": 208, "num_entries": 1046, "num_filter_entries": 1046, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928720, "oldest_key_time": 1764928720, 
"file_creation_time": 1764928770, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 22830 microseconds, and 8701 cpu microseconds. Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.851031) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 3669469 bytes OK Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.851065) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.852871) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.852898) EVENT_LOG_v1 {"time_micros": 1764928770852890, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.852924) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 6348074, prev total WAL file size 6348074, number of live WAL files 2. Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.854521) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323733' seq:72057594037927935, type:22 .. 
'6B760031353331' seq:0, type:0; will stop at (end) Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(3583KB)], [24(15MB)] Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928770854570, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 19749248, "oldest_snapshot_seqno": -1} Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 11021 keys, 18796510 bytes, temperature: kUnknown Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928770957462, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 18796510, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18732440, "index_size": 35425, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27589, "raw_key_size": 296643, "raw_average_key_size": 26, "raw_value_size": 18543072, "raw_average_value_size": 1682, "num_data_blocks": 1341, "num_entries": 11021, "num_filter_entries": 11021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764928770, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
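Annotation: ceph-mon's embedded RocksDB mirrors its structured events into the log as single-line EVENT_LOG_v1 JSON (flush_started, table_file_creation, compaction_started, compaction_finished). The payloads parse directly:

    import json, re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(lines):
        # Yield the structured EVENT_LOG_v1 payloads emitted above, e.g.
        # {"job": 12, "event": "compaction_started", ...}.
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))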
Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.958072) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 18796510 bytes Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.960629) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.3 rd, 182.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 15.3 +0.0 blob) out(17.9 +0.0 blob), read-write-amplify(10.5) write-amplify(5.1) OK, records in: 11525, records dropped: 504 output_compression: NoCompression Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.960667) EVENT_LOG_v1 {"time_micros": 1764928770960650, "job": 12, "event": "compaction_finished", "compaction_time_micros": 103237, "compaction_time_cpu_micros": 50027, "output_level": 6, "num_output_files": 1, "total_output_size": 18796510, "num_input_records": 11525, "num_output_records": 11021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928770962066, "job": 12, "event": "table_file_deletion", "file_number": 26} Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928770965243, "job": 12, "event": "table_file_deletion", "file_number": 24} Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.854421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.965615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.965628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.965631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.965635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:59:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:30.965639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:59:31 localhost ceph-mon[292820]: Reconfiguring crash.np0005546421 (monmap changed)... 
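Annotation: the amplification figures in the JOB 12 summary follow from the byte counts it prints: 3.5 MB came in from L0, 15.3 MB from L6, and 17.9 MB were written back to L6. Checking the arithmetic:

    # Figures from the JOB 12 compaction summary above (MB).
    mb_in_l0, mb_in_l6, mb_out = 3.5, 15.3, 17.9

    write_amplify = mb_out / mb_in_l0                                # 17.9 / 3.5
    read_write_amplify = (mb_in_l0 + mb_in_l6 + mb_out) / mb_in_l0  # 36.7 / 3.5
    print(round(write_amplify, 1), round(read_write_amplify, 1))    # 5.1 10.5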
Dec 5 04:59:31 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain Dec 5 04:59:31 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:31 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 5 04:59:31 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:32 localhost ceph-mon[292820]: Reconfiguring osd.2 (monmap changed)... Dec 5 04:59:32 localhost ceph-mon[292820]: Reconfiguring daemon osd.2 on np0005546421.localdomain Dec 5 04:59:32 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:32 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 5 04:59:32 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:32 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain"} : dispatch Dec 5 04:59:32 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:32 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain"} : dispatch Dec 5 04:59:32 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain"}]': finished Dec 5 04:59:33 localhost nova_compute[280228]: 2025-12-05 09:59:33.176 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:33 localhost ceph-mon[292820]: Reconfiguring osd.5 (monmap changed)... Dec 5 04:59:33 localhost ceph-mon[292820]: Reconfiguring daemon osd.5 on np0005546421.localdomain Dec 5 04:59:33 localhost ceph-mon[292820]: Removed host np0005546418.localdomain Dec 5 04:59:33 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:33 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:59:33 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:33 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:59:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 04:59:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 04:59:34 localhost systemd[1]: tmp-crun.39rnpa.mount: Deactivated successfully. Dec 5 04:59:34 localhost systemd[1]: tmp-crun.krk4fj.mount: Deactivated successfully. 
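Annotation: interleaved with the per-daemon sweep is a host drain: np0005546418.localdomain receives the _no_schedule and drain labels, its mgr/cephadm config-keys are deleted, and finally "Removed host np0005546418.localdomain" appears. From the CLI side this sequence corresponds approximately to the two orchestrator commands below (real subcommands; exact equivalence to what drove this log is an assumption):

    import subprocess

    def drain_then_remove(host: str) -> None:
        # Stop scheduling to the host and evacuate its daemons, then drop
        # it from the cephadm inventory, as logged above.
        subprocess.run(["ceph", "orch", "host", "drain", host], check=True)
        subprocess.run(["ceph", "orch", "host", "rm", host], check=True)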
Dec 5 04:59:34 localhost podman[301228]: 2025-12-05 09:59:34.264493787 +0000 UTC m=+0.146715399 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 04:59:34 localhost podman[301227]: 2025-12-05 09:59:34.222104259 +0000 UTC m=+0.107077136 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 5 04:59:34 localhost podman[301228]: 2025-12-05 09:59:34.280166991 +0000 UTC m=+0.162388633 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 
'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 04:59:34 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 04:59:34 localhost podman[301227]: 2025-12-05 09:59:34.305804382 +0000 UTC m=+0.190777239 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 04:59:34 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 04:59:34 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)... 
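Annotation: the config_data label on these edpm_ansible-managed containers embeds the full container spec as a Python-literal dict (single-quoted, so not valid JSON). Assuming the label survives as written, it can be recovered from a running container with ast.literal_eval rather than json.loads:

    import ast, subprocess

    def edpm_config_data(container: str) -> dict:
        # Fetch and parse the config_data label seen on node_exporter and
        # ovn_controller above; it is a Python literal, not JSON.
        label = subprocess.run(
            ["podman", "inspect", "--format",
             '{{index .Config.Labels "config_data"}}', container],
            check=True, capture_output=True, text=True,
        ).stdout
        return ast.literal_eval(label)

    # e.g. edpm_config_data("node_exporter")["healthcheck"]["test"]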
Dec 5 04:59:34 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain Dec 5 04:59:34 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:34 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:59:34 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:34 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:59:34 localhost ceph-mon[292820]: mon.np0005546419@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:59:35 localhost nova_compute[280228]: 2025-12-05 09:59:35.351 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:35 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)... Dec 5 04:59:35 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain Dec 5 04:59:35 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:35 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:59:35 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:36 localhost ceph-mon[292820]: Reconfiguring mon.np0005546421 (monmap changed)... 
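Annotation: across all of these get-or-create dispatches the caps array alternates entity type and capability string ("mon", "profile mgr", "osd", "allow *", ...). Building that flat wire form from a mapping:

    def caps_args(caps: dict) -> list:
        # {'mon': 'profile mgr', 'osd': 'allow *'} ->
        # ['mon', 'profile mgr', 'osd', 'allow *'], the shape carried in
        # the cmd JSON above.
        return [tok for k, v in caps.items() for tok in (k, v)]

    print(caps_args({"mon": "profile mgr", "osd": "allow *", "mds": "allow *"}))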
Dec 5 04:59:36 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain Dec 5 04:59:36 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:36 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:36 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 04:59:36 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:38 localhost nova_compute[280228]: 2025-12-05 09:59:38.223 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:39 localhost ceph-mon[292820]: mon.np0005546419@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:59:40 localhost nova_compute[280228]: 2025-12-05 09:59:40.354 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:40 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 04:59:40 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:40 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:40 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:41 localhost ceph-mon[292820]: Saving service mon spec with placement label:mon Dec 5 04:59:43 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x5574796f51e0 mon_map magic: 0 from mon.1 v2:172.18.0.103:3300/0 Dec 5 04:59:43 localhost ceph-mon[292820]: mon.np0005546419@1(peon) e14 my rank is now 0 (was 1) Dec 5 04:59:43 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 5 04:59:43 localhost ceph-mgr[286454]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 5 04:59:43 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x5574796f4f20 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : mon.np0005546419 calling monitor election Dec 5 04:59:43 localhost ceph-mon[292820]: paxos.0).electionLogic(62) init, last seen epoch 62 Dec 5 04:59:43 localhost ceph-mon[292820]: mon.np0005546419@0(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : mon.np0005546419 is new leader, mons np0005546419,np0005546420 in quorum (ranks 0,1) Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : monmap epoch 14 Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : last_changed 2025-12-05T09:59:43.133976+0000 Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : created 2025-12-05T07:49:07.934655+0000 Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : election_strategy: 1 Dec 5 04:59:43 localhost ceph-mon[292820]: 
log_channel(cluster) log [DBG] : 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546419 Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546420 Dec 5 04:59:43 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e35: np0005546420.aoeylc(active, since 61s), standbys: np0005546421.sukfea, np0005546418.garyvl, np0005546419.zhsnqq Dec 5 04:59:43 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : overall HEALTH_OK Dec 5 04:59:43 localhost nova_compute[280228]: 2025-12-05 09:59:43.226 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:43 localhost ceph-mon[292820]: Remove daemons mon.np0005546421 Dec 5 04:59:43 localhost ceph-mon[292820]: Safe to remove mon.np0005546421: new quorum should be ['np0005546419', 'np0005546420'] (from ['np0005546419', 'np0005546420']) Dec 5 04:59:43 localhost ceph-mon[292820]: Removing monitor np0005546421 from monmap... Dec 5 04:59:43 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "mon rm", "name": "np0005546421"} : dispatch Dec 5 04:59:43 localhost ceph-mon[292820]: Removing daemon mon.np0005546421 from np0005546421.localdomain -- ports [] Dec 5 04:59:43 localhost ceph-mon[292820]: mon.np0005546420 calling monitor election Dec 5 04:59:43 localhost ceph-mon[292820]: mon.np0005546419 calling monitor election Dec 5 04:59:43 localhost ceph-mon[292820]: mon.np0005546419 is new leader, mons np0005546419,np0005546420 in quorum (ranks 0,1) Dec 5 04:59:43 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 04:59:43 localhost ceph-mon[292820]: overall HEALTH_OK Dec 5 04:59:44 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf Dec 5 04:59:44 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf Dec 5 04:59:44 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf Dec 5 04:59:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 04:59:44 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 04:59:44 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:59:44 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' 
entity='mgr.np0005546420.aoeylc' Dec 5 04:59:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:59:44 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:59:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 04:59:45 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 04:59:45 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 04:59:45 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:45 localhost nova_compute[280228]: 2025-12-05 09:59:45.108 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 04:59:45 localhost nova_compute[280228]: 2025-12-05 09:59:45.143 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Triggering sync for uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 5 04:59:45 localhost nova_compute[280228]: 2025-12-05 09:59:45.144 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 04:59:45 localhost nova_compute[280228]: 2025-12-05 09:59:45.144 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 04:59:45 localhost nova_compute[280228]: 2025-12-05 09:59:45.172 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 04:59:45 localhost nova_compute[280228]: 2025-12-05 09:59:45.361 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:45 
localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 5 04:59:45 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:45 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:59:45 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:59:45 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 04:59:45 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:45 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:45 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:45 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:45 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:45 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:45 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:45 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:45 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 04:59:45 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:45 localhost podman[301700]: Dec 5 04:59:45 localhost podman[301700]: 2025-12-05 09:59:45.987646527 +0000 UTC m=+0.069680691 container create 3bad1e6188adcf48761a4fcdb9caac10b2ff76b2043b9cace8f0434022f33e96 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_shaw, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, 
com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 04:59:46 localhost systemd[1]: Started libpod-conmon-3bad1e6188adcf48761a4fcdb9caac10b2ff76b2043b9cace8f0434022f33e96.scope. Dec 5 04:59:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 04:59:46 localhost systemd[1]: Started libcrun container. Dec 5 04:59:46 localhost podman[301700]: 2025-12-05 09:59:45.954098032 +0000 UTC m=+0.036132196 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:59:46 localhost podman[301700]: 2025-12-05 09:59:46.067992657 +0000 UTC m=+0.150026781 container init 3bad1e6188adcf48761a4fcdb9caac10b2ff76b2043b9cace8f0434022f33e96 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_shaw, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, version=7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container) Dec 5 04:59:46 localhost podman[301700]: 2025-12-05 09:59:46.081629368 +0000 UTC m=+0.163663462 container start 3bad1e6188adcf48761a4fcdb9caac10b2ff76b2043b9cace8f0434022f33e96 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_shaw, ceph=True, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, version=7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=) Dec 5 04:59:46 localhost podman[301700]: 2025-12-05 09:59:46.082291679 +0000 UTC m=+0.164325803 
container attach 3bad1e6188adcf48761a4fcdb9caac10b2ff76b2043b9cace8f0434022f33e96 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_shaw, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vcs-type=git, version=7) Dec 5 04:59:46 localhost optimistic_shaw[301714]: 167 167 Dec 5 04:59:46 localhost systemd[1]: libpod-3bad1e6188adcf48761a4fcdb9caac10b2ff76b2043b9cace8f0434022f33e96.scope: Deactivated successfully. Dec 5 04:59:46 localhost podman[301700]: 2025-12-05 09:59:46.13971964 +0000 UTC m=+0.221753764 container died 3bad1e6188adcf48761a4fcdb9caac10b2ff76b2043b9cace8f0434022f33e96 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_shaw, release=1763362218, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_CLEAN=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main) Dec 5 04:59:46 localhost podman[301715]: 2025-12-05 09:59:46.134883672 +0000 UTC m=+0.092532237 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=ubi9-minimal-container, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 5 04:59:46 localhost podman[301730]: 2025-12-05 09:59:46.202126087 +0000 UTC m=+0.105457096 container remove 3bad1e6188adcf48761a4fcdb9caac10b2ff76b2043b9cace8f0434022f33e96 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_shaw, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, RELEASE=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:59:46 localhost systemd[1]: libpod-conmon-3bad1e6188adcf48761a4fcdb9caac10b2ff76b2043b9cace8f0434022f33e96.scope: Deactivated successfully. Dec 5 04:59:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
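
The ceph-mon audit entries above show how cephadm drives the monitor: every action — the "mon rm" that drops np0005546421 from the monmap, the config-key writes for per-host device metadata — arrives as a structured mon command with a {"prefix": ...} payload, dispatched on behalf of the active mgr. A minimal sketch of issuing a read-only command of the same shape through the python-rados binding (the conffile path and use of the admin keyring are assumptions about this host's layout):

    # Send a structured mon command, the same {"prefix": ...} form the
    # audit log records for the mgr. quorum_status is read-only and returns
    # the quorum the "Safe to remove mon..." check above reasons about.
    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "quorum_status", "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        if ret == 0:
            print(json.loads(outbuf)["quorum_names"])
            # e.g. ['np0005546419', 'np0005546420'] after the removal above
    finally:
        cluster.shutdown()
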
Dec 5 04:59:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 04:59:46 localhost podman[301715]: 2025-12-05 09:59:46.275218063 +0000 UTC m=+0.232866608 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.) Dec 5 04:59:46 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:46 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
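
The recurring three-step pattern for openstack_network_exporter and multipathd — systemd starting "/usr/bin/podman healthcheck run <id>", a "container health_status ... health_status=healthy" event, then "<id>.service: Deactivated successfully" — is one timer-driven healthcheck cycle: podman execs the container's configured test command and stores the result in the container state. A sketch of reading that state back (container name taken from the log; on older podman releases the key may be State.Healthcheck rather than State.Health):

    # Read the health state maintained by the "podman healthcheck run"
    # cycles in this log. Assumes the podman CLI is on PATH.
    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "openstack_network_exporter"],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"))  # "healthy", matching health_status= above
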
Dec 5 04:59:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 04:59:46 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:46 localhost podman[301754]: 2025-12-05 09:59:46.375169887 +0000 UTC m=+0.141212439 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 5 04:59:46 localhost podman[301754]: 2025-12-05 09:59:46.390766099 +0000 UTC m=+0.156808641 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:59:46 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 04:59:46 localhost ceph-mon[292820]: Reconfiguring crash.np0005546419 (monmap changed)... Dec 5 04:59:46 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain Dec 5 04:59:46 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:46 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:46 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:46 localhost ceph-mon[292820]: Reconfiguring osd.0 (monmap changed)... Dec 5 04:59:46 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 5 04:59:46 localhost ceph-mon[292820]: Reconfiguring daemon osd.0 on np0005546419.localdomain Dec 5 04:59:46 localhost podman[301826]: Dec 5 04:59:46 localhost podman[301826]: 2025-12-05 09:59:46.890848523 +0000 UTC m=+0.045535067 container create 1384a0110e9b950ec9e4f989cdb7cecc400bced21f64b9e3e1233b212b85ef4c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_rhodes, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7) Dec 5 04:59:46 localhost systemd[1]: Started libpod-conmon-1384a0110e9b950ec9e4f989cdb7cecc400bced21f64b9e3e1233b212b85ef4c.scope. Dec 5 04:59:46 localhost systemd[1]: Started libcrun container. 
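
The short-lived rhceph containers with podman's random names (optimistic_shaw earlier, suspicious_rhodes here) run no daemon: each attaches, prints "167 167", and dies within milliseconds. That output is cephadm probing the image for the uid/gid of the ceph user (167:167 in RHCS images) before it writes daemon files on disk. One way to reproduce the probe by hand — the exact path cephadm stats is an assumption about its internals:

    # Reproduce the "167 167" probe: stat a ceph-owned path inside the
    # image, with stat as the entrypoint, as the throwaway containers do.
    import subprocess

    image = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],  # probed path is an assumption
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    uid, gid = out.split()
    print(uid, gid)  # expected: 167 167
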
Dec 5 04:59:46 localhost podman[301826]: 2025-12-05 09:59:46.942431934 +0000 UTC m=+0.097118478 container init 1384a0110e9b950ec9e4f989cdb7cecc400bced21f64b9e3e1233b212b85ef4c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_rhodes, GIT_BRANCH=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 5 04:59:46 localhost podman[301826]: 2025-12-05 09:59:46.952331519 +0000 UTC m=+0.107018083 container start 1384a0110e9b950ec9e4f989cdb7cecc400bced21f64b9e3e1233b212b85ef4c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_rhodes, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7) Dec 5 04:59:46 localhost suspicious_rhodes[301841]: 167 167 Dec 5 04:59:46 localhost systemd[1]: libpod-1384a0110e9b950ec9e4f989cdb7cecc400bced21f64b9e3e1233b212b85ef4c.scope: Deactivated successfully. 
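
Earlier in this window, nova-compute's _sync_power_states periodic task bracketed its work on instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 with "Acquiring lock" / "acquired" / '"released"' messages. That is oslo.concurrency's lockutils: power-state sync takes a per-instance lock so it cannot race other operations on the same instance. A sketch of the same pattern, where the task body is a hypothetical stand-in:

    # Per-instance serialization as logged by oslo_concurrency.lockutils;
    # lockutils.lock() emits the same Acquiring/acquired/released debug lines.
    from oslo_concurrency import lockutils

    uuid = "96a47a1c-57c7-4bb1-aecc-33db976db8c7"  # instance from the log

    def query_and_sync(instance_uuid):
        ...  # hypothetical stand-in for the power-state comparison

    with lockutils.lock(uuid):  # in-process lock keyed by the instance UUID
        query_and_sync(uuid)
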
Dec 5 04:59:46 localhost podman[301826]: 2025-12-05 09:59:46.952839186 +0000 UTC m=+0.107525770 container attach 1384a0110e9b950ec9e4f989cdb7cecc400bced21f64b9e3e1233b212b85ef4c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_rhodes, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, release=1763362218, ceph=True, name=rhceph, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 5 04:59:46 localhost podman[301826]: 2025-12-05 09:59:46.955747055 +0000 UTC m=+0.110433629 container died 1384a0110e9b950ec9e4f989cdb7cecc400bced21f64b9e3e1233b212b85ef4c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_rhodes, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, name=rhceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218) Dec 5 04:59:46 localhost podman[301826]: 2025-12-05 09:59:46.873699783 +0000 UTC m=+0.028386337 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:59:46 localhost systemd[1]: var-lib-containers-storage-overlay-d174fe8e250024fd268f8df365e797dd1e352484b86460c776f9f5c677c92b25-merged.mount: Deactivated successfully. Dec 5 04:59:47 localhost systemd[1]: var-lib-containers-storage-overlay-92a8ee92f2b77c7f93ca2f7f6bf97450a7ed19257f120f201f20f23d1b21e519-merged.mount: Deactivated successfully. 
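
Each "var-lib-containers-storage-overlay-<hash>-merged.mount: Deactivated successfully" line is systemd tracking the overlayfs mount of a just-removed container being torn down; the mountpoints live under /var/lib/containers/storage/overlay/<layer>/merged. A sketch that lists whichever of these mounts are still live:

    # List container overlayfs mounts; the ones being unmounted produce the
    # "...-merged.mount: Deactivated successfully" lines in this log.
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype = line.split()[:3]
            if fstype == "overlay" and "/containers/storage/overlay/" in mountpoint:
                print(mountpoint)
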
Dec 5 04:59:47 localhost podman[301846]: 2025-12-05 09:59:47.021630128 +0000 UTC m=+0.058807836 container remove 1384a0110e9b950ec9e4f989cdb7cecc400bced21f64b9e3e1233b212b85ef4c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_rhodes, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, ceph=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Dec 5 04:59:47 localhost systemd[1]: libpod-conmon-1384a0110e9b950ec9e4f989cdb7cecc400bced21f64b9e3e1233b212b85ef4c.scope: Deactivated successfully. Dec 5 04:59:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 04:59:47 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 04:59:47 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:47 localhost podman[301922]: Dec 5 04:59:47 localhost podman[301922]: 2025-12-05 09:59:47.860538619 +0000 UTC m=+0.078173514 container create 08e78785668e9284e5d485e1b539498fb71199b628d7872fd0704de5d606950d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_knuth, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:59:47 localhost systemd[1]: Started 
libpod-conmon-08e78785668e9284e5d485e1b539498fb71199b628d7872fd0704de5d606950d.scope. Dec 5 04:59:47 localhost systemd[1]: Started libcrun container. Dec 5 04:59:47 localhost podman[301922]: 2025-12-05 09:59:47.830301516 +0000 UTC m=+0.047936451 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:59:47 localhost podman[301922]: 2025-12-05 09:59:47.944589693 +0000 UTC m=+0.162224588 container init 08e78785668e9284e5d485e1b539498fb71199b628d7872fd0704de5d606950d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_knuth, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, ceph=True, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main) Dec 5 04:59:47 localhost podman[301922]: 2025-12-05 09:59:47.954732546 +0000 UTC m=+0.172367411 container start 08e78785668e9284e5d485e1b539498fb71199b628d7872fd0704de5d606950d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_knuth, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, ceph=True, vcs-type=git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph) Dec 5 04:59:47 localhost podman[301922]: 2025-12-05 09:59:47.955083706 +0000 UTC m=+0.172718631 container attach 08e78785668e9284e5d485e1b539498fb71199b628d7872fd0704de5d606950d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_knuth, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, release=1763362218, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, vcs-type=git, name=rhceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7) Dec 5 04:59:47 localhost suspicious_knuth[301937]: 167 167 Dec 5 04:59:47 localhost systemd[1]: libpod-08e78785668e9284e5d485e1b539498fb71199b628d7872fd0704de5d606950d.scope: Deactivated successfully. Dec 5 04:59:47 localhost podman[301922]: 2025-12-05 09:59:47.958506563 +0000 UTC m=+0.176141478 container died 08e78785668e9284e5d485e1b539498fb71199b628d7872fd0704de5d606950d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_knuth, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, RELEASE=main, vcs-type=git, name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main) Dec 5 04:59:48 localhost systemd[1]: tmp-crun.fugt89.mount: Deactivated successfully. Dec 5 04:59:48 localhost systemd[1]: var-lib-containers-storage-overlay-3b7b5d5d4e521467aa15f81a47c8fc5185068a66bebb67ff1e49d0c6ad60e888-merged.mount: Deactivated successfully. 
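
With the monmap shrunk to two monitors, cephadm is walking every daemon it manages and reconfiguring it — "Reconfiguring crash.np0005546419 (monmap changed)..." and "Reconfiguring osd.0 (monmap changed)..." above, with more daemons to follow — each paired with an "auth get" dispatch to refresh the daemon's keyring material. The same per-daemon redeploy can be requested by hand through the orchestrator (daemon name taken from the log):

    # Ask the cephadm orchestrator to reconfigure a single daemon, the
    # per-daemon step the "Reconfiguring ... (monmap changed)" loop performs.
    import subprocess

    subprocess.run(["ceph", "orch", "daemon", "reconfig", "osd.0"], check=True)
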
Dec 5 04:59:48 localhost podman[301942]: 2025-12-05 09:59:48.064827494 +0000 UTC m=+0.094127896 container remove 08e78785668e9284e5d485e1b539498fb71199b628d7872fd0704de5d606950d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_knuth, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, build-date=2025-11-26T19:44:28Z) Dec 5 04:59:48 localhost systemd[1]: libpod-conmon-08e78785668e9284e5d485e1b539498fb71199b628d7872fd0704de5d606950d.scope: Deactivated successfully. Dec 5 04:59:48 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:48 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:48 localhost ceph-mon[292820]: Reconfiguring osd.3 (monmap changed)... Dec 5 04:59:48 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 5 04:59:48 localhost ceph-mon[292820]: Reconfiguring daemon osd.3 on np0005546419.localdomain Dec 5 04:59:48 localhost nova_compute[280228]: 2025-12-05 09:59:48.229 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 04:59:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 04:59:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Dec 5 04:59:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:59:48 localhost podman[302017]: Dec 5 04:59:48 localhost podman[302017]: 2025-12-05 09:59:48.958612198 +0000 UTC 
m=+0.079252528 container create 7d97804fa7cae062bf03328484974aee37be6ef1ba64ff3f323608fa979ff69f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_raman, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux , name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, architecture=x86_64, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, version=7, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 04:59:48 localhost systemd[1]: Started libpod-conmon-7d97804fa7cae062bf03328484974aee37be6ef1ba64ff3f323608fa979ff69f.scope. Dec 5 04:59:49 localhost systemd[1]: Started libcrun container. Dec 5 04:59:49 localhost podman[302017]: 2025-12-05 09:59:48.925152165 +0000 UTC m=+0.045792515 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:59:49 localhost podman[302017]: 2025-12-05 09:59:49.029357181 +0000 UTC m=+0.149997521 container init 7d97804fa7cae062bf03328484974aee37be6ef1ba64ff3f323608fa979ff69f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_raman, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, release=1763362218, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, distribution-scope=public) Dec 5 04:59:49 localhost podman[302017]: 2025-12-05 09:59:49.038451951 +0000 UTC m=+0.159092291 container start 7d97804fa7cae062bf03328484974aee37be6ef1ba64ff3f323608fa979ff69f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_raman, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat 
Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1763362218, version=7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph) Dec 5 04:59:49 localhost podman[302017]: 2025-12-05 09:59:49.038849004 +0000 UTC m=+0.159489334 container attach 7d97804fa7cae062bf03328484974aee37be6ef1ba64ff3f323608fa979ff69f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_raman, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=1763362218, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z) Dec 5 04:59:49 localhost interesting_raman[302033]: 167 167 Dec 5 04:59:49 localhost systemd[1]: libpod-7d97804fa7cae062bf03328484974aee37be6ef1ba64ff3f323608fa979ff69f.scope: Deactivated successfully. 
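
The "auth get-or-create" payloads above carry caps as a flat list — ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"] — alternating daemon-type keys and capability strings rather than a mapping. A sketch that builds the same payload for the mds entity seen in the log:

    # Build an auth get-or-create payload in the flat
    # [type, capstring, type, capstring, ...] form dispatched above.
    import json

    caps = {
        "mon": "profile mds",
        "osd": "allow rw tag cephfs *=*",
        "mds": "allow",
    }
    cmd = {
        "prefix": "auth get-or-create",
        "entity": "mds.mds.np0005546419.rweotn",  # entity from the log
        "caps": [item for pair in caps.items() for item in pair],
    }
    print(json.dumps(cmd))
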
Dec 5 04:59:49 localhost podman[302017]: 2025-12-05 09:59:49.043035883 +0000 UTC m=+0.163676243 container died 7d97804fa7cae062bf03328484974aee37be6ef1ba64ff3f323608fa979ff69f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_raman, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Guillaume Abrioux , architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=) Dec 5 04:59:49 localhost podman[302038]: 2025-12-05 09:59:49.145937619 +0000 UTC m=+0.091978969 container remove 7d97804fa7cae062bf03328484974aee37be6ef1ba64ff3f323608fa979ff69f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_raman, maintainer=Guillaume Abrioux , ceph=True, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7) Dec 5 04:59:49 localhost systemd[1]: libpod-conmon-7d97804fa7cae062bf03328484974aee37be6ef1ba64ff3f323608fa979ff69f.scope: Deactivated successfully. 
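
Note that podman's journal lines are not emitted in event order: for interesting_raman the "image pull" record (m=+0.045) is printed after "container create" (m=+0.079) because events are flushed late. The m=+... field is a monotonic offset from that podman process's start, so within a single podman[PID] it recovers the true sequence:

    # Restore true event order for one podman process by sorting on the
    # monotonic m=+ offset; offsets are only comparable within one podman[PID].
    import re

    lines = [
        "2025-12-05 09:59:48.958612198 +0000 UTC m=+0.079252528 container create ...",
        "2025-12-05 09:59:48.925152165 +0000 UTC m=+0.045792515 image pull ...",
        "2025-12-05 09:59:49.029357181 +0000 UTC m=+0.149997521 container init ...",
    ]

    def monotonic(line):
        return float(re.search(r"m=\+([0-9.]+)", line).group(1))

    for line in sorted(lines, key=monotonic):
        print(line)  # pull, then create, then init
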
Dec 5 04:59:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 04:59:49 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 04:59:49 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:49 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:49 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:49 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)... Dec 5 04:59:49 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:59:49 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:59:49 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain Dec 5 04:59:49 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:49 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 5 04:59:49 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:59:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:59:49 localhost podman[302106]: Dec 5 04:59:49 localhost podman[302106]: 2025-12-05 09:59:49.854819187 +0000 UTC m=+0.070744155 container create 31afa2a2d2d027acd65791978ec74f8f6cd6a9b2690c2a058eae6f581199945a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_proskuriakova, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, name=rhceph, ceph=True, distribution-scope=public, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True) Dec 5 04:59:49 localhost podman[239519]: time="2025-12-05T09:59:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 04:59:49 localhost systemd[1]: Started libpod-conmon-31afa2a2d2d027acd65791978ec74f8f6cd6a9b2690c2a058eae6f581199945a.scope. Dec 5 04:59:49 localhost systemd[1]: Started libcrun container. Dec 5 04:59:49 localhost podman[302106]: 2025-12-05 09:59:49.922548757 +0000 UTC m=+0.138473715 container init 31afa2a2d2d027acd65791978ec74f8f6cd6a9b2690c2a058eae6f581199945a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_proskuriakova, release=1763362218, name=rhceph, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=) Dec 5 04:59:49 localhost podman[302106]: 2025-12-05 09:59:49.829425712 +0000 UTC m=+0.045350710 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 04:59:49 localhost focused_proskuriakova[302121]: 167 167 Dec 5 04:59:49 localhost podman[302106]: 2025-12-05 09:59:49.935943781 +0000 UTC m=+0.151868739 container start 31afa2a2d2d027acd65791978ec74f8f6cd6a9b2690c2a058eae6f581199945a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_proskuriakova, name=rhceph, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, release=1763362218, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat 
Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git) Dec 5 04:59:49 localhost podman[302106]: 2025-12-05 09:59:49.936147317 +0000 UTC m=+0.152072275 container attach 31afa2a2d2d027acd65791978ec74f8f6cd6a9b2690c2a058eae6f581199945a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_proskuriakova, release=1763362218, name=rhceph, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7) Dec 5 04:59:49 localhost systemd[1]: libpod-31afa2a2d2d027acd65791978ec74f8f6cd6a9b2690c2a058eae6f581199945a.scope: Deactivated successfully. Dec 5 04:59:49 localhost podman[239519]: @ - - [05/Dec/2025:09:59:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156077 "" "Go-http-client/1.1" Dec 5 04:59:49 localhost podman[302106]: 2025-12-05 09:59:49.989147743 +0000 UTC m=+0.205072711 container died 31afa2a2d2d027acd65791978ec74f8f6cd6a9b2690c2a058eae6f581199945a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_proskuriakova, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public) Dec 5 04:59:50 localhost systemd[1]: var-lib-containers-storage-overlay-d01ff92b7c1298c9d2390e0994c6c1e56eea5a5b9d0c36067f4d7e45f7714bc7-merged.mount: Deactivated successfully. 
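The `GET /v4.9.3/libpod/containers/json?...` entries above are the podman service (pid 239519) answering a client on its API socket, here the prometheus-podman-exporter, whose config mounts `/run/podman/podman.sock`. A hypothetical equivalent query over that socket, assuming the rootful socket path taken from the exporter's volume list and a libpod response whose entries carry `Id`, `Names`, and `State` fields:

```python
import http.client, json, socket

# Sketch (assumption: rootful podman socket at /run/podman/podman.sock, the
# path mounted into podman_exporter above): issue the same libpod query that
# produced the "GET /v4.9.3/libpod/containers/json?all=true..." access record.
class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")  # Host header only; real transport is the socket
        self.path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for c in json.loads(resp.read()):
    print(c["Id"][:12], c.get("Names"), c.get("State"))
```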
Dec 5 04:59:50 localhost podman[239519]: @ - - [05/Dec/2025:09:59:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19056 "" "Go-http-client/1.1" Dec 5 04:59:50 localhost systemd[1]: var-lib-containers-storage-overlay-ec130e2fac92982a7d376f5c8467815fcb8b99d9bec6c397e9d4259186b81c74-merged.mount: Deactivated successfully. Dec 5 04:59:50 localhost podman[302126]: 2025-12-05 09:59:50.100090206 +0000 UTC m=+0.151059353 container remove 31afa2a2d2d027acd65791978ec74f8f6cd6a9b2690c2a058eae6f581199945a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_proskuriakova, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, ceph=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, release=1763362218, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, name=rhceph, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True) Dec 5 04:59:50 localhost systemd[1]: libpod-conmon-31afa2a2d2d027acd65791978ec74f8f6cd6a9b2690c2a058eae6f581199945a.scope: Deactivated successfully. Dec 5 04:59:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 04:59:50 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0. 
Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.179630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928790179716, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 891, "num_deletes": 252, "total_data_size": 1118338, "memory_usage": 1135624, "flush_reason": "Manual Compaction"} Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928790188169, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 780071, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17650, "largest_seqno": 18536, "table_properties": {"data_size": 775749, "index_size": 1857, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11891, "raw_average_key_size": 21, "raw_value_size": 766300, "raw_average_value_size": 1413, "num_data_blocks": 79, "num_entries": 542, "num_filter_entries": 542, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928770, "oldest_key_time": 1764928770, "file_creation_time": 1764928790, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 8573 microseconds, and 3563 cpu microseconds. Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.188212) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 780071 bytes OK Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.188232) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.189878) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.189896) EVENT_LOG_v1 {"time_micros": 1764928790189890, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.189917) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1113529, prev total WAL file size 1113529, number of live WAL files 2. Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.190580) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end) Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(761KB)], [27(17MB)] Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928790190652, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 19576581, "oldest_snapshot_seqno": -1} Dec 5 04:59:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 04:59:50 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 5 04:59:50 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 11030 keys, 15486850 bytes, temperature: kUnknown Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 
{"time_micros": 1764928790267389, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15486850, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15424575, "index_size": 33630, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27589, "raw_key_size": 297780, "raw_average_key_size": 26, "raw_value_size": 15236895, "raw_average_value_size": 1381, "num_data_blocks": 1266, "num_entries": 11030, "num_filter_entries": 11030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764928790, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.267662) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15486850 bytes Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.269858) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 254.8 rd, 201.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 17.9 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(44.9) write-amplify(19.9) OK, records in: 11563, records dropped: 533 output_compression: NoCompression Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.269896) EVENT_LOG_v1 {"time_micros": 1764928790269870, "job": 14, "event": "compaction_finished", "compaction_time_micros": 76831, "compaction_time_cpu_micros": 34241, "output_level": 6, "num_output_files": 1, "total_output_size": 15486850, "num_input_records": 11563, "num_output_records": 11030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928790270077, "job": 14, "event": "table_file_deletion", "file_number": 29} Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 
04:59:50 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928790272040, "job": 14, "event": "table_file_deletion", "file_number": 27} Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.190483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.272155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.272162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.272166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.272169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:59:50 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-09:59:50.272172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 04:59:50 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)... Dec 5 04:59:50 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:59:50 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:59:50 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain Dec 5 04:59:50 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:50 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:50 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:50 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:50 localhost nova_compute[280228]: 2025-12-05 09:59:50.364 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:59:51 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:59:51 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:51 localhost 
ceph-mon[292820]: Reconfiguring crash.np0005546420 (monmap changed)... Dec 5 04:59:51 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain Dec 5 04:59:51 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:51 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:51 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 5 04:59:52 localhost ceph-mon[292820]: Reconfiguring osd.1 (monmap changed)... Dec 5 04:59:52 localhost ceph-mon[292820]: Reconfiguring daemon osd.1 on np0005546420.localdomain Dec 5 04:59:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:59:52 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:59:52 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:53 localhost nova_compute[280228]: 2025-12-05 09:59:53.233 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:59:53 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:59:53 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:53 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:53 localhost ceph-mon[292820]: Reconfiguring osd.4 (monmap changed)... 
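The RocksDB lines above document a full manual-compaction cycle inside the mon store: flush job 13 writes L0 table #29 (780071 bytes), compaction job 14 merges it with L6 table #27 into table #30, and the WAL (000025.log) and input SSTs are then deleted. The `EVENT_LOG_v1 {...}` payloads are single-line JSON, so they can be summarized directly; a sketch under that one-record-per-line assumption, using the field names visible above:

```python
import json, re, sys

# Sketch: the "EVENT_LOG_v1 {...}" payloads above are one-line JSON objects.
# Extract them and summarize compactions; field names match the log above
# (job 14: 11563 records in, 11030 out, 15486850 bytes to L6 in 76831 us).
EV = re.compile(r"EVENT_LOG_v1 (\{.*\})")

for line in sys.stdin:
    m = EV.search(line)
    if not m:
        continue
    try:
        ev = json.loads(m.group(1))
    except ValueError:  # line held more than one record; skip it
        continue
    if ev.get("event") == "compaction_finished":
        print(f"job {ev['job']}: {ev['num_input_records']} in, "
              f"{ev['num_output_records']} out, "
              f"{ev['total_output_size']} bytes to L{ev['output_level']} "
              f"in {ev['compaction_time_micros']} us")
```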
Dec 5 04:59:53 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 5 04:59:53 localhost ceph-mon[292820]: Reconfiguring daemon osd.4 on np0005546420.localdomain Dec 5 04:59:53 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:53 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Dec 5 04:59:53 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:59:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:59:54 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:59:54 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 5 04:59:54 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:59:54 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:54 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)... 
Dec 5 04:59:54 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:59:54 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 04:59:54 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain Dec 5 04:59:54 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:54 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:54 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:59:54 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 04:59:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 04:59:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 04:59:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 04:59:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 5 04:59:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:55 localhost nova_compute[280228]: 2025-12-05 09:59:55.411 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:55 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)... 
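The repeating pattern above is cephadm's post-monmap-change sweep: for each daemon, the active mgr (mgr.np0005546420.aoeylc) issues `config-key set` for the host record, an `auth get`/`auth get-or-create` for the daemon's keyring, logs `Reconfiguring <daemon> (monmap changed)...`, then `Reconfiguring daemon <daemon> on <host>`. A sketch that tallies those reconfigurations per host from lines shaped like the ones above:

```python
import re, sys
from collections import Counter

# Sketch: count cephadm reconfigurations per host from lines like
# "Reconfiguring daemon osd.4 on np0005546420.localdomain" seen above.
RECONF = re.compile(r"Reconfiguring daemon (\S+) on (\S+)")

per_host = Counter()
for line in sys.stdin:
    m = RECONF.search(line)
    if m:
        per_host[m.group(2)] += 1
        print(m.group(1), "->", m.group(2))
print(dict(per_host))
```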
Dec 5 04:59:55 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain Dec 5 04:59:55 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:55 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:55 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:55 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 04:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 04:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 04:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 04:59:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 04:59:56 localhost podman[302142]: 2025-12-05 09:59:56.184113131 +0000 UTC m=+0.069799695 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 04:59:56 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 04:59:56 localhost podman[302143]: 2025-12-05 09:59:56.20349302 +0000 UTC m=+0.086355987 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent) Dec 5 04:59:56 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:56 localhost podman[302142]: 2025-12-05 09:59:56.219727691 +0000 UTC m=+0.105414255 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 04:59:56 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
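Each healthcheck above runs as a transient systemd unit: `Started /usr/bin/podman healthcheck run <ID>`, a `container health_status ... health_status=healthy` record, `container exec_died`, then `<ID>.service: Deactivated successfully`. A sketch that pairs the start and deactivation records to time each run, assuming one record per line and the `Dec 5 04:59:56` timestamp format used throughout this journal (the year defaults to 1900 when absent, which is harmless for durations):

```python
import re, sys
from datetime import datetime

# Sketch: pair transient healthcheck units, e.g.
#   systemd[1]: Started /usr/bin/podman healthcheck run <ID>.
#   systemd[1]: <ID>.service: Deactivated successfully.
# and report how long each run took.
TS = re.compile(r"^(\w{3}) +(\d+) (\d{2}:\d{2}:\d{2})")
START = re.compile(r"Started /usr/bin/podman healthcheck run ([0-9a-f]{64})")
DONE = re.compile(r"([0-9a-f]{64})\.service: Deactivated successfully")

def stamp(line):
    m = TS.match(line)
    return datetime.strptime(" ".join(m.groups()), "%b %d %H:%M:%S") if m else None

started = {}
for line in sys.stdin:
    if (m := START.search(line)):
        started[m.group(1)] = stamp(line)
    elif (m := DONE.search(line)) and m.group(1) in started:
        t0, t1 = started.pop(m.group(1)), stamp(line)
        if t0 and t1:
            print(m.group(1)[:12], (t1 - t0).total_seconds(), "s")
```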
Dec 5 04:59:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Dec 5 04:59:56 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:56 localhost podman[302144]: 2025-12-05 09:59:56.26440271 +0000 UTC m=+0.143938204 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 04:59:56 localhost podman[302143]: 2025-12-05 09:59:56.285644675 +0000 UTC m=+0.168507672 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Dec 5 04:59:56 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 04:59:56 localhost podman[302144]: 2025-12-05 09:59:56.299284346 +0000 UTC m=+0.178819800 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 5 04:59:56 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 04:59:56 localhost ceph-mon[292820]: Reconfiguring crash.np0005546421 (monmap changed)... 
Dec 5 04:59:56 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain Dec 5 04:59:56 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:56 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:56 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 5 04:59:56 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:56 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 5 04:59:57 localhost openstack_network_exporter[241668]: ERROR 09:59:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:59:57 localhost openstack_network_exporter[241668]: ERROR 09:59:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 04:59:57 localhost openstack_network_exporter[241668]: ERROR 09:59:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 04:59:57 localhost openstack_network_exporter[241668]: ERROR 09:59:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 04:59:57 localhost openstack_network_exporter[241668]: Dec 5 04:59:57 localhost openstack_network_exporter[241668]: ERROR 09:59:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 04:59:57 localhost openstack_network_exporter[241668]: Dec 5 04:59:57 localhost ceph-mon[292820]: Reconfiguring osd.2 (monmap changed)... 
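The audit-channel lines embed each mon command as JSON after `cmd=`, e.g. `cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch`. Tallying the `prefix` values shows which commands dominate the mgr's traffic during this sweep; a sketch, assuming the payload contains no nested braces (true for every command above, whose only nesting is the `caps` array):

```python
import json, re, sys
from collections import Counter

# Sketch: tally mon command prefixes from audit lines like
#   cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
CMD = re.compile(r"cmd=(\{.*?\}) : dispatch")

prefixes = Counter()
for line in sys.stdin:
    for payload in CMD.findall(line):
        try:
            prefixes[json.loads(payload)["prefix"]] += 1
        except (ValueError, KeyError):
            pass  # not valid JSON or no prefix; ignore
print(prefixes.most_common())
```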
Dec 5 04:59:57 localhost ceph-mon[292820]: Reconfiguring daemon osd.2 on np0005546421.localdomain Dec 5 04:59:57 localhost ceph-mon[292820]: Deploying daemon mon.np0005546421 on np0005546421.localdomain Dec 5 04:59:58 localhost nova_compute[280228]: 2025-12-05 09:59:58.235 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 04:59:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 04:59:59 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Dec 5 04:59:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Dec 5 04:59:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 04:59:59 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 04:59:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e14 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Dec 5 04:59:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader).monmap v14 adding/updating np0005546421 at [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to monitor cluster Dec 5 04:59:59 localhost ceph-mgr[286454]: ms_deliver_dispatch: unhandled message 0x5574796f5080 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Dec 5 04:59:59 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : mon.np0005546419 calling monitor election Dec 5 04:59:59 localhost ceph-mon[292820]: paxos.0).electionLogic(64) init, last seen epoch 64 Dec 5 04:59:59 localhost ceph-mon[292820]: mon.np0005546419@0(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 05:00:00 localhost nova_compute[280228]: 2025-12-05 10:00:00.414 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:00:03 localhost ceph-mon[292820]: mon.np0005546419@0(electing) e15 handle_auth_request failed to assign global_id Dec 5 05:00:03 localhost nova_compute[280228]: 2025-12-05 10:00:03.236 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:00:03 localhost ceph-mon[292820]: mon.np0005546419@0(electing) e15 handle_auth_request failed to assign global_id Dec 5 05:00:03 localhost ceph-mon[292820]: mon.np0005546419@0(electing) e15 handle_auth_request failed to assign global_id Dec 5 05:00:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:00:03.907 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:00:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:00:03.908 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 
0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:00:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:00:03.909 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:00:04 localhost ceph-mds[283215]: mds.beacon.mds.np0005546419.rweotn missed beacon ack from the monitors Dec 5 05:00:04 localhost ceph-mon[292820]: mon.np0005546419@0(electing) e15 handle_auth_request failed to assign global_id Dec 5 05:00:04 localhost ceph-mon[292820]: paxos.0).electionLogic(65) init, last seen epoch 65, mid-election, bumping Dec 5 05:00:04 localhost ceph-mon[292820]: mon.np0005546419@0(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : mon.np0005546419 is new leader, mons np0005546419,np0005546420,np0005546421 in quorum (ranks 0,1,2) Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : monmap epoch 15 Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : fsid 79feddb1-4bfc-557f-83b9-0d57c9f66c1b Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : last_changed 2025-12-05T09:59:59.724612+0000 Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : created 2025-12-05T07:49:07.934655+0000 Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : election_strategy: 1 Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005546419 Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005546420 Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005546421 Dec 5 05:00:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005546420.eqhasr=up:active} 2 up:standby Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e35: np0005546420.aoeylc(active, since 83s), standbys: np0005546421.sukfea, np0005546418.garyvl, np0005546419.zhsnqq Dec 5 05:00:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : overall HEALTH_OK Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 05:00:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 05:00:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 
full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:00:04 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 05:00:04 localhost ceph-mon[292820]: mon.np0005546419 calling monitor election Dec 5 05:00:04 localhost ceph-mon[292820]: mon.np0005546420 calling monitor election Dec 5 05:00:04 localhost ceph-mon[292820]: mon.np0005546421 calling monitor election Dec 5 05:00:04 localhost ceph-mon[292820]: mon.np0005546419 is new leader, mons np0005546419,np0005546420,np0005546421 in quorum (ranks 0,1,2) Dec 5 05:00:04 localhost ceph-mon[292820]: overall HEALTH_OK Dec 5 05:00:04 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 05:00:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:00:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:00:05 localhost podman[302199]: 2025-12-05 10:00:05.215516979 +0000 UTC m=+0.093051303 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:00:05 localhost podman[302199]: 2025-12-05 10:00:05.224600519 +0000 UTC m=+0.102134823 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', 
'--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:00:05 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:00:05 localhost podman[302198]: 2025-12-05 10:00:05.315612048 +0000 UTC m=+0.195511555 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 05:00:05 localhost podman[302198]: 2025-12-05 10:00:05.383904675 +0000 UTC m=+0.263804132 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, 
Dec 5 05:00:05 localhost podman[302198]: 2025-12-05 10:00:05.315612048 +0000 UTC m=+0.195511555 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 5 05:00:05 localhost podman[302198]: 2025-12-05 10:00:05.383904675 +0000 UTC m=+0.263804132 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 05:00:05 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:00:05 localhost nova_compute[280228]: 2025-12-05 10:00:05.416 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:05 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:05 localhost ceph-mon[292820]: Reconfiguring osd.5 (monmap changed)...
Dec 5 05:00:05 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 5 05:00:05 localhost ceph-mon[292820]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 5 05:00:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:06 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:06 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 5 05:00:06 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 05:00:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:00:06 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3768139666' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:00:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:00:06 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3768139666' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:00:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:07 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:07 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:07 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546421.tuudjq (monmap changed)...
Dec 5 05:00:07 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 05:00:07 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546421.tuudjq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 05:00:07 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546421.tuudjq on np0005546421.localdomain
Dec 5 05:00:07 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 5 05:00:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 05:00:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:08 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:08 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:08 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:08 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546421.sukfea (monmap changed)...
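The auth get-or-create commands above are dispatched by the cephadm mgr module through the monitor command interface. For reference, the same command can be issued from Python through librados' mon_command; a sketch only, assuming python3-rados is installed and a keyring with sufficient caps is available, with the entity and caps copied verbatim from the log:

# mon_command sketch: replay the "auth get-or-create" seen in the audit log.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
cmd = json.dumps({
    "prefix": "auth get-or-create",
    "entity": "mds.mds.np0005546421.tuudjq",
    "caps": ["mon", "profile mds",
             "osd", "allow rw tag cephfs *=*",
             "mds", "allow"],
})
# mon_command returns (return code, output buffer, status string).
ret, outbuf, outs = cluster.mon_command(cmd, b"")
print(ret, outbuf.decode(), outs)
cluster.shutdown()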
Dec 5 05:00:08 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 05:00:08 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546421.sukfea", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 05:00:08 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546421.sukfea on np0005546421.localdomain
Dec 5 05:00:08 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:08 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:08 localhost nova_compute[280228]: 2025-12-05 10:00:08.264 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:08 localhost nova_compute[280228]: 2025-12-05 10:00:08.539 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:00:08 localhost nova_compute[280228]: 2025-12-05 10:00:08.540 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:00:09 localhost nova_compute[280228]: 2025-12-05 10:00:09.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:00:09 localhost nova_compute[280228]: 2025-12-05 10:00:09.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 5 05:00:09 localhost nova_compute[280228]: 2025-12-05 10:00:09.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 5 05:00:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:00:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:10 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:10 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:10 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:10 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:10 localhost nova_compute[280228]: 2025-12-05 10:00:10.455 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:11 localhost nova_compute[280228]: 2025-12-05 10:00:11.594 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 5 05:00:11 localhost nova_compute[280228]: 2025-12-05 10:00:11.594 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 5 05:00:11 localhost nova_compute[280228]: 2025-12-05 10:00:11.595 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 5 05:00:11 localhost nova_compute[280228]: 2025-12-05 10:00:11.595 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 5 05:00:11 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
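The nova_compute lines above (Acquiring/Acquired lock "refresh_cache-...") come from oslo.concurrency's lockutils, which serializes the per-instance info-cache refresh. A minimal sketch of the same pattern, illustrative only, with the lock name mirroring nova's refresh_cache-<instance uuid> convention:

# lockutils sketch: the pattern behind the "Acquiring lock"/"Acquired lock"
# debug lines above.
from oslo_concurrency import lockutils

instance_uuid = "96a47a1c-57c7-4bb1-aecc-33db976db8c7"

with lockutils.lock(f"refresh_cache-{instance_uuid}"):
    # Only one thread in this process may refresh the network info
    # cache for this instance at a time.
    pass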
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:00:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 05:00:12 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 05:00:12 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 05:00:12 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 05:00:12 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 05:00:12 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 05:00:12 localhost ceph-mon[292820]: Reconfig service osd.default_drive_group
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc'
Dec 5 05:00:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
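cephadm persists per-host inventory under config-key entries named mgr/cephadm/host.<hostname>, which is what the config-key set flood above is writing. A sketch for reading one such key back, assuming the ceph CLI is available on the host and that the stored value is JSON, as cephadm's host metadata is:

# config-key readback sketch; the key name is taken verbatim from the log.
import json
import subprocess

key = "mgr/cephadm/host.np0005546421.localdomain"
out = subprocess.run(
    ["ceph", "config-key", "get", key],
    check=True, capture_output=True, text=True,
).stdout
print(json.loads(out))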
get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.949 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.950 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.955 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cf11d647-12f2-40e1-b283-f2390e7137fb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:00:12.950440', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3101660a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.124723826, 'message_signature': '4a3249463c3041b57c4f63456fc56a393e912ea8297a5ca6cfb83441a46c50ff'}]}, 'timestamp': '2025-12-05 10:00:12.955598', '_unique_id': '5dcaeca4435247aeb245d1fac178ab43'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.956 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.957 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.957 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
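The traceback above is the notification path failing, not the polling itself: the socket-level ConnectionRefusedError from amqp is re-raised by kombu as kombu.exceptions.OperationalError while oslo.messaging retries the connection. A standalone repro sketch, assuming nothing is listening on port 5672, matching the failure in the log:

# kombu repro sketch: reproduce the OperationalError wrapping seen above.
import kombu
from kombu.exceptions import OperationalError

conn = kombu.Connection("amqp://guest:guest@127.0.0.1:5672//",
                        connect_timeout=2)
try:
    # ensure_connection retries, then re-raises the socket error as
    # an OperationalError, exactly the chain shown in the traceback.
    conn.ensure_connection(max_retries=1)
except OperationalError as exc:
    print("notification transport down:", exc)  # [Errno 111] Connection refused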
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.957 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.957 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6b93ae83-0c80-4636-9f10-126c65fea6e2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:00:12.957737', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3101cf5a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.124723826, 'message_signature': 'be9587323b39bf9210a21ca9f3aead8859f5194928ab89ea4cc1852dc7642ea6'}]}, 'timestamp': '2025-12-05 10:00:12.958278', '_unique_id': '006ae1be3bf646bea8395c86675a86cd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.958 12 ERROR oslo_messaging.notify.messaging
Payload={'message_id': '24628a36-550e-4594-ad6c-32ea8c5b481d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:00:12.959750', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '3102167c-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.124723826, 'message_signature': '21ca3a4d72ce7598530176e394e28f82f0a9a0283d0f387e339d0b97f843f49b'}]}, 'timestamp': '2025-12-05 10:00:12.960078', '_unique_id': 'b4d99272aa404415a732895d44993182'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.960 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.961 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.971 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.972 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
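Every failure in this window shares one root cause: the TCP connect to the RabbitMQ broker is refused ([Errno 111], nothing listening on the port), which kombu re-raises as OperationalError and oslo.messaging then reports as a failed notification send. A minimal sketch for confirming this from the compute node, assuming a hypothetical broker endpoint controller-0:5672 (the real transport_url lives in the agent's configuration, not in this log):

import socket

from kombu import Connection
from kombu.exceptions import OperationalError

BROKER_URL = "amqp://guest:guest@controller-0:5672//"  # hypothetical URL

def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    # Raw TCP probe: [Errno 111] here means no broker is listening at all.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"TCP connect failed: {exc}")
        return False

if check_tcp("controller-0", 5672):
    try:
        # Same call chain as the traceback above: ensure_connection()
        # retries, then re-raises the socket error as OperationalError.
        Connection(BROKER_URL).ensure_connection(max_retries=1)
        print("AMQP connection established")
    except OperationalError as exc:
        print(f"AMQP handshake failed: {exc}")

If the TCP probe already fails, the problem is the broker or the network path to it (service down, wrong host/port, firewall), not AMQP credentials.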
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.973 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '94503b80-7fba-41e9-b4a3-aecc75e2f50a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:00:12.961619', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3103f42e-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.135907661, 'message_signature': '7d269bf8cb2bde253ad970b1de0e85193993e5701ce047ee112114ed88995bd3'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:00:12.961619', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3104014e-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.135907661, 'message_signature': 'be68cab3f67cc6d81ddfbce0fceb72c3997b0279b38c2e2a000a1606fa83c1ff'}]}, 'timestamp': '2025-12-05 10:00:12.972618', '_unique_id': '7e46fab2179249258cf4f3aa65dae742'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.974 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.998 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:00:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:12.998 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
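The Payload dumped with each failed send is the telemetry.polling event whose 'samples' list is exactly what would have reached the notifications queue. The dumps are Python reprs, not JSON, so they can be recovered from the log with ast.literal_eval. A sketch, using a trimmed stand-in for one of the payloads above:

import ast

# Stand-in for the dict text captured from a "Could not send notification"
# record; in practice you would paste the full Payload={...} body here.
log_payload_text = ("{'payload': {'samples': [{'resource_id': "
                    "'96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', "
                    "'counter_name': 'disk.device.allocation', "
                    "'counter_volume': 1073741824, 'counter_unit': 'B'}]}}")

event = ast.literal_eval(log_payload_text)  # safe for repr-style dumps

for sample in event['payload']['samples']:
    print(sample['resource_id'],
          sample['counter_name'],
          sample['counter_volume'],
          sample['counter_unit'])

This is one way to salvage samples from the log after the fact, since the agent itself drops the notification once the send fails.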
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.000 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'aa09ce41-1917-4b35-9d96-23eab1ddeb34', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:00:12.974509', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '310802e4-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.14882487, 'message_signature': 'a9d863326773f59bd2c94c922ffaa239309ee7d77e9e822f123108354adccef7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:00:12.974509', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3108102c-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.14882487, 'message_signature': 'c98b692c08dcd88fdf903226a72c59b67a1f973fbc51b725b2fabdd6617a67d7'}]}, 'timestamp': '2025-12-05 10:00:12.999217', '_unique_id': '8464b46f902e4ec692e4537b60607b15'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.001 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.001 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.018 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 13690000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
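The cpu counter polled above is cumulative guest CPU time in nanoseconds (13690000000 ns, about 13.69 s since boot), not a utilization figure. Utilization is derived downstream from two successive polls. A sketch of that arithmetic, where the first point comes from the log and the second poll's values are hypothetical:

# Derive CPU utilization % from two cumulative 'cpu' samples.
cpu_ns_t0, t0 = 13_690_000_000, 11928.19  # logged volume / monotonic_time
cpu_ns_t1, t1 = 13_990_000_000, 11958.19  # hypothetical next poll (+30 s)
vcpus = 1                                 # from the flavor metadata above

# Fraction of available CPU time consumed between the two polls.
util_pct = (cpu_ns_t1 - cpu_ns_t0) / ((t1 - t0) * 1e9 * vcpus) * 100
print(f"cpu_util ~ {util_pct:.1f}%")      # ~1.0% for these numbers

The same delta-over-interval logic applies to the other cumulative counters in this log (disk.device.write.bytes, disk.device.read.latency).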
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.020 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '268f308e-92ef-49ba-9670-bac27666c8ee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 13690000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:00:13.001335', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '310b15ec-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.192738615, 'message_signature': '64438792014795ed8e177848987c41478b5a5520126213aec0f7e66007538dbb'}]}, 'timestamp': '2025-12-05 10:00:13.019080', '_unique_id': 'cdbf801c32ff4f9fbb78745e574df3c6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.020 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.021 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.022 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '74a340c5-2407-49c3-904f-57da10411f01', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:00:13.021018', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '310b7212-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.124723826, 'message_signature': '5ab37e1a659c70f7dc39830011c0ee0152e80f96148072d03cb671b0888e67a2'}]}, 'timestamp': '2025-12-05 10:00:13.021426', '_unique_id': 'a09275216b59482e91e6a6d196048715'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
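These sends all go through oslo.messaging's notifier on the 'notifications' topic with priority SAMPLE, which is why every record is logged by oslo_messaging.notify.messaging. A standalone sketch that exercises the same path, with a hypothetical transport URL (ceilometer reads its own from its config); with no broker listening it logs the same "Could not send notification" error rather than raising to the caller:

from oslo_config import cfg
import oslo_messaging

conf = cfg.ConfigOpts()
# Hypothetical URL; the agent's real one comes from its transport_url option.
transport = oslo_messaging.get_notification_transport(
    conf, url="rabbit://guest:guest@controller-0:5672/")
notifier = oslo_messaging.Notifier(
    transport, publisher_id="ceilometer.polling",
    driver="messagingv2", topics=["notifications"])

# Matches the logged records: event_type telemetry.polling, priority SAMPLE.
notifier.sample({}, event_type="telemetry.polling",
                payload={"samples": []})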
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.022 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.022 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.023 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7a104bd6-6109-430d-b36c-18ed1983dd8b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:00:13.022903', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '310bb92a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.14882487, 'message_signature': '182cdd1d41b885d02d9d42409831dd4e8e044782c99e526873b97a9284444f4a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:00:13.022903', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '310bc55a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.14882487, 'message_signature': '0d4af0a77123adc3510c7c271df7761a7d959d3ffa68694c6ba45d9353b782c5'}]}, 'timestamp': '2025-12-05 10:00:13.023506', '_unique_id': 'bc37af61f2a34a5bacdbede35a72fb9a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
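Each sample in these payloads carries a message_signature, an HMAC keyed with the deployment's telemetry secret that the receiving side uses to reject tampered samples. The authoritative implementation is ceilometer.publisher.utils; the sketch below only illustrates the general scheme (HMAC-SHA256 over the sample's fields, excluding the signature itself), and both the secret value and the exact field canonicalization are assumptions here:

import hashlib
import hmac

TELEMETRY_SECRET = b"change-me"  # hypothetical; configured in ceilometer.conf

def compute_signature(sample: dict) -> str:
    # Illustrative only: ceilometer's real canonicalization (recursive,
    # ordered key/value pairs) is defined in ceilometer.publisher.utils.
    digest = hmac.new(TELEMETRY_SECRET, digestmod=hashlib.sha256)
    for name, value in sorted(sample.items()):
        if name == 'message_signature':
            continue  # never include the signature field in its own input
        digest.update(f"{name}={value}".encode())
    return digest.hexdigest()

sample = {'counter_name': 'disk.device.read.latency',
          'counter_volume': 1657873269,
          'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda'}
print(compute_signature(sample))

A signature mismatch on the consumer side therefore usually means the telemetry secret differs between the polling agent and the consumer, not transport corruption.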
10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR 
oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:00:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.024 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.025 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.025 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.025 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f1d9552a-5e37-4068-bafe-6f0cf0c9df4a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:00:13.025122', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '310c1168-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.14882487, 'message_signature': '6c52a87228c46e6e0a4de18930dd2ce26d05152fde6d3e1de887a705e7b03121'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:00:13.025122', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '310c1c30-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.14882487, 'message_signature': '895a59977214392f9ceeca58dd9f36e8bb23f5c3bf212ea5a53657102b60fe26'}]}, 'timestamp': '2025-12-05 10:00:13.025728', '_unique_id': '85576aa5637542c1998b9b29c06132d6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:00:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.026 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.027 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.027 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.027 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3653264d-130a-49bb-b518-bb3398458c6e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:00:13.027552', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '310c6eba-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.124723826, 'message_signature': '103ee7ddbf241d326231b1e7fe279843e3010265994621de4e47cfe1aed955e8'}]}, 'timestamp': '2025-12-05 10:00:13.027885', '_unique_id': '549f1dbf1bdb495787f673dbbaa759e5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:00:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.028 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.029 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'fa376e55-102c-4baf-bd9d-1cfe7a3a158b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:00:13.029427', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '310cb80c-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.124723826, 'message_signature': '04ae6ea41304f3dbe5e4120b93734b980165172b0bee9f524da82cd7b3c8e357'}]}, 'timestamp': '2025-12-05 10:00:13.029738', '_unique_id': '987b22a44e3342f7993d16cac6fea5b2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.030 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.031 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.031 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8d25e342-aea5-426b-b7fd-c5d16febe4f4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:00:13.031215', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '310cff10-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.192738615, 'message_signature': 'b412e471000275e44187bd0e015d448c55f7e4d9395253a4f41237d73f792a1e'}]}, 'timestamp': '2025-12-05 10:00:13.031548', '_unique_id': '11e4d3e9bdbd4fe4af05d9b91dae99b3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:00:13.032 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 
05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.032 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4b574d18-97c5-47c9-99ef-ea31d5dd7304', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:00:13.032986', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '310d44b6-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.124723826, 'message_signature': '6e3e851f407ac8c01132bb4334d115402c5027bef60d06a1d715826181e3910a'}]}, 'timestamp': '2025-12-05 10:00:13.033366', '_unique_id': 'e35bf1bfddd14fa59849004187608949'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.033 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.034 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.035 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.035 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage 
volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bd60bc06-9c9e-49a5-9798-f4672c272b5c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:00:13.034993', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '310d9164-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.135907661, 'message_signature': 'cda1f0f7bc4bf18b00bdb4dfb9a936f7855eaf310664d5cf15be59a0226b7e7b'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:00:13.034993', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '310d9d80-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.135907661, 'message_signature': 'b8b7bbe573a48457a66cbe93c4cbe8a0627e43eaf583f643c3130dd58d955b90'}]}, 'timestamp': '2025-12-05 10:00:13.035596', '_unique_id': '1c0ccd23a0764294b9f135663f6941d5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 
12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.036 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.037 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.037 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c07ac4e2-cf21-4322-a52f-adec5b8984a6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:00:13.037710', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '310dfd2a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.124723826, 'message_signature': '39feef6d50c8210dc3a3f3a855a0fe792e759f031437f50aa8a5e13e36d91c04'}]}, 'timestamp': '2025-12-05 10:00:13.038111', '_unique_id': '7124fa4e398a44de837cfc630f31ec25'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:00:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.039 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.039 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b9ce7589-c919-4787-97ae-68e79efc1c0a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:00:13.039885', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '310e50ae-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.124723826, 'message_signature': 'afee432f257503dfc571b30dd347bd5e1a7a9341d1d9acc45a9b05abc2d6e303'}]}, 'timestamp': '2025-12-05 10:00:13.040203', '_unique_id': 'defabb9d3e97451293d02b3458c792ff'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:00:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.040 12 ERROR oslo_messaging.notify.messaging Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.041 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.041 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9fc8a383-9595-4226-b9b6-ae237a618e17', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:00:13.041763', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '310e9992-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.135907661, 'message_signature': 'ef7389df80ec187552a26ceee43f0ebddd17cd5d14a14f4108ee159752e517d8'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:00:13.041763', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '310ea3d8-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.135907661, 'message_signature': 'f48c4c995428e77cddbcd0ade2d82eeef33ff45468fd93a70d1085aef3330c82'}]}, 'timestamp': '2025-12-05 10:00:13.042329', '_unique_id': '98fca7a681464386837080c41b324b86'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.042 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.043 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.043 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
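Every failure in this stretch bottoms out in the same frame, self.sock.connect(sa) raising ConnectionRefusedError: [Errno 111], meaning nothing was accepting TCP connections on the broker endpoint at 10:00:13. The refusal can be confirmed independently of the telemetry stack with a plain stdlib probe; the host and port below are placeholders, since the transport_url is not shown in this excerpt:

    import socket

    # Placeholder endpoint: substitute the host/port from the transport_url
    # in ceilometer.conf (5672 is only the conventional AMQP port).
    try:
        socket.create_connection(("rabbitmq.example.com", 5672), timeout=3).close()
        print("broker port reachable")
    except ConnectionRefusedError as exc:
        print(f"refused, matching the log: {exc}")  # [Errno 111]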
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '20a2e404-c7fd-46f8-ae7b-fe94f06fcae3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:00:13.043730', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '310ee64a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.14882487, 'message_signature': '6b5ea5e25ee6efbd4d5af1272dc914396d7936469700fb61dc9433dd95627210'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:00:13.043730', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '310ef04a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.14882487, 'message_signature': '21175f428c57e596fa537297dc290bf92dfe35be260cab5327c33fd6c798f40e'}]}, 'timestamp': '2025-12-05 10:00:13.044282', '_unique_id': '8aaff4e7b9674d529d806b0ac7b022f4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.044 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.045 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.045 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.045 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
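The two chained tracebacks repeat verbatim for every sample batch: the inner one is the raw socket failure, and the outer one shows kombu's _reraise_as_library_errors context manager (visible in the frames above) converting it into kombu.exceptions.OperationalError before oslo.messaging logs it. A minimal sketch of that wrapping, assuming kombu is installed and using a placeholder broker URL:

    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Placeholder URL; the deployment's real one is oslo.messaging's
    # transport_url. max_retries=1 stops retry_over_time from looping.
    conn = Connection("amqp://guest:guest@rabbitmq.example.com:5672//")
    try:
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print(f"wrapped as in the log: {exc}")  # [Errno 111] Connection refused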
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8459d05f-f1f0-47d3-9400-8f52352a7869', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:00:13.045696', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '310f330c-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.14882487, 'message_signature': 'fbe4eec8fd0de7b342452fec4d8a89b09602d4de1d6eee71ff99e1568349f589'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:00:13.045696', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '310f3d2a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.14882487, 'message_signature': '5902cd1b1284eb1e2a733680f0cd6efc4c405a30d4634139c8a8a1d72e7df621'}]}, 'timestamp': '2025-12-05 10:00:13.046228', '_unique_id': '30c4352e5ef842e899b943392f41724c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.046 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.047 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.047 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
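Since the notifier never reaches the broker, every batch in the Payload={...} dumps is dropped rather than queued, but the dumps are Python dict literals, so the samples can still be recovered from the journal after the fact. A sketch using only the stdlib; the regex is tuned to the 'Payload={...}: kombu.exceptions' framing seen in these records:

    import ast
    import re

    PAYLOAD_RE = re.compile(r"Payload=(\{.*?\}): kombu\.exceptions", re.DOTALL)

    def recover_samples(log_text):
        """Yield the sample dicts embedded in 'Could not send notification' records."""
        for match in PAYLOAD_RE.finditer(log_text):
            payload = ast.literal_eval(match.group(1))  # literals only, so safe
            yield from payload["payload"]["samples"]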
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7b8d30b8-478f-4409-ae77-2b60e9f4087d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:00:13.047702', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '310f81ea-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.124723826, 'message_signature': '7beee7c557d06ac4938e3aadc58445b0b4bee69c6b6602ff16f5a5261a04b264'}]}, 'timestamp': '2025-12-05 10:00:13.048014', '_unique_id': '165532fd44db4a1e8c5af3ac6dd64c0f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.048 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.049 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.049 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.049 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.049 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
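For completeness, these records are emitted by oslo.messaging's notification path, which ceilometer's polling manager invokes once per sample batch; the 'priority': 'SAMPLE' field in the payloads corresponds to the notifier's sample() method. A rough sketch of that call path, not the agent's actual wiring: the URL is a placeholder and the driver name assumes the usual messagingv2 setting:

    from oslo_config import cfg
    import oslo_messaging

    # Placeholder URL; ceilometer reads its real transport_url from config.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url="rabbit://guest:guest@rabbitmq.example.com:5672/")
    notifier = oslo_messaging.Notifier(
        transport, publisher_id="ceilometer.polling",
        driver="messagingv2", topics=["notifications"])

    # While the broker refuses connections this does not raise; the driver
    # catches the error and logs "Could not send notification ..." as above.
    notifier.sample({}, event_type="telemetry.polling", payload={"samples": []})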
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '65505640-47d8-43f0-8f9a-995273cce990', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:00:13.049612', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '310fcc22-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.14882487, 'message_signature': 'def0ff59364a9428773344ee433628f0be378e501a4e788cd214c790eb721010'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:00:13.049612', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '310fd668-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 11928.14882487, 'message_signature': '915e1b2dec9164093ea29707d5ab9dead60375eadb175b58d95c6bb4fc3ad2d0'}]}, 'timestamp': '2025-12-05 10:00:13.050152', '_unique_id': 'b70c45a72087418f8d426147f540e1f6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:00:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:00:13.050 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.266 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:13 localhost podman[302704]:
Dec 5 05:00:13 localhost podman[302704]: 2025-12-05 10:00:13.430262631 +0000 UTC m=+0.060209179 container create 1fe99d37aa7d02f6a28770ac3506774d4bbf63d886fdfe822dae49a1378cfe4a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_shirley, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI,
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , release=1763362218, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main) Dec 5 05:00:13 localhost systemd[1]: Started libpod-conmon-1fe99d37aa7d02f6a28770ac3506774d4bbf63d886fdfe822dae49a1378cfe4a.scope. Dec 5 05:00:13 localhost systemd[1]: Started libcrun container. Dec 5 05:00:13 localhost podman[302704]: 2025-12-05 10:00:13.489034524 +0000 UTC m=+0.118981052 container init 1fe99d37aa7d02f6a28770ac3506774d4bbf63d886fdfe822dae49a1378cfe4a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_shirley, GIT_CLEAN=True, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, name=rhceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, ceph=True, vcs-type=git) Dec 5 05:00:13 localhost podman[302704]: 2025-12-05 10:00:13.501780238 +0000 UTC m=+0.131726766 container start 1fe99d37aa7d02f6a28770ac3506774d4bbf63d886fdfe822dae49a1378cfe4a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_shirley, io.buildah.version=1.41.4, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, name=rhceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph) Dec 5 05:00:13 localhost podman[302704]: 2025-12-05 10:00:13.502702156 +0000 UTC m=+0.132648704 container attach 1fe99d37aa7d02f6a28770ac3506774d4bbf63d886fdfe822dae49a1378cfe4a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=elated_shirley, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=1763362218, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, version=7, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 5 05:00:13 localhost elated_shirley[302719]: 167 167 Dec 5 05:00:13 localhost systemd[1]: libpod-1fe99d37aa7d02f6a28770ac3506774d4bbf63d886fdfe822dae49a1378cfe4a.scope: Deactivated successfully. Dec 5 05:00:13 localhost podman[302704]: 2025-12-05 10:00:13.505643417 +0000 UTC m=+0.135589955 container died 1fe99d37aa7d02f6a28770ac3506774d4bbf63d886fdfe822dae49a1378cfe4a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_shirley, build-date=2025-11-26T19:44:28Z, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , RELEASE=main) Dec 5 05:00:13 localhost podman[302704]: 2025-12-05 10:00:13.412139912 +0000 UTC m=+0.042086450 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 05:00:13 localhost podman[302724]: 2025-12-05 10:00:13.565423182 +0000 UTC m=+0.053948456 container remove 1fe99d37aa7d02f6a28770ac3506774d4bbf63d886fdfe822dae49a1378cfe4a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_shirley, GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux , architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., 
distribution-scope=public, GIT_BRANCH=main, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-type=git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main) Dec 5 05:00:13 localhost systemd[1]: libpod-conmon-1fe99d37aa7d02f6a28770ac3506774d4bbf63d886fdfe822dae49a1378cfe4a.scope: Deactivated successfully. Dec 5 05:00:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:00:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 05:00:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:00:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.678 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.692 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.692 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.692 
280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.693 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.693 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.693 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.694 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:00:13 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e36: np0005546420.aoeylc(active, since 92s), standbys: np0005546421.sukfea, np0005546418.garyvl, np0005546419.zhsnqq Dec 5 05:00:13 localhost ceph-mon[292820]: Reconfiguring crash.np0005546419 (monmap changed)... 
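
The ceilometer_agent_compute traceback above is a complete broker-outage chain: amqp's transport layer fails in sock.connect() with ConnectionRefusedError (errno 111), kombu's _reraise_as_library_errors() re-raises it as kombu.exceptions.OperationalError, and oslo.messaging's notifier gives up, so the dumped telemetry.polling samples are never published. A minimal sketch of the same wrapping, assuming an AMQP URL with nothing listening; the host, port, and credentials below are placeholders, not values from this log:

    # Sketch only: reproduce kombu's OperationalError wrapping of a refused
    # TCP connect. The broker URL is a placeholder.
    from kombu import Connection
    from kombu.exceptions import OperationalError

    conn = Connection('amqp://guest:guest@127.0.0.1:5672//', connect_timeout=2)
    try:
        # ensure_connection() retries via retry_over_time(); the final
        # ConnectionRefusedError (errno 111) from socket.connect() is
        # re-raised as OperationalError, matching the traceback above.
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print('broker unreachable:', exc)  # [Errno 111] Connection refused
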
Dec 5 05:00:13 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 05:00:13 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546419.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 5 05:00:13 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546419 on np0005546419.localdomain Dec 5 05:00:13 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 05:00:13 localhost ceph-mon[292820]: from='mgr.26606 ' entity='mgr.np0005546420.aoeylc' Dec 5 05:00:13 localhost ceph-mon[292820]: from='mgr.26606 172.18.0.107:0/3175631226' entity='mgr.np0005546420.aoeylc' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.801 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.801 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.801 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.801 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:00:13 localhost nova_compute[280228]: 2025-12-05 10:00:13.801 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:00:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mgr fail"} v 0) Dec 5 05:00:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='client.? 
172.18.0.200:0/464711851' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 5 05:00:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e89 do_prune osdmap full prune enabled Dec 5 05:00:13 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : Activating manager daemon np0005546421.sukfea Dec 5 05:00:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 e90: 6 total, 6 up, 6 in Dec 5 05:00:13 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e90: 6 total, 6 up, 6 in Dec 5 05:00:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/464711851' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 5 05:00:13 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e37: np0005546421.sukfea(active, starting, since 0.0513066s), standbys: np0005546418.garyvl, np0005546419.zhsnqq Dec 5 05:00:13 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : Manager daemon np0005546421.sukfea is now available Dec 5 05:00:13 localhost systemd-logind[760]: Session 70 logged out. Waiting for processes to exit. Dec 5 05:00:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"} v 0) Dec 5 05:00:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"} : dispatch Dec 5 05:00:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"}]': finished Dec 5 05:00:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"} v 0) Dec 5 05:00:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"} : dispatch Dec 5 05:00:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"}]': finished Dec 5 05:00:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546421.sukfea/mirror_snapshot_schedule"} v 0) Dec 5 05:00:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546421.sukfea/mirror_snapshot_schedule"} : dispatch Dec 5 05:00:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546421.sukfea/trash_purge_schedule"} v 0) Dec 5 05:00:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546421.sukfea/trash_purge_schedule"} : dispatch Dec 5 05:00:14 localhost sshd[302812]: main: sshd: ssh-rsa algorithm is disabled Dec 5 05:00:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": 
"json"} v 0) Dec 5 05:00:14 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2624193339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:00:14 localhost nova_compute[280228]: 2025-12-05 10:00:14.311 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:00:14 localhost podman[302813]: Dec 5 05:00:14 localhost podman[302813]: 2025-12-05 10:00:14.351673597 +0000 UTC m=+0.100726759 container create b5e89b1796c41957e6368da0ed88cc167b4dbd722857962d23d5ddeee058a4f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_carson, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.41.4, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, version=7, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 5 05:00:14 localhost systemd-logind[760]: New session 71 of user ceph-admin. Dec 5 05:00:14 localhost nova_compute[280228]: 2025-12-05 10:00:14.377 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:00:14 localhost systemd[1]: Started Session 71 of User ceph-admin. Dec 5 05:00:14 localhost nova_compute[280228]: 2025-12-05 10:00:14.377 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:00:14 localhost systemd[1]: Started libpod-conmon-b5e89b1796c41957e6368da0ed88cc167b4dbd722857962d23d5ddeee058a4f5.scope. Dec 5 05:00:14 localhost podman[302813]: 2025-12-05 10:00:14.311589891 +0000 UTC m=+0.060643113 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 05:00:14 localhost systemd[1]: Started libcrun container. 
Dec 5 05:00:14 localhost podman[302813]: 2025-12-05 10:00:14.430657845 +0000 UTC m=+0.179710997 container init b5e89b1796c41957e6368da0ed88cc167b4dbd722857962d23d5ddeee058a4f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_carson, release=1763362218, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, distribution-scope=public, io.buildah.version=1.41.4, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main) Dec 5 05:00:14 localhost systemd[1]: tmp-crun.46wXno.mount: Deactivated successfully. Dec 5 05:00:14 localhost systemd[1]: var-lib-containers-storage-overlay-bf328f47b0864ef4243d177924d09f5c6f6a756a96588e6ea70265e500794ad5-merged.mount: Deactivated successfully. Dec 5 05:00:14 localhost podman[302813]: 2025-12-05 10:00:14.439656682 +0000 UTC m=+0.188709834 container start b5e89b1796c41957e6368da0ed88cc167b4dbd722857962d23d5ddeee058a4f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_carson, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, GIT_BRANCH=main, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 05:00:14 localhost podman[302813]: 2025-12-05 10:00:14.440426227 +0000 UTC m=+0.189479379 container attach b5e89b1796c41957e6368da0ed88cc167b4dbd722857962d23d5ddeee058a4f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_carson, GIT_BRANCH=main, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, distribution-scope=public, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7) Dec 5 05:00:14 localhost priceless_carson[302833]: 167 167 Dec 5 05:00:14 localhost podman[302813]: 2025-12-05 10:00:14.44442995 +0000 UTC m=+0.193483152 container died b5e89b1796c41957e6368da0ed88cc167b4dbd722857962d23d5ddeee058a4f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_carson, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, ceph=True, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_BRANCH=main) Dec 5 05:00:14 localhost systemd[1]: libpod-b5e89b1796c41957e6368da0ed88cc167b4dbd722857962d23d5ddeee058a4f5.scope: Deactivated successfully. Dec 5 05:00:14 localhost systemd[1]: var-lib-containers-storage-overlay-b52418b895d49443ea42644b694481cb66d82b8c59d9adbe751dccd4906336ed-merged.mount: Deactivated successfully. 
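
The elated_shirley and priceless_carson containers above are single-shot rhceph-7 runs: created, started, and dead within roughly 100 ms, each emitting only "167 167". 167 is the uid and gid of the ceph user in Red Hat Ceph images, so these look like cephadm ownership probes run ahead of the daemon reconfigurations logged by the mon; the exact probe command does not appear in the log. A hypothetical stand-in that would print the same pair, with the probed path being a guess:

    # Hypothetical re-creation of the short-lived probe; the image reference
    # is from the log, the stat target path is not.
    import subprocess

    out = subprocess.check_output(
        ['podman', 'run', '--rm',
         'registry.redhat.io/rhceph/rhceph-7-rhel9:latest',
         'stat', '-c', '%u %g', '/var/lib/ceph'])
    print(out.decode().strip())  # "167 167" when the ceph uid/gid is 167
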
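
In the same stretch the nova resource tracker twice shells out to ceph df (logged by oslo_concurrency.processutils, with matching client.openstack "df" dispatches on the mon) to size its RBD-backed disk before reporting inventory to placement; the second run, visible below, lands inside the compute_resources lock and accounts for most of the 0.560 s the lock is held. A rough equivalent of that poll follows; the command line is copied from the log, while the parsed JSON keys are assumptions about the ceph df --format=json schema:

    # Sketch of the capacity poll seen in this log. Command line as logged;
    # the keys read from the JSON are assumed, not confirmed by this log.
    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        timeout=30)
    report = json.loads(raw)
    for pool in report.get('pools', []):
        stats = pool.get('stats', {})
        print(pool.get('name'), stats.get('bytes_used'), stats.get('max_avail'))

As a cross-check of the surrounding records: the Final resource view's used_ram=1024MB is the 512 MB placement reserve plus the one m1.small guest's 512 MB, and used_disk=2GB matches the instance's DISK_GB allocation of 2 (1 GB root plus 1 GB ephemeral in the flavor).
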
Dec 5 05:00:14 localhost podman[302839]: 2025-12-05 10:00:14.518581399 +0000 UTC m=+0.067272737 container remove b5e89b1796c41957e6368da0ed88cc167b4dbd722857962d23d5ddeee058a4f5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_carson, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, vcs-type=git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, release=1763362218, GIT_BRANCH=main, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Dec 5 05:00:14 localhost systemd[1]: libpod-conmon-b5e89b1796c41957e6368da0ed88cc167b4dbd722857962d23d5ddeee058a4f5.scope: Deactivated successfully. Dec 5 05:00:14 localhost nova_compute[280228]: 2025-12-05 10:00:14.554 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:00:14 localhost nova_compute[280228]: 2025-12-05 10:00:14.557 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11663MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:00:14 localhost nova_compute[280228]: 2025-12-05 10:00:14.557 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:00:14 localhost nova_compute[280228]: 2025-12-05 10:00:14.558 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:00:14 localhost nova_compute[280228]: 2025-12-05 10:00:14.614 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:00:14 localhost nova_compute[280228]: 2025-12-05 10:00:14.614 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:00:14 localhost nova_compute[280228]: 2025-12-05 10:00:14.615 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:00:14 localhost nova_compute[280228]: 2025-12-05 10:00:14.661 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:00:14 localhost systemd[1]: session-70.scope: Deactivated successfully. Dec 5 05:00:14 localhost systemd[1]: session-70.scope: Consumed 23.580s CPU time. Dec 5 05:00:14 localhost systemd-logind[760]: Removed session 70. Dec 5 05:00:14 localhost ceph-mon[292820]: from='client.? 172.18.0.200:0/464711851' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 5 05:00:14 localhost ceph-mon[292820]: Activating manager daemon np0005546421.sukfea Dec 5 05:00:14 localhost ceph-mon[292820]: from='client.? 
172.18.0.200:0/464711851' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 5 05:00:14 localhost ceph-mon[292820]: Manager daemon np0005546421.sukfea is now available Dec 5 05:00:14 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"} : dispatch Dec 5 05:00:14 localhost ceph-mon[292820]: removing stray HostCache host record np0005546418.localdomain.devices.0 Dec 5 05:00:14 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"} : dispatch Dec 5 05:00:14 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"}]': finished Dec 5 05:00:14 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"} : dispatch Dec 5 05:00:14 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"} : dispatch Dec 5 05:00:14 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005546418.localdomain.devices.0"}]': finished Dec 5 05:00:14 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546421.sukfea/mirror_snapshot_schedule"} : dispatch Dec 5 05:00:14 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546421.sukfea/mirror_snapshot_schedule"} : dispatch Dec 5 05:00:14 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546421.sukfea/trash_purge_schedule"} : dispatch Dec 5 05:00:14 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546421.sukfea/trash_purge_schedule"} : dispatch Dec 5 05:00:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:00:14 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e38: np0005546421.sukfea(active, since 1.05362s), standbys: np0005546418.garyvl, np0005546419.zhsnqq Dec 5 05:00:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:00:15 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/4060403813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:00:15 localhost nova_compute[280228]: 2025-12-05 10:00:15.097 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:00:15 localhost nova_compute[280228]: 2025-12-05 10:00:15.103 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:00:15 localhost nova_compute[280228]: 2025-12-05 10:00:15.115 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:00:15 localhost nova_compute[280228]: 2025-12-05 10:00:15.117 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:00:15 localhost nova_compute[280228]: 2025-12-05 10:00:15.117 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:00:15 localhost podman[302990]: 2025-12-05 10:00:15.422499575 +0000 UTC m=+0.090516694 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, version=7, CEPH_POINT_RELEASE=, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, 
name=rhceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public) Dec 5 05:00:15 localhost nova_compute[280228]: 2025-12-05 10:00:15.492 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:00:15 localhost podman[302990]: 2025-12-05 10:00:15.566712296 +0000 UTC m=+0.234729355 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, release=1763362218, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, name=rhceph) Dec 5 05:00:15 localhost ceph-mon[292820]: [05/Dec/2025:10:00:15] ENGINE Bus STARTING Dec 5 05:00:15 localhost ceph-mon[292820]: [05/Dec/2025:10:00:15] ENGINE Serving on https://172.18.0.108:7150 Dec 5 05:00:15 localhost ceph-mon[292820]: [05/Dec/2025:10:00:15] ENGINE Client ('172.18.0.108', 32990) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 5 05:00:15 localhost ceph-mon[292820]: [05/Dec/2025:10:00:15] ENGINE Serving on http://172.18.0.108:8765 Dec 5 05:00:15 localhost ceph-mon[292820]: [05/Dec/2025:10:00:15] ENGINE Bus STARTED Dec 5 05:00:15 localhost nova_compute[280228]: 2025-12-05 10:00:15.931 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:00:15 localhost nova_compute[280228]: 2025-12-05 10:00:15.931 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:00:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:00:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:00:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 05:00:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 05:00:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 05:00:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 05:00:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:00:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:00:16 localhost podman[303126]: 2025-12-05 10:00:16.466267158 +0000 UTC m=+0.092518226 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, config_id=edpm, version=9.6, io.buildah.version=1.33.7, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350) Dec 5 05:00:16 localhost podman[303126]: 2025-12-05 10:00:16.483280003 +0000 UTC m=+0.109531041 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 5 05:00:16 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:00:16 localhost podman[303159]: 2025-12-05 10:00:16.569233356 +0000 UTC m=+0.093678903 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 05:00:16 localhost podman[303159]: 2025-12-05 10:00:16.608690354 +0000 UTC m=+0.133135911 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 5 05:00:16 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:00:16 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:16 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:16 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:16 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:16 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:16 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e39: np0005546421.sukfea(active, since 3s), standbys: np0005546418.garyvl, np0005546419.zhsnqq Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 5 05:00:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' 
Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 5 05:00:18 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:00:18 localhost nova_compute[280228]: 2025-12-05 10:00:18.268 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:00:18 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e40: np0005546421.sukfea(active, since 5s), standbys: np0005546418.garyvl, np0005546419.zhsnqq Dec 5 05:00:19 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M Dec 5 05:00:19 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M Dec 5 05:00:19 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546420.localdomain 
to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:00:19 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M Dec 5 05:00:19 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:00:19 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:00:19 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf Dec 5 05:00:19 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf Dec 5 05:00:19 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf Dec 5 05:00:19 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : Standby manager daemon np0005546420.aoeylc started Dec 5 05:00:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:00:19 localhost podman[239519]: time="2025-12-05T10:00:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:00:19 localhost podman[239519]: @ - - [05/Dec/2025:10:00:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1" Dec 5 05:00:19 localhost podman[239519]: @ - - [05/Dec/2025:10:00:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18730 "" "Go-http-client/1.1" Dec 5 05:00:20 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 05:00:20 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 05:00:20 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf Dec 5 05:00:20 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 05:00:20 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 05:00:20 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 5 05:00:20 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e41: np0005546421.sukfea(active, since 6s), standbys: np0005546418.garyvl, np0005546419.zhsnqq, np0005546420.aoeylc Dec 5 05:00:20 localhost nova_compute[280228]: 2025-12-05 10:00:20.494 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:00:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 05:00:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 05:00:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:20 localhost 
ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:00:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:00:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 05:00:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 05:00:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:00:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:21 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 05:00:21 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 05:00:21 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring Dec 5 05:00:21 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:21 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:21 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:21 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:21 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:21 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:21 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:21 localhost podman[303981]: Dec 5 05:00:21 localhost podman[303981]: 2025-12-05 10:00:21.791915238 +0000 UTC m=+0.082167876 container create a2a24774d195f1001eb48f1760b069eda6d369d8d88280b0e0a60a952b32c750 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_gates, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest 
Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218) Dec 5 05:00:21 localhost systemd[1]: Started libpod-conmon-a2a24774d195f1001eb48f1760b069eda6d369d8d88280b0e0a60a952b32c750.scope. Dec 5 05:00:21 localhost podman[303981]: 2025-12-05 10:00:21.7456576 +0000 UTC m=+0.035910248 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 05:00:21 localhost systemd[1]: Started libcrun container. Dec 5 05:00:21 localhost podman[303981]: 2025-12-05 10:00:21.876107657 +0000 UTC m=+0.166360335 container init a2a24774d195f1001eb48f1760b069eda6d369d8d88280b0e0a60a952b32c750 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_gates, name=rhceph, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, distribution-scope=public, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 05:00:21 localhost ceph-mon[292820]: log_channel(cluster) log [WRN] : Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Dec 5 05:00:21 localhost ceph-mon[292820]: log_channel(cluster) log [WRN] : Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Dec 5 05:00:21 localhost podman[303981]: 2025-12-05 10:00:21.898713974 +0000 UTC m=+0.188966612 container start a2a24774d195f1001eb48f1760b069eda6d369d8d88280b0e0a60a952b32c750 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_gates, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_CLEAN=True, RELEASE=main) Dec 5 05:00:21 localhost podman[303981]: 2025-12-05 10:00:21.899319933 +0000 UTC m=+0.189572621 container attach a2a24774d195f1001eb48f1760b069eda6d369d8d88280b0e0a60a952b32c750 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_gates, GIT_CLEAN=True, version=7, distribution-scope=public, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, release=1763362218, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 05:00:21 localhost determined_gates[303996]: 167 167 Dec 5 05:00:21 localhost systemd[1]: libpod-a2a24774d195f1001eb48f1760b069eda6d369d8d88280b0e0a60a952b32c750.scope: Deactivated successfully. 
Dec 5 05:00:21 localhost podman[303981]: 2025-12-05 10:00:21.906073922 +0000 UTC m=+0.196326560 container died a2a24774d195f1001eb48f1760b069eda6d369d8d88280b0e0a60a952b32c750 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_gates, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , release=1763362218, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7) Dec 5 05:00:22 localhost podman[304001]: 2025-12-05 10:00:22.006045656 +0000 UTC m=+0.085758127 container remove a2a24774d195f1001eb48f1760b069eda6d369d8d88280b0e0a60a952b32c750 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_gates, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, ceph=True, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, name=rhceph, release=1763362218, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, architecture=x86_64) Dec 5 05:00:22 localhost systemd[1]: libpod-conmon-a2a24774d195f1001eb48f1760b069eda6d369d8d88280b0e0a60a952b32c750.scope: Deactivated successfully. Dec 5 05:00:22 localhost ceph-mon[292820]: Reconfiguring osd.0 (monmap changed)... 
Dec 5 05:00:22 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 5 05:00:22 localhost ceph-mon[292820]: Reconfiguring daemon osd.0 on np0005546419.localdomain Dec 5 05:00:22 localhost ceph-mon[292820]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Dec 5 05:00:22 localhost ceph-mon[292820]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Dec 5 05:00:22 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:00:22 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:22 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:00:22 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:22 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:00:22 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:22 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:00:22 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:22 localhost systemd[1]: tmp-crun.z27PQi.mount: Deactivated successfully. Dec 5 05:00:22 localhost systemd[1]: var-lib-containers-storage-overlay-daa55c2861239dd8bfd9b40ac2eed679d08cd3c3e0d6335e8cf81a1a2af25640-merged.mount: Deactivated successfully. Dec 5 05:00:22 localhost podman[304075]: Dec 5 05:00:22 localhost podman[304075]: 2025-12-05 10:00:22.874362214 +0000 UTC m=+0.084254141 container create 3e76882c62b14e8e009c449098227e1349efa49d1d9f83909569ef66fe3bb85e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_hoover, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_CLEAN=True, name=rhceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True) Dec 5 05:00:22 localhost systemd[1]: Started libpod-conmon-3e76882c62b14e8e009c449098227e1349efa49d1d9f83909569ef66fe3bb85e.scope. 
Dec 5 05:00:22 localhost systemd[1]: Started libcrun container. Dec 5 05:00:22 localhost podman[304075]: 2025-12-05 10:00:22.839107487 +0000 UTC m=+0.048999614 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 05:00:22 localhost podman[304075]: 2025-12-05 10:00:22.945635205 +0000 UTC m=+0.155527102 container init 3e76882c62b14e8e009c449098227e1349efa49d1d9f83909569ef66fe3bb85e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_hoover, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_CLEAN=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, version=7, CEPH_POINT_RELEASE=, ceph=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container) Dec 5 05:00:22 localhost podman[304075]: 2025-12-05 10:00:22.954861969 +0000 UTC m=+0.164753886 container start 3e76882c62b14e8e009c449098227e1349efa49d1d9f83909569ef66fe3bb85e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_hoover, ceph=True, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, version=7, GIT_CLEAN=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4) Dec 5 05:00:22 localhost podman[304075]: 2025-12-05 10:00:22.955079446 +0000 UTC m=+0.164971353 container attach 3e76882c62b14e8e009c449098227e1349efa49d1d9f83909569ef66fe3bb85e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_hoover, build-date=2025-11-26T19:44:28Z, name=rhceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, version=7, GIT_BRANCH=main, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, release=1763362218, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=) Dec 5 05:00:22 localhost eloquent_hoover[304090]: 167 167 Dec 5 05:00:22 localhost systemd[1]: libpod-3e76882c62b14e8e009c449098227e1349efa49d1d9f83909569ef66fe3bb85e.scope: Deactivated successfully. Dec 5 05:00:22 localhost podman[304075]: 2025-12-05 10:00:22.958831062 +0000 UTC m=+0.168722979 container died 3e76882c62b14e8e009c449098227e1349efa49d1d9f83909569ef66fe3bb85e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_hoover, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, release=1763362218) Dec 5 05:00:23 localhost podman[304095]: 2025-12-05 10:00:23.065555496 +0000 UTC m=+0.093318402 container remove 3e76882c62b14e8e009c449098227e1349efa49d1d9f83909569ef66fe3bb85e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_hoover, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, release=1763362218, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, ceph=True, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_CLEAN=True) Dec 5 05:00:23 localhost systemd[1]: libpod-conmon-3e76882c62b14e8e009c449098227e1349efa49d1d9f83909569ef66fe3bb85e.scope: Deactivated successfully. Dec 5 05:00:23 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:23 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:23 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:23 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:23 localhost ceph-mon[292820]: Reconfiguring osd.3 (monmap changed)... Dec 5 05:00:23 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 5 05:00:23 localhost ceph-mon[292820]: Reconfiguring daemon osd.3 on np0005546419.localdomain Dec 5 05:00:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:00:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:00:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:23 localhost nova_compute[280228]: 2025-12-05 10:00:23.270 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:00:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:00:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:00:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Dec 5 05:00:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 05:00:23 localhost systemd[1]: var-lib-containers-storage-overlay-5bcd6fc36ca4ad395e987949a91ecf39665f2c0441d318af8801b1e47a4446c7-merged.mount: Deactivated successfully. 
Dec 5 05:00:23 localhost podman[304169]: Dec 5 05:00:23 localhost podman[304169]: 2025-12-05 10:00:23.898235404 +0000 UTC m=+0.078025589 container create a6f771dbf763f71142d5240a8db5359925236aa047b1e9b502566dd343fa807f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_curran, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph) Dec 5 05:00:23 localhost systemd[1]: Started libpod-conmon-a6f771dbf763f71142d5240a8db5359925236aa047b1e9b502566dd343fa807f.scope. Dec 5 05:00:23 localhost systemd[1]: Started libcrun container. Dec 5 05:00:23 localhost podman[304169]: 2025-12-05 10:00:23.86830464 +0000 UTC m=+0.048094885 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 05:00:23 localhost podman[304169]: 2025-12-05 10:00:23.970648919 +0000 UTC m=+0.150439104 container init a6f771dbf763f71142d5240a8db5359925236aa047b1e9b502566dd343fa807f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_curran, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, build-date=2025-11-26T19:44:28Z, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True) Dec 5 05:00:23 localhost podman[304169]: 2025-12-05 10:00:23.983115323 +0000 UTC m=+0.162905508 container start a6f771dbf763f71142d5240a8db5359925236aa047b1e9b502566dd343fa807f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_curran, GIT_CLEAN=True, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, 
vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux ) Dec 5 05:00:23 localhost podman[304169]: 2025-12-05 10:00:23.983849025 +0000 UTC m=+0.163639210 container attach a6f771dbf763f71142d5240a8db5359925236aa047b1e9b502566dd343fa807f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_curran, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., release=1763362218) Dec 5 05:00:23 localhost sad_curran[304184]: 167 167 Dec 5 05:00:23 localhost systemd[1]: libpod-a6f771dbf763f71142d5240a8db5359925236aa047b1e9b502566dd343fa807f.scope: Deactivated successfully. 
Dec 5 05:00:23 localhost podman[304169]: 2025-12-05 10:00:23.98785651 +0000 UTC m=+0.167646725 container died a6f771dbf763f71142d5240a8db5359925236aa047b1e9b502566dd343fa807f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_curran, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, release=1763362218, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=) Dec 5 05:00:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:00:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:24 localhost podman[304189]: 2025-12-05 10:00:24.095220653 +0000 UTC m=+0.092884708 container remove a6f771dbf763f71142d5240a8db5359925236aa047b1e9b502566dd343fa807f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_curran, version=7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, description=Red Hat Ceph Storage 7, release=1763362218) Dec 5 05:00:24 localhost systemd[1]: libpod-conmon-a6f771dbf763f71142d5240a8db5359925236aa047b1e9b502566dd343fa807f.scope: Deactivated successfully. 
Dec 5 05:00:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:00:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:00:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 5 05:00:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 05:00:24 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:24 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:24 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:24 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:24 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 05:00:24 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546419.rweotn (monmap changed)... Dec 5 05:00:24 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546419.rweotn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 5 05:00:24 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546419.rweotn on np0005546419.localdomain Dec 5 05:00:24 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:24 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:24 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' Dec 5 05:00:24 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 05:00:24 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546419.zhsnqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 5 05:00:24 localhost systemd[1]: var-lib-containers-storage-overlay-b86cad5a156bef26e207afac5386881ff24fffdf9baae1c075d965edb8034a26-merged.mount: Deactivated successfully. 
Dec 5 05:00:24 localhost podman[304257]:
Dec 5 05:00:24 localhost podman[304257]: 2025-12-05 10:00:24.821460326 +0000 UTC m=+0.067545636 container create 2bb5748021f5550c6fbca652fcf8352e81cb55f140163b9ae31860c60a52992a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_shockley, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , ceph=True, GIT_BRANCH=main, release=1763362218, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 5 05:00:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:00:24 localhost systemd[1]: Started libpod-conmon-2bb5748021f5550c6fbca652fcf8352e81cb55f140163b9ae31860c60a52992a.scope.
Dec 5 05:00:24 localhost systemd[1]: Started libcrun container.
Dec 5 05:00:24 localhost podman[304257]: 2025-12-05 10:00:24.788446008 +0000 UTC m=+0.034531348 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 05:00:24 localhost podman[304257]: 2025-12-05 10:00:24.893389136 +0000 UTC m=+0.139474426 container init 2bb5748021f5550c6fbca652fcf8352e81cb55f140163b9ae31860c60a52992a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_shockley, distribution-scope=public, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, release=1763362218, io.buildah.version=1.41.4)
Dec 5 05:00:24 localhost podman[304257]: 2025-12-05 10:00:24.910758872 +0000 UTC m=+0.156844182 container start 2bb5748021f5550c6fbca652fcf8352e81cb55f140163b9ae31860c60a52992a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_shockley, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, version=7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , vcs-type=git, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.)
Dec 5 05:00:24 localhost friendly_shockley[304271]: 167 167
Dec 5 05:00:24 localhost podman[304257]: 2025-12-05 10:00:24.912168806 +0000 UTC m=+0.158254126 container attach 2bb5748021f5550c6fbca652fcf8352e81cb55f140163b9ae31860c60a52992a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_shockley, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , version=7, release=1763362218, io.openshift.expose-services=, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vcs-type=git, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7)
Dec 5 05:00:24 localhost systemd[1]: libpod-2bb5748021f5550c6fbca652fcf8352e81cb55f140163b9ae31860c60a52992a.scope: Deactivated successfully.
Dec 5 05:00:24 localhost podman[304257]: 2025-12-05 10:00:24.913998032 +0000 UTC m=+0.160083382 container died 2bb5748021f5550c6fbca652fcf8352e81cb55f140163b9ae31860c60a52992a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_shockley, GIT_CLEAN=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 5 05:00:25 localhost podman[304278]: 2025-12-05 10:00:25.005390862 +0000 UTC m=+0.081257349 container remove 2bb5748021f5550c6fbca652fcf8352e81cb55f140163b9ae31860c60a52992a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_shockley, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.component=rhceph-container, name=rhceph, GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , architecture=x86_64, vendor=Red Hat, Inc.)
Dec 5 05:00:25 localhost systemd[1]: libpod-conmon-2bb5748021f5550c6fbca652fcf8352e81cb55f140163b9ae31860c60a52992a.scope: Deactivated successfully.
Dec 5 05:00:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 05:00:25 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 05:00:25 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 5 05:00:25 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 05:00:25 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546419.zhsnqq (monmap changed)...
Dec 5 05:00:25 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546419.zhsnqq on np0005546419.localdomain
Dec 5 05:00:25 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:25 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:25 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 05:00:25 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546420.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 05:00:25 localhost nova_compute[280228]: 2025-12-05 10:00:25.497 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:25 localhost systemd[1]: tmp-crun.qDXgVG.mount: Deactivated successfully.
Dec 5 05:00:25 localhost systemd[1]: var-lib-containers-storage-overlay-3f971ee0e9b4fe16fb86e9023c4f1976a9f0d6e6647ce48eda1c24b666b00667-merged.mount: Deactivated successfully.
Dec 5 05:00:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:00:26 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:00:26 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:26 localhost ceph-mon[292820]: Reconfiguring crash.np0005546420 (monmap changed)...
Dec 5 05:00:26 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546420 on np0005546420.localdomain
Dec 5 05:00:26 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:26 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:26 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 5 05:00:27 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:00:27 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:27 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:00:27 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:27 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:00:27 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:00:27 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:00:27 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:27 localhost openstack_network_exporter[241668]: ERROR 10:00:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:00:27 localhost openstack_network_exporter[241668]: ERROR 10:00:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:00:27 localhost openstack_network_exporter[241668]: ERROR 10:00:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:00:27 localhost openstack_network_exporter[241668]: ERROR 10:00:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:00:27 localhost openstack_network_exporter[241668]:
Dec 5 05:00:27 localhost openstack_network_exporter[241668]: ERROR 10:00:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:00:27 localhost openstack_network_exporter[241668]:
Dec 5 05:00:27 localhost systemd[1]: tmp-crun.1KTCp9.mount: Deactivated successfully.
Dec 5 05:00:27 localhost podman[304296]: 2025-12-05 10:00:27.259613572 +0000 UTC m=+0.142106407 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:00:27 localhost podman[304297]: 2025-12-05 10:00:27.221544897 +0000 UTC m=+0.104419674 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 5 05:00:27 localhost podman[304295]: 2025-12-05 10:00:27.283116167 +0000 UTC m=+0.165550990 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 5 05:00:27 localhost ceph-mon[292820]: Reconfiguring osd.1 (monmap changed)...
Dec 5 05:00:27 localhost podman[304296]: 2025-12-05 10:00:27.292755895 +0000 UTC m=+0.175248660 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 5 05:00:27 localhost ceph-mon[292820]: Reconfiguring daemon osd.1 on np0005546420.localdomain
Dec 5 05:00:27 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:27 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:27 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:27 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:27 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 5 05:00:27 localhost podman[304297]: 2025-12-05 10:00:27.304682913 +0000 UTC m=+0.187557620 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:00:27 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:00:27 localhost podman[304295]: 2025-12-05 10:00:27.321807461 +0000 UTC m=+0.204242314 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 05:00:27 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:00:27 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:00:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:00:28 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:00:28 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:00:28 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:00:28 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:28 localhost systemd[1]: tmp-crun.iLkSvp.mount: Deactivated successfully.
Dec 5 05:00:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 5 05:00:28 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 05:00:28 localhost nova_compute[280228]: 2025-12-05 10:00:28.272 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:28 localhost ceph-mon[292820]: Reconfiguring osd.4 (monmap changed)...
Dec 5 05:00:28 localhost ceph-mon[292820]: Reconfiguring daemon osd.4 on np0005546420.localdomain
Dec 5 05:00:28 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:28 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:28 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:28 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:28 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 05:00:28 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005546420.eqhasr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 5 05:00:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:00:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:00:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 5 05:00:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 05:00:29 localhost ceph-mon[292820]: Reconfiguring mds.mds.np0005546420.eqhasr (monmap changed)...
Dec 5 05:00:29 localhost ceph-mon[292820]: Reconfiguring daemon mds.mds.np0005546420.eqhasr on np0005546420.localdomain
Dec 5 05:00:29 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:29 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:29 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 05:00:29 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005546420.aoeylc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 5 05:00:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:00:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:00:30 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:00:30 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 5 05:00:30 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 05:00:30 localhost ceph-mon[292820]: Reconfiguring mgr.np0005546420.aoeylc (monmap changed)...
Dec 5 05:00:30 localhost ceph-mon[292820]: Reconfiguring daemon mgr.np0005546420.aoeylc on np0005546420.localdomain
Dec 5 05:00:30 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:30 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:30 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 05:00:30 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005546421.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 5 05:00:30 localhost nova_compute[280228]: 2025-12-05 10:00:30.501 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 5 05:00:30 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:31 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:31 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:31 localhost ceph-mon[292820]: Reconfiguring crash.np0005546421 (monmap changed)...
Dec 5 05:00:31 localhost ceph-mon[292820]: Reconfiguring daemon crash.np0005546421 on np0005546421.localdomain
Dec 5 05:00:31 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:31 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:31 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:31 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 5 05:00:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:32 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:32 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:32 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:32 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:32 localhost ceph-mon[292820]: Saving service mon spec with placement label:mon
Dec 5 05:00:32 localhost ceph-mon[292820]: Reconfiguring osd.2 (monmap changed)...
Dec 5 05:00:32 localhost ceph-mon[292820]: Reconfiguring daemon osd.2 on np0005546421.localdomain
Dec 5 05:00:32 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:32 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:32 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:32 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:32 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 5 05:00:33 localhost nova_compute[280228]: 2025-12-05 10:00:33.276 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:33 localhost ceph-mon[292820]: Reconfiguring daemon osd.5 on np0005546421.localdomain
Dec 5 05:00:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:34 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:34 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:34 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:34 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:34 localhost ceph-mon[292820]: Reconfiguring mon.np0005546421 (monmap changed)...
Dec 5 05:00:34 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 05:00:34 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546421 on np0005546421.localdomain
Dec 5 05:00:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:00:34 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:00:34 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:00:34 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 5 05:00:34 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:00:34 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e42: np0005546421.sukfea(active, since 21s), standbys: np0005546419.zhsnqq, np0005546420.aoeylc
Dec 5 05:00:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 05:00:35 localhost podman[304424]: 2025-12-05 10:00:35.366366884 +0000 UTC m=+0.090851905 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 5 05:00:35 localhost podman[304424]: 2025-12-05 10:00:35.375399853 +0000 UTC m=+0.099884864 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 5 05:00:35 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 05:00:35 localhost podman[304432]:
Dec 5 05:00:35 localhost podman[304432]: 2025-12-05 10:00:35.489623258 +0000 UTC m=+0.193984228 container create c88fc3e3985e87ef1248be9e1d2c0f74f6799544046266d846be5b50b9480652 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_dijkstra, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_BRANCH=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_CLEAN=True)
Dec 5 05:00:35 localhost podman[304432]: 2025-12-05 10:00:35.399179236 +0000 UTC m=+0.103540226 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 5 05:00:35 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:35 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:35 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:00:35 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:35 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:35 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 05:00:35 localhost nova_compute[280228]: 2025-12-05 10:00:35.527 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:00:35 localhost systemd[1]: Started libpod-conmon-c88fc3e3985e87ef1248be9e1d2c0f74f6799544046266d846be5b50b9480652.scope.
Dec 5 05:00:35 localhost systemd[1]: Started libcrun container.
Dec 5 05:00:35 localhost podman[304464]: 2025-12-05 10:00:35.624599323 +0000 UTC m=+0.081116554 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Dec 5 05:00:35 localhost podman[304432]: 2025-12-05 10:00:35.640610097 +0000 UTC m=+0.344971027 container init c88fc3e3985e87ef1248be9e1d2c0f74f6799544046266d846be5b50b9480652 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_dijkstra, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, release=1763362218, GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 05:00:35 localhost podman[304432]: 2025-12-05 10:00:35.655753045 +0000 UTC m=+0.360114015 container start c88fc3e3985e87ef1248be9e1d2c0f74f6799544046266d846be5b50b9480652 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_dijkstra, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 5 05:00:35 localhost podman[304432]: 2025-12-05 10:00:35.656073134 +0000 UTC m=+0.360434094 container attach c88fc3e3985e87ef1248be9e1d2c0f74f6799544046266d846be5b50b9480652 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_dijkstra, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, ceph=True, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, release=1763362218, version=7, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 5 05:00:35 localhost determined_dijkstra[304470]: 167 167
Dec 5 05:00:35 localhost systemd[1]: libpod-c88fc3e3985e87ef1248be9e1d2c0f74f6799544046266d846be5b50b9480652.scope: Deactivated successfully.
Dec 5 05:00:35 localhost podman[304432]: 2025-12-05 10:00:35.662195103 +0000 UTC m=+0.366556053 container died c88fc3e3985e87ef1248be9e1d2c0f74f6799544046266d846be5b50b9480652 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_dijkstra, ceph=True, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.41.4, version=7, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7)
Dec 5 05:00:35 localhost podman[304464]: 2025-12-05 10:00:35.704942442 +0000 UTC m=+0.161459683 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:00:35 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:00:35 localhost podman[304495]: 2025-12-05 10:00:35.801712929 +0000 UTC m=+0.131075556 container remove c88fc3e3985e87ef1248be9e1d2c0f74f6799544046266d846be5b50b9480652 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_dijkstra, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhceph, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container)
Dec 5 05:00:35 localhost systemd[1]: libpod-conmon-c88fc3e3985e87ef1248be9e1d2c0f74f6799544046266d846be5b50b9480652.scope: Deactivated successfully.
Dec 5 05:00:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 05:00:35 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 05:00:35 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:36 localhost systemd[1]: tmp-crun.DeOgOU.mount: Deactivated successfully.
Dec 5 05:00:36 localhost systemd[1]: var-lib-containers-storage-overlay-96dadedbde7ed34412323356867c1dc98228f857fdd55b1cefd2a1ec6b64aaa9-merged.mount: Deactivated successfully.
Dec 5 05:00:36 localhost ceph-mon[292820]: Reconfiguring mon.np0005546419 (monmap changed)...
Dec 5 05:00:36 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546419 on np0005546419.localdomain
Dec 5 05:00:36 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:36 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:36 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 5 05:00:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:00:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:00:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:37 localhost ceph-mon[292820]: Reconfiguring mon.np0005546420 (monmap changed)...
Dec 5 05:00:37 localhost ceph-mon[292820]: Reconfiguring daemon mon.np0005546420 on np0005546420.localdomain
Dec 5 05:00:37 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:37 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:38 localhost nova_compute[280228]: 2025-12-05 10:00:38.280 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 05:00:39 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:00:40 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:00:40 localhost nova_compute[280228]: 2025-12-05 10:00:40.531 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:43 localhost nova_compute[280228]: 2025-12-05 10:00:43.283 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:00:45 localhost nova_compute[280228]: 2025-12-05 10:00:45.574 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:00:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:00:47 localhost podman[304513]: 2025-12-05 10:00:47.207788244 +0000 UTC m=+0.089168574 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7) Dec 5 05:00:47 localhost podman[304512]: 2025-12-05 10:00:47.251614255 +0000 UTC m=+0.136749141 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 5 05:00:47 localhost podman[304513]: 2025-12-05 10:00:47.273195352 +0000 UTC m=+0.154575692 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 5 05:00:47 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:00:47 localhost podman[304512]: 2025-12-05 10:00:47.293650373 +0000 UTC m=+0.178785279 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd) Dec 5 05:00:47 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. 
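The pattern above repeats throughout this log: a transient systemd unit runs `/usr/bin/podman healthcheck run <id>`, podman logs a `health_status` and an `exec_died` event, and the unit deactivates. A minimal sketch that performs the same check by hand, using the multipathd container ID copied from the log; `podman healthcheck run` exits 0 when the container's configured healthcheck passes:

```python
# Sketch: run the same on-demand healthcheck the transient units above run.
import subprocess

cid = "8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173"
result = subprocess.run(["podman", "healthcheck", "run", cid])
print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")
```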
Dec 5 05:00:48 localhost nova_compute[280228]: 2025-12-05 10:00:48.286 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:00:49 localhost podman[239519]: time="2025-12-05T10:00:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:00:49 localhost podman[239519]: @ - - [05/Dec/2025:10:00:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1"
Dec 5 05:00:49 localhost podman[239519]: @ - - [05/Dec/2025:10:00:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18727 "" "Go-http-client/1.1"
Dec 5 05:00:50 localhost nova_compute[280228]: 2025-12-05 10:00:50.619 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:53 localhost nova_compute[280228]: 2025-12-05 10:00:53.289 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:00:55 localhost nova_compute[280228]: 2025-12-05 10:00:55.650 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:00:57 localhost openstack_network_exporter[241668]: ERROR 10:00:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:00:57 localhost openstack_network_exporter[241668]: ERROR 10:00:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:00:57 localhost openstack_network_exporter[241668]: ERROR 10:00:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:00:57 localhost openstack_network_exporter[241668]: ERROR 10:00:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:00:57 localhost openstack_network_exporter[241668]:
Dec 5 05:00:57 localhost openstack_network_exporter[241668]: ERROR 10:00:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:00:57 localhost openstack_network_exporter[241668]:
Dec 5 05:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:00:58 localhost systemd[1]: tmp-crun.qNxP0T.mount: Deactivated successfully.
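The two GET lines above are the libpod REST API being polled over the podman socket (the `Go-http-client` user agent suggests the prometheus-podman-exporter configured elsewhere in this log with `CONTAINER_HOST=unix:///run/podman/podman.sock`). A stdlib-only sketch that reproduces the first request; the socket path is taken from that config and is otherwise an assumption:

```python
# Sketch: issue the same libpod containers/json request over the Unix socket.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a Unix domain socket instead of TCP."""
    def __init__(self, sock_path):
        super().__init__("localhost")
        self._sock_path = sock_path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self._sock_path)
        self.sock = s

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
body = conn.getresponse().read()
for c in json.loads(body):
    print(c["Id"][:12], c["Names"], c["State"])
```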
Dec 5 05:00:58 localhost podman[304555]: 2025-12-05 10:00:58.213883501 +0000 UTC m=+0.092224477 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:00:58 localhost podman[304555]: 2025-12-05 10:00:58.225000615 +0000 UTC m=+0.103341571 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 5 05:00:58 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:00:58 localhost nova_compute[280228]: 2025-12-05 10:00:58.291 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:00:58 localhost podman[304554]: 2025-12-05 10:00:58.325172756 +0000 UTC m=+0.205112071 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent) Dec 5 05:00:58 localhost podman[304554]: 2025-12-05 10:00:58.336669791 +0000 UTC m=+0.216609096 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:00:58 localhost podman[304553]: 2025-12-05 10:00:58.380097492 +0000 UTC m=+0.262875794 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:00:58 localhost podman[304553]: 2025-12-05 10:00:58.39463514 +0000 UTC m=+0.277413392 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:00:58 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:00:58 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
Dec 5 05:00:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:01:00 localhost nova_compute[280228]: 2025-12-05 10:01:00.683 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:01:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1958821480' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:01:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:01:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1958821480' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:01:03 localhost nova_compute[280228]: 2025-12-05 10:01:03.294 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:01:03.908 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:01:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:01:03.909 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:01:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:01:03.910 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:01:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:01:05 localhost nova_compute[280228]: 2025-12-05 10:01:05.722 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:01:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
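The `client.openstack` audit entries above record a `df` plus an `osd pool get-quota` on the `volumes` pool, the read-only pair a Cinder-style consumer issues when checking capacity. A minimal sketch of the quota query with the stock CLI; the `--id`/`--conf` values mirror the other ceph invocations in this log, and the JSON field names are my best recollection of the command's output:

```python
# Sketch: the pool-quota query recorded in the audit log above.
import json
import subprocess

out = subprocess.run(
    ["ceph", "osd", "pool", "get-quota", "volumes",
     "--format", "json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout
quota = json.loads(out)
# .get() guards against field-name differences between Ceph releases.
print(quota.get("quota_max_bytes"), quota.get("quota_max_objects"))
```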
Dec 5 05:01:06 localhost podman[304624]: 2025-12-05 10:01:06.204978453 +0000 UTC m=+0.086910803 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:01:06 localhost podman[304625]: 2025-12-05 10:01:06.254361838 +0000 UTC m=+0.134646528 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:01:06 localhost podman[304625]: 2025-12-05 10:01:06.265660876 +0000 UTC m=+0.145945606 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 
'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:01:06 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:01:06 localhost podman[304624]: 2025-12-05 10:01:06.324931745 +0000 UTC m=+0.206864085 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 05:01:06 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
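The node_exporter config above publishes on port 9100 and enables the systemd collector only for units matching `(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service`. A minimal sketch that scrapes the endpoint and keeps just those unit-state metrics; the localhost URL follows from the `'net': 'host'` and `'ports': ['9100:9100']` settings and is otherwise an assumption:

```python
# Sketch: scrape node_exporter and print the systemd unit-state series the
# --collector.systemd.unit-include flag above admits.
import urllib.request

with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith("node_systemd_unit_state"):
            print(line)
```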
Dec 5 05:01:08 localhost nova_compute[280228]: 2025-12-05 10:01:08.295 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:01:08 localhost nova_compute[280228]: 2025-12-05 10:01:08.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:01:09 localhost nova_compute[280228]: 2025-12-05 10:01:09.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:01:09 localhost nova_compute[280228]: 2025-12-05 10:01:09.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:01:09 localhost nova_compute[280228]: 2025-12-05 10:01:09.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:01:09 localhost nova_compute[280228]: 2025-12-05 10:01:09.596 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:01:09 localhost nova_compute[280228]: 2025-12-05 10:01:09.597 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:01:09 localhost nova_compute[280228]: 2025-12-05 10:01:09.597 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:01:09 localhost nova_compute[280228]: 2025-12-05 10:01:09.598 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:01:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:01:09 localhost nova_compute[280228]: 2025-12-05 10:01:09.963 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:01:09 localhost nova_compute[280228]: 2025-12-05 10:01:09.978 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:01:09 localhost nova_compute[280228]: 2025-12-05 10:01:09.978 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:01:10 localhost nova_compute[280228]: 2025-12-05 10:01:10.762 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:01:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Dec 5 05:01:10 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.200:0/3219937789' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Dec 5 05:01:10 localhost nova_compute[280228]: 2025-12-05 10:01:10.975 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:01:11 localhost nova_compute[280228]: 2025-12-05 10:01:11.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:01:11 localhost nova_compute[280228]: 2025-12-05 10:01:11.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:01:11 localhost nova_compute[280228]: 2025-12-05 10:01:11.526 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:01:11 localhost nova_compute[280228]: 2025-12-05 10:01:11.526 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:01:11 localhost nova_compute[280228]: 2025-12-05 10:01:11.527 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:01:11 localhost nova_compute[280228]: 2025-12-05 10:01:11.527 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:01:11 localhost nova_compute[280228]: 2025-12-05 10:01:11.527 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:01:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:01:11 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3194464759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:01:11 localhost nova_compute[280228]: 2025-12-05 10:01:11.993 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.062 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.063 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.303 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.305 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11694MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", 
"vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.306 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.306 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.401 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.402 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.402 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.451 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:01:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:01:12 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/57977895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.859 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.867 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.886 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.888 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:01:12 localhost nova_compute[280228]: 2025-12-05 10:01:12.888 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:01:13 localhost nova_compute[280228]: 2025-12-05 10:01:13.308 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:01:13 localhost nova_compute[280228]: 2025-12-05 10:01:13.887 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:01:13 localhost nova_compute[280228]: 2025-12-05 10:01:13.887 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:01:13 localhost nova_compute[280228]: 2025-12-05 10:01:13.887 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:01:14 localhost nova_compute[280228]: 2025-12-05 10:01:14.503 280232 DEBUG 
oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:01:14 localhost nova_compute[280228]: 2025-12-05 10:01:14.522 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:01:14 localhost nova_compute[280228]: 2025-12-05 10:01:14.522 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:01:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:01:15 localhost nova_compute[280228]: 2025-12-05 10:01:15.788 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:01:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:01:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:01:18 localhost podman[304717]: 2025-12-05 10:01:18.220707832 +0000 UTC m=+0.088620867 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 5 05:01:18 localhost podman[304717]: 2025-12-05 10:01:18.259130768 +0000 UTC m=+0.127043723 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, vendor=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 5 05:01:18 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
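The resource tracker entries earlier in this run show nova shelling out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` before reporting `free_disk=41.83...GB`. A simplified sketch that reruns the same command and derives a rough free-space figure; this is a first-order reading of the output (nova's RBD driver applies its own pool-level accounting):

```python
# Sketch: rerun the ceph df command logged by the nova resource tracker
# and compute cluster free space from the top-level stats block.
import json
import subprocess

raw = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout
stats = json.loads(raw)["stats"]
free_gb = stats["total_avail_bytes"] / 1024 ** 3
print(f"cluster free: {free_gb:.2f} GiB")
```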
Dec 5 05:01:18 localhost nova_compute[280228]: 2025-12-05 10:01:18.310 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:18 localhost podman[304716]: 2025-12-05 10:01:18.260349755 +0000 UTC m=+0.133797601 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 5 05:01:18 localhost podman[304716]: 2025-12-05 10:01:18.34475792 +0000 UTC m=+0.218205776 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 05:01:18 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:01:19 localhost podman[239519]: time="2025-12-05T10:01:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:01:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:01:19 localhost podman[239519]: @ - - [05/Dec/2025:10:01:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1"
Dec 5 05:01:19 localhost podman[239519]: @ - - [05/Dec/2025:10:01:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18733 "" "Go-http-client/1.1"
Dec 5 05:01:20 localhost nova_compute[280228]: 2025-12-05 10:01:20.794 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.272086) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928883272144, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 2480, "num_deletes": 255, "total_data_size": 5699606, "memory_usage": 6053456, "flush_reason": "Manual Compaction"}
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928883302584, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 5197718, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18537, "largest_seqno": 21016, "table_properties": {"data_size": 5186640, "index_size": 6943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27275, "raw_average_key_size": 22, "raw_value_size": 5162730, "raw_average_value_size": 4207, "num_data_blocks": 302, "num_entries": 1227, "num_filter_entries": 1227, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928790, "oldest_key_time": 1764928790, "file_creation_time": 1764928883, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 30568 microseconds, and 10241 cpu microseconds.
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.302648) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 5197718 bytes OK
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.302681) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.304325) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.304348) EVENT_LOG_v1 {"time_micros": 1764928883304341, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.304373) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 5688279, prev total WAL file size 5688279, number of live WAL files 2.
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.305815) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end)
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(5075KB)], [30(14MB)]
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928883305923, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 20684568, "oldest_snapshot_seqno": -1}
Dec 5 05:01:23 localhost nova_compute[280228]: 2025-12-05 10:01:23.356 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 11710 keys, 18479791 bytes, temperature: kUnknown
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928883400433, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 18479791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18412141, "index_size": 37277, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29317, "raw_key_size": 313899, "raw_average_key_size": 26, "raw_value_size": 18211747, "raw_average_value_size": 1555, "num_data_blocks": 1421, "num_entries": 11710, "num_filter_entries": 11710, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764928883, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
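The EVENT_LOG_v1 payloads in the ceph-mon rocksdb entries above are plain JSON after a fixed marker, so the flush and compaction statistics can be recovered mechanically. A minimal sketch; rocksdb_events is a hypothetical helper and feeding it journal lines from a file named "messages" is an assumption:

    import json

    MARKER = "rocksdb: EVENT_LOG_v1 "

    def rocksdb_events(lines):
        """Yield the JSON dicts RocksDB logs after 'EVENT_LOG_v1', one per matching line."""
        for line in lines:
            _, sep, payload = line.partition(MARKER)
            if sep:
                yield json.loads(payload)

    # e.g. total bytes written to new SST files in this snippet:
    # sum(e["file_size"] for e in rocksdb_events(open("messages"))
    #     if e.get("event") == "table_file_creation")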
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.400904) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 18479791 bytes
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.402790) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.6 rd, 195.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.0, 14.8 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(7.5) write-amplify(3.6) OK, records in: 12257, records dropped: 547 output_compression: NoCompression
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.402861) EVENT_LOG_v1 {"time_micros": 1764928883402832, "job": 16, "event": "compaction_finished", "compaction_time_micros": 94629, "compaction_time_cpu_micros": 45362, "output_level": 6, "num_output_files": 1, "total_output_size": 18479791, "num_input_records": 12257, "num_output_records": 11710, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928883403963, "job": 16, "event": "table_file_deletion", "file_number": 32}
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764928883406393, "job": 16, "event": "table_file_deletion", "file_number": 30}
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.305699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.406458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.406465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.406468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.406471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:01:23 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:01:23.406474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:01:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:01:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 5 05:01:25 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/1235296873' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 5 05:01:25 localhost nova_compute[280228]: 2025-12-05 10:01:25.821 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:27 localhost openstack_network_exporter[241668]: ERROR 10:01:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:01:27 localhost openstack_network_exporter[241668]: ERROR 10:01:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:01:27 localhost openstack_network_exporter[241668]: ERROR 10:01:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:01:27 localhost openstack_network_exporter[241668]: ERROR 10:01:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:01:27 localhost openstack_network_exporter[241668]:
Dec 5 05:01:27 localhost openstack_network_exporter[241668]: ERROR 10:01:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:01:27 localhost openstack_network_exporter[241668]:
Dec 5 05:01:28 localhost nova_compute[280228]: 2025-12-05 10:01:28.389 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
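The throughput and amplification figures in the "compacted to:" summary above can be re-derived from the raw byte counts in the compaction_started / compaction_finished events, which is a quick sanity check on the rounding. A short sketch using only numbers taken from this log:

    # All constants below are copied from the JOB 16 events logged above.
    in_bytes  = 20_684_568      # input_data_size (L0 table 32: 5197718 + the L6 input, table 30)
    out_bytes = 18_479_791      # total_output_size (new table 33)
    l0_bytes  = 5_197_718       # the freshly flushed L0 table 32
    secs      = 94_629 / 1e6    # compaction_time_micros

    print(f"rd {in_bytes / secs / 1e6:.1f} MB/s, wr {out_bytes / secs / 1e6:.1f} MB/s")  # 218.6, 195.3
    print(f"write-amplify {out_bytes / l0_bytes:.1f}")                                   # 3.6
    print(f"read-write-amplify {(in_bytes + out_bytes) / l0_bytes:.1f}")                 # 7.5

All three printed values match what RocksDB logged, confirming that write amplification is measured against the newly flushed L0 bytes, not the total compaction input.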
Dec 5 05:01:29 localhost podman[304753]: 2025-12-05 10:01:29.210945901 +0000 UTC m=+0.093656961 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:01:29 localhost podman[304753]: 2025-12-05 10:01:29.243676221 +0000 UTC m=+0.126387281 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 05:01:29 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:01:29 localhost systemd[1]: tmp-crun.aS13PP.mount: Deactivated successfully.
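The podman_exporter above talks to podman through CONTAINER_HOST=unix:///run/podman/podman.sock, and the access-log entries earlier show the same libpod endpoint being queried (GET /v4.9.3/libpod/containers/json). A sketch of issuing that request over the socket; UnixHTTPConnection is a hypothetical helper, and the response field names (Names, State) are assumptions about the libpod API, not taken from this log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP over a unix socket, enough to talk to the libpod API."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for ctr in json.loads(body):
        print(ctr.get("Names"), ctr.get("State"))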
Dec 5 05:01:29 localhost podman[304754]: 2025-12-05 10:01:29.33240813 +0000 UTC m=+0.207373601 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 5 05:01:29 localhost podman[304754]: 2025-12-05 10:01:29.36869917 +0000 UTC m=+0.243664631 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:01:29 localhost podman[304755]: 2025-12-05 10:01:29.383003632 +0000 UTC m=+0.252545006 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 5 05:01:29 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:01:29 localhost podman[304755]: 2025-12-05 10:01:29.4007849 +0000 UTC m=+0.270326264 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 05:01:29 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:01:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:01:30 localhost nova_compute[280228]: 2025-12-05 10:01:30.861 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:33 localhost nova_compute[280228]: 2025-12-05 10:01:33.391 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:01:35 localhost nova_compute[280228]: 2025-12-05 10:01:35.894 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:01:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 05:01:37 localhost podman[304834]: 2025-12-05 10:01:37.062418564 +0000 UTC m=+0.084607582 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:01:37 localhost systemd[1]: tmp-crun.6zikDI.mount: Deactivated successfully.
Dec 5 05:01:37 localhost podman[304834]: 2025-12-05 10:01:37.143679662 +0000 UTC m=+0.165868690 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:01:37 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:01:37 localhost podman[304835]: 2025-12-05 10:01:37.148173111 +0000 UTC m=+0.166591683 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:01:37 localhost podman[304835]: 2025-12-05 10:01:37.231735859 +0000 UTC m=+0.250154431 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 5 05:01:37 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
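Taken together, the config_data labels above expose three Prometheus-style exporters on host networking: node_exporter on 9100, openstack_network_exporter on 9105, and podman_exporter on 9882. A sketch that probes each one; the ports come from the 'ports' entries in the log, while the /metrics path and localhost reachability are assumptions (the usual Prometheus convention, not shown here):

    import urllib.request

    # Ports taken from the 'ports' fields in the container labels above.
    EXPORTERS = {
        "node_exporter": 9100,
        "openstack_network_exporter": 9105,
        "podman_exporter": 9882,
    }

    for name, port in EXPORTERS.items():
        try:
            with urllib.request.urlopen(f"http://localhost:{port}/metrics", timeout=5) as r:
                first = r.read().split(b"\n", 1)[0]
                print(f"{name}: {r.status} {first.decode()!r}")
        except OSError as exc:
            print(f"{name}: unreachable ({exc})")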
Dec 5 05:01:37 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:01:37 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:01:38 localhost ceph-mon[292820]: from='mgr.26612 172.18.0.108:0/3406697207' entity='mgr.np0005546421.sukfea' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:01:38 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:01:38 localhost nova_compute[280228]: 2025-12-05 10:01:38.426 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 05:01:39 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:01:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:01:40 localhost ceph-mon[292820]: from='mgr.26612 ' entity='mgr.np0005546421.sukfea'
Dec 5 05:01:40 localhost nova_compute[280228]: 2025-12-05 10:01:40.932 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:43 localhost nova_compute[280228]: 2025-12-05 10:01:43.477 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:01:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mgr fail"} v 0)
Dec 5 05:01:44 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/2842962018' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 5 05:01:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 5 05:01:44 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : Activating manager daemon np0005546419.zhsnqq
Dec 5 05:01:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 e91: 6 total, 6 up, 6 in
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e91: 6 total, 6 up, 6 in
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr handle_mgr_map Activating!
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/2842962018' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr handle_mgr_map I am now activating
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e43: np0005546419.zhsnqq(active, starting, since 0.0322325s), standbys: np0005546420.aoeylc
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546419"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546419"} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546420"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546420"} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005546421"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata", "id": "np0005546421"} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005546419.rweotn"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata", "who": "mds.np0005546419.rweotn"} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).mds e16 all = 0
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata", "who": "mds.np0005546421.tuudjq"} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).mds e16 all = 0
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005546420.eqhasr"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata", "who": "mds.np0005546420.eqhasr"} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).mds e16 all = 0
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546419.zhsnqq", "id": "np0005546419.zhsnqq"} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546420.aoeylc", "id": "np0005546420.aoeylc"} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mds metadata"} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).mds e16 all = 1
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd metadata"} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mon metadata"} : dispatch
Dec 5 05:01:45 localhost ceph-mgr[286454]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: balancer
Dec 5 05:01:45 localhost ceph-mgr[286454]: [balancer INFO root] Starting
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : Manager daemon np0005546419.zhsnqq is now available
Dec 5 05:01:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:01:45
Dec 5 05:01:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 5 05:01:45 localhost ceph-mgr[286454]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 5 05:01:45 localhost ceph-mgr[286454]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost systemd[1]: session-71.scope: Deactivated successfully.
Dec 5 05:01:45 localhost systemd[1]: session-71.scope: Consumed 10.937s CPU time.
Dec 5 05:01:45 localhost systemd-logind[760]: Session 71 logged out. Waiting for processes to exit.
Dec 5 05:01:45 localhost systemd-logind[760]: Removed session 71.
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: cephadm
Dec 5 05:01:45 localhost ceph-mgr[286454]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: crash
Dec 5 05:01:45 localhost ceph-mgr[286454]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: devicehealth
Dec 5 05:01:45 localhost ceph-mgr[286454]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: iostat
Dec 5 05:01:45 localhost ceph-mgr[286454]: [devicehealth INFO root] Starting
Dec 5 05:01:45 localhost ceph-mgr[286454]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: nfs
Dec 5 05:01:45 localhost ceph-mgr[286454]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: orchestrator
Dec 5 05:01:45 localhost ceph-mgr[286454]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: pg_autoscaler
Dec 5 05:01:45 localhost ceph-mgr[286454]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: progress
Dec 5 05:01:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust
Dec 5 05:01:45 localhost ceph-mgr[286454]: [progress INFO root] Loading...
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events
Dec 5 05:01:45 localhost ceph-mgr[286454]: [progress INFO root] Loaded OSDMap, ready.
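The failover above is driven by an ordinary mon command: the audit channel records {"prefix": "mgr fail"} dispatched by client.admin, the standby mgr np0005546419.zhsnqq activates, and the module load sequence follows. A sketch of sending the same JSON payload with the python-rados binding; the conffile and keyring paths are assumptions, and in practice this is equivalent to running `ceph mgr fail`:

    import json
    import rados

    # Paths below are assumptions for illustration; adjust to the cluster at hand.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf=dict(keyring="/etc/ceph/ceph.client.admin.keyring"))
    cluster.connect()
    # Same command structure the audit log shows being dispatched above.
    ret, outbuf, outs = cluster.mon_command(json.dumps({"prefix": "mgr fail"}), b"")
    print(ret, outs)
    cluster.shutdown()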
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] recovery thread starting
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] starting setup
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: rbd_support
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/mirror_snapshot_schedule"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/mirror_snapshot_schedule"} : dispatch
Dec 5 05:01:45 localhost ceph-mgr[286454]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: restful
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:01:45 localhost ceph-mgr[286454]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: status
Dec 5 05:01:45 localhost ceph-mgr[286454]: [restful INFO root] server_addr: :: server_port: 8003
Dec 5 05:01:45 localhost ceph-mgr[286454]: [restful WARNING root] server not running: no certificate configured
Dec 5 05:01:45 localhost ceph-mgr[286454]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: telemetry
Dec 5 05:01:45 localhost ceph-mgr[286454]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:01:45 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:01:45 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:01:45 localhost ceph-mgr[286454]: mgr load Constructed class from module: volumes
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] PerfHandler: starting
Dec 5 05:01:45 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:01:45.193+0000 7f9975046640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:01:45.193+0000 7f9975046640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:01:45.193+0000 7f9975046640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:01:45.193+0000 7f9975046640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:01:45.193+0000 7f9975046640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 5 05:01:45 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:01:45.198+0000 7f997083d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:01:45.198+0000 7f997083d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:01:45.198+0000 7f997083d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:01:45.198+0000 7f997083d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:01:45.198+0000 7f997083d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TaskHandler: starting
Dec 5 05:01:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/trash_purge_schedule"} v 0)
Dec 5 05:01:45 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/trash_purge_schedule"} : dispatch
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 5 05:01:45 localhost ceph-mgr[286454]: [rbd_support INFO root] setup complete
Dec 5 05:01:45 localhost sshd[305090]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 05:01:45 localhost systemd-logind[760]: New session 72 of user ceph-admin.
Dec 5 05:01:45 localhost systemd[1]: Started Session 72 of User ceph-admin.
Dec 5 05:01:45 localhost ceph-mon[292820]: from='client.? 172.18.0.200:0/2842962018' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: Activating manager daemon np0005546419.zhsnqq
Dec 5 05:01:45 localhost ceph-mon[292820]: from='client.? 172.18.0.200:0/2842962018' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 5 05:01:45 localhost ceph-mon[292820]: Manager daemon np0005546419.zhsnqq is now available
Dec 5 05:01:45 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/mirror_snapshot_schedule"} : dispatch
Dec 5 05:01:45 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005546419.zhsnqq/trash_purge_schedule"} : dispatch
Dec 5 05:01:45 localhost nova_compute[280228]: 2025-12-05 10:01:45.968 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:46 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e44: np0005546419.zhsnqq(active, since 1.07125s), standbys: np0005546420.aoeylc
Dec 5 05:01:46 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:01:46 localhost ceph-mgr[286454]: [cephadm INFO cherrypy.error] [05/Dec/2025:10:01:46] ENGINE Bus STARTING
Dec 5 05:01:46 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : [05/Dec/2025:10:01:46] ENGINE Bus STARTING
Dec 5 05:01:46 localhost ceph-mgr[286454]: [cephadm INFO cherrypy.error] [05/Dec/2025:10:01:46] ENGINE Serving on https://172.18.0.106:7150
Dec 5 05:01:46 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : [05/Dec/2025:10:01:46] ENGINE Serving on https://172.18.0.106:7150
Dec 5 05:01:46 localhost ceph-mgr[286454]: [cephadm INFO cherrypy.error] [05/Dec/2025:10:01:46] ENGINE Client ('172.18.0.106', 34070) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 5 05:01:46 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : [05/Dec/2025:10:01:46] ENGINE Client ('172.18.0.106', 34070) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 5 05:01:46 localhost systemd[1]: tmp-crun.3i1GXw.mount: Deactivated successfully.
Dec 5 05:01:46 localhost podman[305213]: 2025-12-05 10:01:46.581511443 +0000 UTC m=+0.109006626 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7)
Dec 5 05:01:46 localhost ceph-mgr[286454]: [cephadm INFO cherrypy.error] [05/Dec/2025:10:01:46] ENGINE Serving on http://172.18.0.106:8765
Dec 5 05:01:46 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : [05/Dec/2025:10:01:46] ENGINE Serving on http://172.18.0.106:8765
Dec 5 05:01:46 localhost ceph-mgr[286454]: [cephadm INFO cherrypy.error] [05/Dec/2025:10:01:46] ENGINE Bus STARTED
Dec 5 05:01:46 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : [05/Dec/2025:10:01:46] ENGINE Bus STARTED
Dec 5 05:01:46 localhost podman[305213]: 2025-12-05 10:01:46.687785242 +0000 UTC m=+0.215280385 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, GIT_BRANCH=main, io.buildah.version=1.41.4, ceph=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_CLEAN=True, RELEASE=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 5 05:01:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:01:47 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Dec 5 05:01:47 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Dec 5 05:01:47 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 5 05:01:47 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e45: np0005546419.zhsnqq(active, since 2s), standbys: np0005546420.aoeylc
Dec 5 05:01:47 localhost ceph-mon[292820]: [05/Dec/2025:10:01:46] ENGINE Bus STARTING
Dec 5 05:01:47 localhost ceph-mgr[286454]: [devicehealth INFO root] Check health
Dec 5 05:01:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:01:47 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:01:47 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:01:47 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:01:47 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 05:01:47 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 05:01:47 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost ceph-mon[292820]: [05/Dec/2025:10:01:46] ENGINE Serving on https://172.18.0.106:7150
Dec 5 05:01:48 localhost ceph-mon[292820]: [05/Dec/2025:10:01:46] ENGINE Client ('172.18.0.106', 34070) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 5 05:01:48 localhost ceph-mon[292820]: [05/Dec/2025:10:01:46] ENGINE Serving on http://172.18.0.106:8765
Dec 5 05:01:48 localhost ceph-mon[292820]: [05/Dec/2025:10:01:46] ENGINE Bus STARTED
Dec 5 05:01:48 localhost ceph-mon[292820]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Dec 5 05:01:48 localhost ceph-mon[292820]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Dec 5 05:01:48 localhost ceph-mon[292820]: Cluster is now healthy
Dec 5 05:01:48 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:01:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:01:48 localhost systemd[1]: tmp-crun.5ZKcxU.mount: Deactivated successfully.
Dec 5 05:01:48 localhost nova_compute[280228]: 2025-12-05 10:01:48.480 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:48 localhost podman[305445]: 2025-12-05 10:01:48.498158034 +0000 UTC m=+0.104937059 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 5 05:01:48 localhost podman[305445]: 2025-12-05 10:01:48.54465805 +0000 UTC m=+0.151437065 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vcs-type=git, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Dec 5 05:01:48 localhost systemd[1]: tmp-crun.CwKMb5.mount: Deactivated successfully.
Dec 5 05:01:48 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:01:48 localhost podman[305444]: 2025-12-05 10:01:48.558370363 +0000 UTC m=+0.165442428 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 5 05:01:48 localhost podman[305444]: 2025-12-05 10:01:48.573733826 +0000 UTC m=+0.180805911 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 5 05:01:48 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 5 05:01:48 localhost ceph-mgr[286454]: [cephadm INFO root] Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 5 05:01:48 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 5 05:01:48 localhost ceph-mgr[286454]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 05:01:48 localhost ceph-mgr[286454]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 5 05:01:48 localhost ceph-mgr[286454]: [cephadm INFO root] Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 5 05:01:48 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 5 05:01:48 localhost ceph-mgr[286454]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 05:01:48 localhost ceph-mgr[286454]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 5 05:01:48 localhost ceph-mgr[286454]: [cephadm INFO root] Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 5 05:01:48 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 5 05:01:48 localhost ceph-mgr[286454]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 05:01:48 localhost ceph-mgr[286454]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 05:01:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 5 05:01:48 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:01:48 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 05:01:48 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 05:01:48 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 05:01:48 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 05:01:48 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 05:01:48 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 05:01:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:01:49 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e46: np0005546419.zhsnqq(active, since 4s), standbys: np0005546420.aoeylc
Dec 5 05:01:49 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 05:01:49 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 05:01:49 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 05:01:49 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 05:01:49 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 05:01:49 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 5 05:01:49 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M
Dec 5 05:01:49 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 5 05:01:49 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M
Dec 5 05:01:49 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 5 05:01:49 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M
Dec 5 05:01:49 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 5 05:01:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:01:49 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.conf
Dec 5 05:01:49 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.conf
Dec 5 05:01:49 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.conf
Dec 5 05:01:49 localhost podman[239519]: time="2025-12-05T10:01:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:01:49 localhost podman[239519]: @ - - [05/Dec/2025:10:01:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1"
Dec 5 05:01:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:01:49 localhost podman[239519]: @ - - [05/Dec/2025:10:01:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18732 "" "Go-http-client/1.1"
Dec 5 05:01:50 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : Standby manager daemon np0005546421.sukfea started
Dec 5 05:01:50 localhost ceph-mgr[286454]: mgr.server handle_open ignoring open from mgr.np0005546421.sukfea 172.18.0.108:0/1493595436; not ready for session (expect reconnect)
Dec 5 05:01:50 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 05:01:50 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 05:01:50 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 05:01:50 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 05:01:50 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 05:01:50 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 05:01:50 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 05:01:50 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 05:01:50 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.conf
Dec 5 05:01:50 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 05:01:50 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 05:01:50 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 5 05:01:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:01:51 localhost nova_compute[280228]: 2025-12-05 10:01:51.024 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:51 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 05:01:51 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 05:01:51 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 05:01:51 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 05:01:51 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e47: np0005546419.zhsnqq(active, since 6s), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 5 05:01:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} v 0)
Dec 5 05:01:51 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "mgr metadata", "who": "np0005546421.sukfea", "id": "np0005546421.sukfea"} : dispatch
Dec 5 05:01:51 localhost ceph-mgr[286454]: [cephadm INFO cephadm.serve] Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 05:01:51 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 05:01:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 05:01:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:01:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:01:51 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 05:01:51 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:01:51 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:01:51 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:01:51 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 5d4cf0d5-6936-4302-8612-bf818261fda2 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:01:51 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 5d4cf0d5-6936-4302-8612-bf818261fda2 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:01:51 localhost ceph-mgr[286454]: [progress INFO root] Completed event 5d4cf0d5-6936-4302-8612-bf818261fda2 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 5 05:01:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 05:01:51 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 05:01:51 localhost ceph-mon[292820]: Updating np0005546420.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 05:01:51 localhost ceph-mon[292820]: Updating np0005546421.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 05:01:51 localhost ceph-mon[292820]: Updating np0005546419.localdomain:/var/lib/ceph/79feddb1-4bfc-557f-83b9-0d57c9f66c1b/config/ceph.client.admin.keyring
Dec 5 05:01:51 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:51 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 05:01:52 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 05:01:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 5 05:01:52 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:01:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:01:52 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:52 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 9e4cc059-e60d-4b50-b6c1-e1e381bcebb6 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:01:52 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 9e4cc059-e60d-4b50-b6c1-e1e381bcebb6 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:01:52 localhost ceph-mgr[286454]: [progress INFO root] Completed event 9e4cc059-e60d-4b50-b6c1-e1e381bcebb6 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 5 05:01:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 05:01:52 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 05:01:52 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:01:52 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Dec 5 05:01:53 localhost nova_compute[280228]: 2025-12-05 10:01:53.488 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:01:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Dec 5 05:01:55 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events
Dec 5 05:01:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 05:01:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:56 localhost nova_compute[280228]: 2025-12-05 10:01:56.062 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:01:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 5 05:01:57 localhost openstack_network_exporter[241668]: ERROR 10:01:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:01:57 localhost openstack_network_exporter[241668]: ERROR 10:01:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:01:57 localhost openstack_network_exporter[241668]: ERROR 10:01:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:01:57 localhost openstack_network_exporter[241668]: ERROR 10:01:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:01:57 localhost openstack_network_exporter[241668]:
Dec 5 05:01:57 localhost openstack_network_exporter[241668]: ERROR 10:01:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:01:57 localhost openstack_network_exporter[241668]:
Dec 5 05:01:58 localhost nova_compute[280228]: 2025-12-05 10:01:58.524 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:01:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 5 05:01:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:02:00 localhost systemd[293789]: Created slice User Background Tasks Slice.
Dec 5 05:02:00 localhost systemd[293789]: Starting Cleanup of User's Temporary Files and Directories...
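The repeating Adjusting/Unable pair above is the cephadm memory autotuner: it computes 877246668 bytes (about 836.6 MiB) per OSD for each host, which falls below Ceph's enforced floor of 939524096 bytes (896 MiB), so every "config set" is rejected and retried on the next serve loop. A minimal sketch, assuming admin access, of the two mon commands that would quiet this, either disabling autotuning or pinning a value at or above the floor; adjust the values to the deployment before use:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed config path
    cluster.connect()
    for name, value in (("osd_memory_target_autotune", "false"),
                        ("osd_memory_target", "939524096")):  # 896 MiB, the minimum from the log
        cmd = json.dumps({"prefix": "config set", "who": "osd", "name": name, "value": value})
        ret, _, outs = cluster.mon_command(cmd, b'')
        print(name, ret, outs)
    cluster.shutdown()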
Dec 5 05:02:00 localhost podman[306196]: 2025-12-05 10:02:00.196835919 +0000 UTC m=+0.077820346 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 5 05:02:00 localhost systemd[293789]: Finished Cleanup of User's Temporary Files and Directories.
Dec 5 05:02:00 localhost podman[306195]: 2025-12-05 10:02:00.261241043 +0000 UTC m=+0.139710573 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 05:02:00 localhost podman[306195]: 2025-12-05 10:02:00.267121803 +0000 UTC m=+0.145591333 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:02:00 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:02:00 localhost podman[306197]: 2025-12-05 10:02:00.302810077 +0000 UTC m=+0.179285115 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:02:00 localhost podman[306197]: 2025-12-05 10:02:00.314218037 +0000 UTC m=+0.190693085 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 5 05:02:00 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:02:00 localhost podman[306196]: 2025-12-05 10:02:00.329189146 +0000 UTC m=+0.210173553 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 5 05:02:00 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
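Each health_status/exec_died event above embeds the container's edpm_ansible deployment spec as a config_data label whose value is a Python dict literal. A minimal sketch of recovering it as structured data, assuming the label value was read out of band (for example via podman inspect); the abbreviated literal here is illustrative only:

    import ast

    # Abbreviated stand-in for a real config_data label value from the events above.
    label = ("{'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba04...', "
             "'net': 'host', 'privileged': True, 'ports': ['9882:9882']}")

    # literal_eval parses the Python literal safely, without executing code.
    config = ast.literal_eval(label)
    print(config['net'], config['ports'])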
Dec 5 05:02:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 5 05:02:01 localhost nova_compute[280228]: 2025-12-05 10:02:01.109 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:02:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 5 05:02:03 localhost nova_compute[280228]: 2025-12-05 10:02:03.557 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:02:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:02:03.909 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:02:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:02:03.910 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:02:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:02:03.911 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:02:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:02:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:02:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 05:02:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 4871 writes, 21K keys, 4871 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4871 writes, 629 syncs, 7.74 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 85 writes, 295 keys, 85 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s#012Interval WAL: 85 writes, 39 syncs, 2.18 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 05:02:06 localhost nova_compute[280228]: 2025-12-05 10:02:06.153 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:02:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:02:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
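The rocksdb "------- DUMPING STATS -------" record above is a single syslog line: rsyslog replaces embedded control characters with #NNN octal escapes, so every #012 stands for a newline in the original multi-line dump (and #033 elsewhere in this log is ESC from ANSI color codes). A minimal sketch of undoing that escaping to recover the readable stats block:

    import re

    def unescape_rsyslog(line: str) -> str:
        # Replace each #NNN (three octal digits) with the character it encodes.
        return re.sub(r'#([0-7]{3})', lambda m: chr(int(m.group(1), 8)), line)

    record = "#012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012..."
    print(unescape_rsyslog(record))  # prints the DB Stats section on separate lines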
Dec 5 05:02:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:02:08 localhost podman[306255]: 2025-12-05 10:02:08.207326469 +0000 UTC m=+0.088774522 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:02:08 localhost podman[306255]: 2025-12-05 10:02:08.215951684 +0000 UTC m=+0.097399737 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:02:08 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
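The node_exporter container above publishes 9100:9100 and enables the systemd collector, restricted by --collector.systemd.unit-include to the edpm_*, ovs*, openvswitch, virt*, and rsyslog units, so the healthcheck service cycles logged here are themselves visible as unit-state metrics. A sketch of reading them back, assuming the exporter is reachable on localhost:9100 from this host:

    from urllib.request import urlopen

    # node_systemd_unit_state is the metric family the systemd collector
    # exports; only units matching the unit-include regex above appear.
    with urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("node_systemd_unit_state") and 'state="active"' in line:
                print(line)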
Dec 5 05:02:08 localhost podman[306254]: 2025-12-05 10:02:08.303821276 +0000 UTC m=+0.186137765 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 05:02:08 localhost podman[306254]: 2025-12-05 10:02:08.358150121 +0000 UTC m=+0.240466630 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller) Dec 5 05:02:08 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
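Every config_data block in these healthcheck records carries managed_by=edpm_ansible plus a config_id label, which is what distinguishes the EDPM-deployed containers (ovn_controller, ovn_metadata_agent, node_exporter, the ceilometer agent) from anything else on the host. A sketch that enumerates them by that label, under the assumption that podman's JSON listing exposes Names and Status fields as in current releases:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "ps", "--filter", "label=managed_by=edpm_ansible",
         "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for ctr in json.loads(out):
        # Names is a list; these containers each have one canonical name.
        print(ctr["Names"][0], "-", ctr["Status"])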
Dec 5 05:02:08 localhost nova_compute[280228]: 2025-12-05 10:02:08.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:02:08 localhost nova_compute[280228]: 2025-12-05 10:02:08.559 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 05:02:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.2 total, 600.0 interval#012Cumulative writes: 6101 writes, 26K keys, 6101 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 6101 writes, 948 syncs, 6.44 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 157 writes, 327 keys, 157 commit groups, 1.0 writes per commit group, ingest: 0.32 MB, 0.00 MB/s#012Interval WAL: 157 writes, 78 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 5 05:02:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:02:10 localhost nova_compute[280228]: 2025-12-05 10:02:10.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:02:10 localhost nova_compute[280228]: 2025-12-05 10:02:10.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:02:10 localhost nova_compute[280228]: 2025-12-05 10:02:10.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:02:10 localhost nova_compute[280228]: 2025-12-05 10:02:10.732 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:02:10 localhost nova_compute[280228]: 2025-12-05 10:02:10.732 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:02:10 localhost nova_compute[280228]: 2025-12-05 10:02:10.733 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:02:10 localhost nova_compute[280228]: 2025-12-05 10:02:10.733 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:02:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:11 localhost nova_compute[280228]: 2025-12-05 10:02:11.201 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:11 localhost nova_compute[280228]: 2025-12-05 10:02:11.498 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:02:12 localhost nova_compute[280228]: 2025-12-05 10:02:12.931 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:02:12 localhost nova_compute[280228]: 2025-12-05 10:02:12.932 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:02:12 localhost nova_compute[280228]: 2025-12-05 10:02:12.933 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:02:12 localhost nova_compute[280228]: 2025-12-05 10:02:12.933 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.950 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.951 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.965 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.965 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5eb92aed-1b16-42d5-9b22-36c9fe5e2cdf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:02:12.951803', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '78898142-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.126104894, 'message_signature': 'fcb90b737cf2030264d3747aa079f3449d5a3dab98d9f221403531feaa439b38'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:02:12.951803', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '78899538-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.126104894, 'message_signature': 'aa26b6eea645d50a4f9b3e97bdc8ed5408f4cf8555681dfc1716e3230ff7d707'}]}, 'timestamp': '2025-12-05 10:02:12.966414', '_unique_id': 'ef6cb5516bb14250a23599dcee758f7c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.968 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.969 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.974 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd172790f-596a-490b-97fe-85d1659e35cc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:02:12.969548', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '788ad948-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.143859299, 'message_signature': '41e529b946c3f030f98340be7f90c44137e2f61d0f8f881a6a23f0e4177a4b84'}]}, 'timestamp': '2025-12-05 10:02:12.974675', '_unique_id': '32b473d7118d4445891be68be8ffacf0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.975 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.977 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.977 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '441e2176-170c-4b07-bbb1-2db459a1ee41', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:02:12.977200', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '788b515c-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.143859299, 'message_signature': 'ff31737b68e080b97fe422288c7c8e0fb29478dc487d063c2449f475f53caf95'}]}, 'timestamp': '2025-12-05 10:02:12.977737', '_unique_id': 'a5e2e96981444efbbd36f54610c16b28'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:02:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.978 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.979 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.980 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9401ce93-c219-44c7-a610-141fc46706ab', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:02:12.980127', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '788bc2b8-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.143859299, 'message_signature': 'acb2a49522ff8df376aba8ce69fa30177306fb63010778a6ea5828be86abe9d5'}]}, 'timestamp': '2025-12-05 10:02:12.980633', '_unique_id': '190d0590f73442fb97471656514d2fc5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.981 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.983 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.983 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.984 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
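
The innermost frames above end in amqp/transport.py with self.sock.connect(sa) failing, so the plain TCP connect to the RabbitMQ endpoint is refused before any AMQP handshake begins. A minimal sketch of the same check in isolation, assuming a placeholder broker host and the default AMQP port 5672 (neither is taken from this deployment):

    import socket

    def broker_reachable(host="localhost", port=5672, timeout=5.0):
        # Same operation as the failing amqp.transport._connect(): a raw
        # TCP connect. ConnectionRefusedError ([Errno 111] ECONNREFUSED)
        # means nothing is listening on host:port -- the broker is down
        # or bound to a different address.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        print(broker_reachable())
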
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '05923eff-2bd1-4b9e-a223-e600c64212fd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:02:12.983596', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '788c50c0-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.126104894, 'message_signature': '56aa87e7ad4488223d12ace8bf1f2277cce6fd96b27e285e01e04d2e1677804a'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:02:12.983596', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '788c6704-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.126104894, 'message_signature': '808e16d188303aed1699e6553746550043df19fb6c12d65467af1e5ff02715b2'}]}, 'timestamp': '2025-12-05 10:02:12.984821', '_unique_id': '7442ead7e2b84bfc83f46db417b33822'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.985 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:12.987 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.012 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.013 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6be67d5e-6812-422f-81dc-7178e262b22c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:02:12.987869', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7890be6c-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.16220192, 'message_signature': '682732cacf89d96389de770d7333dd4e444bcf335d5d15322b12cac60f3ef474'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:02:12.987869', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7890dc26-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.16220192, 'message_signature': 'ffdb902e953fefb60e53c1c900edbc1b81c286380b4930f0c8ec6232c5fdb051'}]}, 'timestamp': '2025-12-05 10:02:13.014148', '_unique_id': '99ad25ffee194f9bb9668d40662d18e5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.015 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.017 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.018 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
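
Every one of the tracebacks above funnels through kombu's ensure_connection() / retry_over_time() before the socket-level error is re-raised as kombu.exceptions.OperationalError. A sketch of that retry surface in isolation; the broker URL and retry counts are placeholders, not values taken from this deployment (oslo.messaging builds the real URL from its transport_url setting):

    from kombu import Connection
    from kombu.exceptions import OperationalError

    def on_retry(exc, interval):
        # kombu calls this errback between attempts with the low-level error.
        print(f"broker unavailable ({exc}); retrying in {interval}s")

    try:
        with Connection("amqp://guest:guest@localhost:5672//") as conn:
            conn.ensure_connection(errback=on_retry, max_retries=3)
    except OperationalError as exc:
        # Once retries are exhausted, kombu re-raises the underlying
        # ConnectionRefusedError as OperationalError, as logged above.
        print(f"giving up: {exc}")
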
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '02d0f6c8-225d-4b5b-ad04-d7edac4445da', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:02:13.017970', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '78918e6e-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.143859299, 'message_signature': 'dad7ad3985a6729cd400297c5e9bedfe5e3436a1d1ca2e7a5d891fafa52d0066'}]}, 'timestamp': '2025-12-05 10:02:13.018636', '_unique_id': '8b2476aecc36481782f025cf0ef2813e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.019 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.021 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.021 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 05:02:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.036 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
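
The outermost frame in each traceback is oslo_messaging/notify/messaging.py notify(), which catches the failure and logs "Could not send notification" rather than raising, which is why the agent keeps polling and emitting samples throughout. A sketch of that publishing path under stated assumptions: the transport URL below is a placeholder, the driver name may differ per deployment, and only the 'notifications' topic and SAMPLE priority are taken from the records above:

    from oslo_config import cfg
    import oslo_messaging

    conf = cfg.CONF
    # Placeholder URL; a real service reads this from its transport_url.
    transport = oslo_messaging.get_notification_transport(
        conf, url="rabbit://guest:guest@localhost:5672/")
    notifier = oslo_messaging.Notifier(
        transport,
        publisher_id="ceilometer.polling",
        driver="messagingv2",  # assumed driver choice for this sketch
        topics=["notifications"])
    # Emits at priority SAMPLE, like the telemetry.polling payloads above.
    # With the broker refusing connections, the notify layer logs the
    # failure instead of propagating it to the caller.
    notifier.sample({}, event_type="telemetry.polling",
                    payload={"samples": []})
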
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2c726b68-52d4-419b-8a35-c52c1f47385d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:02:13.021894', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '789477aa-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.210794079, 'message_signature': '84f2120a4e1b37ca3020d23862de99122e7b99775281c6217ba2f410d6c5ca8e'}]}, 'timestamp': '2025-12-05 10:02:13.037841', '_unique_id': '8a7cb196e0114e2989c9770d6ecfd57a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.039 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.041 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.042 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.042 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '46d8b70c-ad6e-48f8-ac46-abf48a9e64ef', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:02:13.042203', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '789541bc-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.143859299, 'message_signature': '0597ad637629fc08dd7a7784cd1c0220cf21623c36321b10231aa1472d94363f'}]}, 'timestamp': '2025-12-05 10:02:13.042912', '_unique_id': '0a7b913a1ead4046a65d9cfe8643e81b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:02:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:13 localhost 
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.046 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.046 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.047 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.049 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '171619a3-3b3b-40f4-83e4-495a6793643f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:02:13.046738', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7895efe0-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.16220192, 'message_signature': '9dec38fef5ce75b73f2cf79d9612d5d8f22dc635ab50694aea4f7d507fd916ae'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:02:13.046738', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '78960a70-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.16220192, 'message_signature': '02b419e5dae92dee3683ef10a502ef3c77da51c210239ae6230900c71b036aec'}]}, 'timestamp': '2025-12-05 10:02:13.048177', '_unique_id': '7f282135faae4c5bb925e5bd4ed010d2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.051 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.052 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'b900af10-21c9-4573-967f-7186fcd092d3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:02:13.052360', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '7896d234-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.143859299, 'message_signature': '8abc5e91700469840b3fab912b46e35b4c1d4c12827e8267fdd8db7cfd882642'}]}, 'timestamp': '2025-12-05 10:02:13.053425', '_unique_id': 'cf114943ed424451a23b42ae37cdacb4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:13 localhost 
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.057 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.057 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.057 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.059 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '51566df8-b914-45b2-b7a0-72f33d5abc2a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:02:13.057265', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '789789e0-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.16220192, 'message_signature': 'b762eff996e49dcf33bb2e1dbc85d64db059bff36092372f75595d18bbb703d7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:02:13.057265', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '78979bb0-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.16220192, 'message_signature': 'adc702ffa97a88d0f2a833703085494842c84a4a7dca3cfef0eb2028acaf238c'}]}, 'timestamp': '2025-12-05 10:02:13.058325', '_unique_id': '950e814eec9c4aeda65029322b1a1afc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.061 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.061 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.061 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.062 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.064 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '898cf609-69d9-482d-b87c-08ffbcf0e3a3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:02:13.061714', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '78983912-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.16220192, 'message_signature': '9b017a4c4d3af0dc14a3ac85d6adca6fa2e761f75b6b0d15ec886e82b750a845'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:02:13.061714', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7898547e-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.16220192, 'message_signature': 'd7a93f1d769c146403004fd9c0b6602356ebf07eb2d8e044bdea31c017f5337e'}]}, 'timestamp': '2025-12-05 10:02:13.063111', '_unique_id': 'd30b4e159d924f17a6a93643d22f6201'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
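The Payload= value in the record above is the repr() of a Python dict (note the single quotes and None), so it is not JSON; ast.literal_eval() parses it safely. A sketch of pulling the per-device counters back out of such a record -- the payload_text literal below is a trimmed stand-in for the full dict:

    import ast

    # Trimmed stand-in for the dict repr that follows 'Payload=' above.
    payload_text = ("{'event_type': 'telemetry.polling', 'payload': {'samples': ["
                    "{'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', "
                    "'counter_name': 'disk.device.write.latency', "
                    "'counter_volume': 3720587262, 'counter_unit': 'ns'}]}}")

    payload = ast.literal_eval(payload_text)   # json.loads() would fail here
    for sample in payload['payload']['samples']:
        print(sample['resource_id'], sample['counter_name'],
              sample['counter_volume'], sample['counter_unit'])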
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.066 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.066 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.067 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.069 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '43bd6af4-c269-4d74-b6a2-3de5f20af10b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:02:13.066661', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7898fa8c-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.16220192, 'message_signature': 'ffd2c86d61adafc86b0f1bd619ef0e1695e30ad4859304b7aaa401311d2f9bd2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:02:13.066661', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '789916ca-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.16220192, 'message_signature': 'c466018c0b529f3d7b2ac298ecbaa0eaa6c60cba5444e026e1efd1bc0409111e'}]}, 'timestamp': '2025-12-05 10:02:13.068084', '_unique_id': '20b5a30773574532a79aff615a161d1e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
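Every one of these failures bottoms out in ConnectionRefusedError ([Errno 111]): the TCP handshake itself is rejected, which means nothing is listening at the broker endpoint (RabbitMQ stopped, or the agent points at the wrong host/port); it is neither a timeout nor an AMQP authentication failure. A quick standalone check, with host and port as placeholders for the values in the agent's transport_url:

    import socket

    host, port = 'rabbit-host', 5672   # placeholders; 5672 is the usual AMQP port

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(3)
    try:
        sock.connect((host, port))
        print('TCP connect OK -- something is listening')
    except ConnectionRefusedError as exc:   # errno 111, as in the log
        print('refused (no listener):', exc)
    except socket.timeout:
        print('timed out (filtered or unreachable)')
    finally:
        sock.close()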
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.072 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.072 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.073 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.075 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2e246299-ecf5-4fbd-a441-9aa5ff9f3840', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:02:13.072799', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7899e8e8-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.126104894, 'message_signature': '1cc77e6b9e8b7489d63513df23a3e93af940203f3fd9c6af0290454b18bc98d3'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:02:13.072799', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '789a060c-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.126104894, 'message_signature': 'd20094e0e5f22dfdd425d58d1e95fec8caa8704e19f71cb32f130ecaa92ba984'}]}, 'timestamp': '2025-12-05 10:02:13.074112', '_unique_id': '4c17ff83dd5d41fa9c05695b3225bda6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
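A consistency check on the record above: disk.device.allocation is a gauge in bytes, and 1073741824 B is exactly 2^30 B = 1 GiB for both vda and vdb, matching the flavor's 'disk': 1 and 'ephemeral': 1 (GB each). Unlike the cumulative disk counters earlier, a gauge is meaningful on its own:

    volume_bytes = 1073741824                # from the samples above
    assert volume_bytes == 2 ** 30           # exactly 1 GiB
    print(volume_bytes / 1024 ** 3, 'GiB')   # -> 1.0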
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.077 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.077 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.078 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd9b9058d-703b-4346-9056-631629d64069', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:02:13.078086', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '789ab944-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.143859299, 'message_signature': 'ccd7c55fa5bd65ae3c306b14cc9f3a39fb38d6d6229c7ca3bf4a392c787b55a8'}]}, 'timestamp': '2025-12-05 10:02:13.078726', '_unique_id': '734368d28f704989a7f9c6270419c48c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
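network.incoming.bytes is cumulative, so a throughput figure comes from differencing two successive polls; the sample's monotonic_time field suits this because it is immune to wall-clock jumps. A sketch with a hypothetical second poll -- only the 6809 / 12048.143859299 pair comes from the record above:

    # First poll, from the record above:
    v1, t1 = 6809, 12048.143859299
    # Hypothetical second poll one 300 s polling interval later:
    v2, t2 = 9241, 12348.143859299

    rate_bps = (v2 - v1) / (t2 - t1)     # bytes per second
    print(f'{rate_bps:.2f} B/s')         # ~8.11 B/s for these made-up numbers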
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:02:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:13 localhost 
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.081 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.081 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e1f84e6b-07b6-4f82-8f1d-980893126a7c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:02:13.081758', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '789b476a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.143859299, 'message_signature': '23e6e7598fa926c17dccc030decae2316d2d3a2f52f8c366e8503a0ae80aca11'}]}, 'timestamp': '2025-12-05 10:02:13.082396', '_unique_id': 'd9a7c59c1d2d449489542364bcc98133'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
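For reference, the API that keeps failing in these records is oslo.messaging's notification layer: ceilometer publishes each batch as a 'telemetry.polling' event at SAMPLE priority on the notifications topic. A minimal sketch of that publishing pattern -- not ceilometer's actual publisher code, and the broker URL is a placeholder:

    from oslo_config import cfg
    import oslo_messaging

    # Placeholder URL -- the agent reads this from its transport_url option.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@rabbit-host:5672/')
    notifier = oslo_messaging.Notifier(transport,
                                       publisher_id='ceilometer.polling',
                                       driver='messagingv2',
                                       topics=['notifications'])
    # SAMPLE priority, as in the failed records above; payload trimmed down.
    notifier.sample({}, 'telemetry.polling', {'samples': []})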
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.083 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.085 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.085 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f9a281b5-a69f-4667-8a88-2611686bfbff', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:02:13.085505', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '789bd93c-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.143859299, 'message_signature': 'f0a9c2cf2dfe20f3b7d865dfae70c1f2920fa388711ba52a2bce63eff4ecce84'}]}, 'timestamp': '2025-12-05 10:02:13.086070', '_unique_id': 'a2609ba905da4695b71a7cb4fcb4dd88'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.087 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.088 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.088 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 14350000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7cd6a082-15ca-494b-aa54-57c3db1b95d9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 14350000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:02:13.088811', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '789c59fc-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.210794079, 'message_signature': 'ceec31ff476f92f930135dc70ee889f4aaf186f30561ccba5ee0629a9eac84d5'}]}, 'timestamp': '2025-12-05 10:02:13.089412', '_unique_id': '474254becbdc4520ba6164508a555d4a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, 
Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.090 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.092 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.092 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.092 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5f854cd3-ccd3-4f9e-90c0-c04d23664258', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:02:13.092265', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '789ce282-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.16220192, 'message_signature': '4c92103f507468b812e83153f62571c7b7aa23d4071d79de39bcce639557c1d3'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:02:13.092265', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '789cf5f6-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.16220192, 'message_signature': '96dd2289fe41ec5ffbbee5b349c83c60236b316a1f1295ee6c99e261965d9f1e'}]}, 'timestamp': '2025-12-05 10:02:13.093398', '_unique_id': 'c0d13f5931474dedaece4c2e43311136'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.094 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.096 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.096 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c4a5a4fa-ff2d-4f00-bb21-09d7a8b5039c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:02:13.096250', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '789d7dbe-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12048.143859299, 'message_signature': 'c2895c0d84c377870a99c14fb7f760c614e7a49538bba399d36e3e7d6b7dbc6e'}]}, 'timestamp': '2025-12-05 10:02:13.096837', '_unique_id': '965b778763ee47e7b3ce2d4a90409241'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:02:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:02:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:02:13.097 12 ERROR oslo_messaging.notify.messaging Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.185 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.186 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.187 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.187 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.188 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.563 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:02:13 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
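The traceback above bottoms out in kombu.exceptions.OperationalError: [Errno 111] Connection refused: the TCP connect to the AMQP broker was refused, so every oslo.messaging notification send fails while the pool tries to build a fresh connection. A minimal sketch of the same probe, reusing the kombu calls visible in the frames above; the broker URL is a placeholder, since the real transport_url is not shown in this log:

```python
# Hypothetical broker URL -- the real transport_url lives in ceilometer's
# configuration and does not appear in this log.
from kombu import Connection
from kombu.exceptions import OperationalError

BROKER_URL = "amqp://guest:guest@localhost:5672//"

def check_amqp(url: str) -> bool:
    """Return True if the broker accepts a connection."""
    try:
        with Connection(url, connect_timeout=5) as conn:
            # ensure_connection() is the call that raised
            # OperationalError([Errno 111]) in the traceback above.
            conn.ensure_connection(max_retries=1)
        return True
    except OperationalError as exc:
        print(f"broker unreachable: {exc}")
        return False

if __name__ == "__main__":
    check_amqp(BROKER_URL)
```

max_retries=1 makes the probe fail fast; oslo.messaging's own pool retries according to its rabbit driver options instead.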
172.18.0.106:0/3450334063' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.605 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.676 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.677 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.897 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.899 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11673MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", 
"vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.899 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.900 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.991 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.992 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:02:13 localhost nova_compute[280228]: 2025-12-05 10:02:13.992 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:02:14 localhost nova_compute[280228]: 2025-12-05 10:02:14.041 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:02:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:02:14 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1288089841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:02:14 localhost nova_compute[280228]: 2025-12-05 10:02:14.521 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:02:14 localhost nova_compute[280228]: 2025-12-05 10:02:14.527 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:02:14 localhost nova_compute[280228]: 2025-12-05 10:02:14.547 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:02:14 localhost nova_compute[280228]: 2025-12-05 10:02:14.549 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:02:14 localhost nova_compute[280228]: 2025-12-05 10:02:14.550 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:02:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:02:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:15 localhost nova_compute[280228]: 2025-12-05 10:02:15.123 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:02:15 localhost nova_compute[280228]: 2025-12-05 10:02:15.124 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:02:15 localhost nova_compute[280228]: 2025-12-05 10:02:15.124 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
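The inventory dict logged above is what placement turns into schedulable capacity per resource class, as (total - reserved) * allocation_ratio. Worked out for the logged values (a sketch using only numbers from the log):

```python
# Inventory copied from the log line above.
INVENTORY = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
}

for rc, inv in INVENTORY.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: schedulable capacity = {capacity:g}")
# VCPU: 128, MEMORY_MB: 15226, DISK_GB: 40 -- consistent with the
# "Final resource view" above (used_vcpus=1 of total_vcpus=8 physical).
```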
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:02:15 localhost nova_compute[280228]: 2025-12-05 10:02:15.124 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:02:15 localhost nova_compute[280228]: 2025-12-05 10:02:15.124 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:02:15 localhost nova_compute[280228]: 2025-12-05 10:02:15.125 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:02:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:02:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:02:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:02:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:02:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:02:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:02:16 localhost nova_compute[280228]: 2025-12-05 10:02:16.251 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:18 localhost nova_compute[280228]: 2025-12-05 10:02:18.564 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:02:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:02:19 localhost systemd[1]: tmp-crun.I8efxi.mount: Deactivated successfully. 
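The burst of "Running periodic task ComputeManager._*" lines, and the `CONF.reclaim_instance_interval <= 0, skipping` guard, come from oslo_service's periodic-task machinery. A minimal, illustrative sketch of that pattern (not nova's actual code; the option registration here exists only so the snippet runs standalone):

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF
# Registered here only so this standalone sketch runs; nova registers the
# option itself (a non-positive value disables reclaiming, as logged).
CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

class Manager(periodic_task.PeriodicTasks):
    """Decorated methods are collected by PeriodicTasks and invoked via
    run_periodic_tasks(), typically from a service timer."""

    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)
    def _reclaim_queued_deletes(self, context):
        # Same guard as the logged message: a non-positive interval
        # disables the task body entirely.
        if CONF.reclaim_instance_interval <= 0:
            return

Manager().run_periodic_tasks(context=None)
```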
Dec 5 05:02:19 localhost podman[306347]: 2025-12-05 10:02:19.207964352 +0000 UTC m=+0.089188054 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, version=9.6) Dec 5 05:02:19 localhost podman[306347]: 2025-12-05 10:02:19.245151162 +0000 UTC m=+0.126374864 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=edpm, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350) Dec 5 05:02:19 localhost systemd[1]: tmp-crun.qHzwgf.mount: Deactivated successfully. Dec 5 05:02:19 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
Dec 5 05:02:19 localhost podman[306346]: 2025-12-05 10:02:19.259795811 +0000 UTC m=+0.139354632 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:02:19 localhost podman[306346]: 2025-12-05 10:02:19.268385264 +0000 UTC m=+0.147944085 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:02:19 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:02:19 localhost podman[239519]: time="2025-12-05T10:02:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:02:19 localhost podman[239519]: @ - - [05/Dec/2025:10:02:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1" Dec 5 05:02:19 localhost podman[239519]: @ - - [05/Dec/2025:10:02:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18724 "" "Go-http-client/1.1" Dec 5 05:02:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:02:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:21 localhost nova_compute[280228]: 2025-12-05 10:02:21.294 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:23 localhost nova_compute[280228]: 2025-12-05 10:02:23.567 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:02:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:26 localhost nova_compute[280228]: 2025-12-05 10:02:26.323 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:27 localhost openstack_network_exporter[241668]: ERROR 10:02:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:02:27 localhost openstack_network_exporter[241668]: ERROR 10:02:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:02:27 localhost openstack_network_exporter[241668]: ERROR 10:02:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:02:27 localhost openstack_network_exporter[241668]: ERROR 10:02:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:02:27 localhost openstack_network_exporter[241668]: Dec 5 05:02:27 localhost openstack_network_exporter[241668]: ERROR 10:02:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:02:27 localhost openstack_network_exporter[241668]: Dec 5 05:02:28 localhost nova_compute[280228]: 2025-12-05 10:02:28.570 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
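The openstack_network_exporter errors just above ("no control socket files found for the ovs db server" / "ovn-northd") mean its appctl-style calls could not locate the daemons' *.ctl control sockets; on a compute node ovn-northd is likely not expected to run at all, and the exporter sees the host's /var/run/openvswitch and /var/lib/openvswitch/ovn via the volume mounts in its config. A sketch of the same existence check, using those host-side paths:

```python
# Sketch: look for OVS/OVN control sockets the way appctl-style tools do.
# Daemons create <name>.<pid>.ctl files under their run directory; paths
# here are the host sides of the exporter's volume mounts in this log.
import glob

for pattern, what in [
    ("/var/run/openvswitch/ovsdb-server.*.ctl", "the ovs db server"),
    ("/var/lib/openvswitch/ovn/ovn-northd.*.ctl", "ovn-northd"),
]:
    hits = glob.glob(pattern)
    if not hits:
        print(f"no control socket files found for {what}")
    else:
        print(what, "->", hits)
```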
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:02:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:02:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:02:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:02:31 localhost podman[306385]: 2025-12-05 10:02:31.190851571 +0000 UTC m=+0.080775547 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 05:02:31 localhost podman[306385]: 2025-12-05 10:02:31.201216169 +0000 UTC m=+0.091140175 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 05:02:31 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
Dec 5 05:02:31 localhost podman[306392]: 2025-12-05 10:02:31.25441302 +0000 UTC m=+0.133068880 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 05:02:31 localhost podman[306392]: 2025-12-05 10:02:31.291917329 +0000 UTC m=+0.170573209 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 5 05:02:31 localhost podman[306386]: 2025-12-05 10:02:31.303196294 +0000 UTC m=+0.183456764 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:02:31 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
Dec 5 05:02:31 localhost podman[306386]: 2025-12-05 10:02:31.331871523 +0000 UTC m=+0.212131983 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 05:02:31 localhost nova_compute[280228]: 2025-12-05 10:02:31.359 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:31 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
Dec 5 05:02:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:33 localhost nova_compute[280228]: 2025-12-05 10:02:33.573 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:02:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:36 localhost nova_compute[280228]: 2025-12-05 10:02:36.363 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:38 localhost nova_compute[280228]: 2025-12-05 10:02:38.577 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:02:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:02:39 localhost podman[306444]: 2025-12-05 10:02:39.198471373 +0000 UTC m=+0.083791860 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 05:02:39 localhost systemd[1]: tmp-crun.8gFzpN.mount: Deactivated successfully. 
Dec 5 05:02:39 localhost podman[306445]: 2025-12-05 10:02:39.255328456 +0000 UTC m=+0.133888885 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:02:39 localhost podman[306445]: 2025-12-05 10:02:39.270641355 +0000 UTC m=+0.149201794 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:02:39 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
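node_exporter above runs with --collector.systemd and a unit include filter ((edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service), publishing on host port 9100 per 'ports': ['9100:9100']. A minimal scrape that counts the resulting systemd unit-state samples (node_systemd_unit_state is the systemd collector's standard series):

```python
import urllib.request

# Host port taken from the container config above ('ports': ['9100:9100']).
with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as r:
    text = r.read().decode()

# Keep only the systemd collector's unit-state series.
samples = [line for line in text.splitlines()
           if line.startswith("node_systemd_unit_state")]
print(f"{len(samples)} systemd unit-state samples")
```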
Dec 5 05:02:39 localhost podman[306444]: 2025-12-05 10:02:39.306919787 +0000 UTC m=+0.192240284 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 05:02:39 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:02:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:02:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:41 localhost nova_compute[280228]: 2025-12-05 10:02:41.367 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:43 localhost nova_compute[280228]: 2025-12-05 10:02:43.580 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:02:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:02:45 Dec 5 05:02:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:02:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:02:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['.mgr', 'backups', 'manila_metadata', 'images', 'manila_data', 'vms', 'volumes'] Dec 5 05:02:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 
0.0 0.0 0 45071990784 Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:02:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16) Dec 5 05:02:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:02:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:02:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:02:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:02:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
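The pg_autoscaler lines above can be reproduced, approximately, as pg_target = usage_ratio × bias × K with K ≈ 200; K is inferred from the logged numbers (it bundles mon_target_pg_per_osd and the OSD count, neither of which is printed here), so treat this strictly as a reconstruction:

```python
# Usage ratios and biases copied verbatim from the log lines above.
POOLS = {
    ".mgr":            (3.080724804578448e-05, 1.0),
    "vms":             (0.0033250017448352874, 1.0),
    "images":          (0.0014449417225013959, 1.0),
    "manila_metadata": (2.453674623115578e-06, 4.0),
}
K = 200.0  # inferred multiplier, see lead-in

for name, (ratio, bias) in POOLS.items():
    print(f"{name}: pg target ~ {ratio * bias * K:.6f}")
# .mgr and vms match the logged targets exactly; images and
# manila_metadata come out ~0.17% high, presumably because the autoscaler
# normalizes usage ratios across pools before scaling.
```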
Dec 5 05:02:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:02:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:02:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:02:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:02:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:02:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:02:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:02:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:02:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:02:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:02:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:02:46 localhost nova_compute[280228]: 2025-12-05 10:02:46.370 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:48 localhost nova_compute[280228]: 2025-12-05 10:02:48.585 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:02:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail Dec 5 05:02:49 localhost podman[239519]: time="2025-12-05T10:02:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:02:49 localhost podman[239519]: @ - - [05/Dec/2025:10:02:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1" Dec 5 05:02:49 localhost podman[239519]: @ - - [05/Dec/2025:10:02:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18731 "" "Go-http-client/1.1" Dec 5 05:02:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:02:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:02:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
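The podman[239519] access-log lines ("GET /v4.9.3/libpod/containers/json?..." and the stats query) show the libpod REST API answering the container list/stats requests that feed podman_exporter, whose CONTAINER_HOST is unix:///run/podman/podman.sock per its config earlier in this log. A sketch of issuing the same query over that socket:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket (enough for the libpod API)."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

# Socket path from the podman_exporter CONTAINER_HOST setting in this log.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
```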
Dec 5 05:02:50 localhost podman[306492]: 2025-12-05 10:02:50.2083548 +0000 UTC m=+0.089911866 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7)
Dec 5 05:02:50 localhost podman[306492]: 2025-12-05 10:02:50.220365979 +0000 UTC m=+0.101923045 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 5 05:02:50 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:02:50 localhost podman[306491]: 2025-12-05 10:02:50.299637988 +0000 UTC m=+0.185245078 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 5 05:02:50 localhost podman[306491]: 2025-12-05 10:02:50.313738171 +0000 UTC m=+0.199345301 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Dec 5 05:02:50 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:02:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:02:51 localhost nova_compute[280228]: 2025-12-05 10:02:51.372 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:02:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:02:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 05:02:53 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 05:02:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 5 05:02:53 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:02:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:02:53 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:02:53 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 9e5e03c4-b35a-4e8f-87cf-87a25d522e6d (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:02:53 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 9e5e03c4-b35a-4e8f-87cf-87a25d522e6d (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:02:53 localhost ceph-mgr[286454]: [progress INFO root] Completed event 9e5e03c4-b35a-4e8f-87cf-87a25d522e6d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 5 05:02:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 05:02:53 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 05:02:53 localhost nova_compute[280228]: 2025-12-05 10:02:53.589 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:02:54 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:02:54 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:02:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:02:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:02:55 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events
Dec 5 05:02:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 05:02:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:02:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:02:56 localhost nova_compute[280228]: 2025-12-05 10:02:56.429 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:02:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:02:57 localhost openstack_network_exporter[241668]: ERROR 10:02:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:02:57 localhost openstack_network_exporter[241668]: ERROR 10:02:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:02:57 localhost openstack_network_exporter[241668]: ERROR 10:02:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:02:57 localhost openstack_network_exporter[241668]: ERROR 10:02:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:02:57 localhost openstack_network_exporter[241668]:
Dec 5 05:02:57 localhost openstack_network_exporter[241668]: ERROR 10:02:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:02:57 localhost openstack_network_exporter[241668]:
Dec 5 05:02:58 localhost nova_compute[280228]: 2025-12-05 10:02:58.592 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:02:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:02:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:03:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:03:01 localhost nova_compute[280228]: 2025-12-05 10:03:01.434 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:03:02 localhost systemd[1]: tmp-crun.4BLLwA.mount: Deactivated successfully.
Dec 5 05:03:02 localhost podman[306616]: 2025-12-05 10:03:02.215264466 +0000 UTC m=+0.089903937 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 05:03:02 localhost podman[306617]: 2025-12-05 10:03:02.230299617 +0000 UTC m=+0.105576207 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 5 05:03:02 localhost podman[306616]: 2025-12-05 10:03:02.254608363 +0000 UTC m=+0.129247824 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:03:02 localhost podman[306617]: 2025-12-05 10:03:02.262649298 +0000 UTC m=+0.137925828 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:03:02 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:03:02 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:03:02 localhost podman[306618]: 2025-12-05 10:03:02.323788342 +0000 UTC m=+0.197329828 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Dec 5 05:03:02 localhost podman[306618]: 2025-12-05 10:03:02.338598127 +0000 UTC m=+0.212139623 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:03:02 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:03:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:03:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:03:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2953737703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:03:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:03:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2953737703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:03:03 localhost nova_compute[280228]: 2025-12-05 10:03:03.642 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:03:03.910 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:03:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:03:03.910 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:03:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:03:03.911 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:03:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:03:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:03:06 localhost nova_compute[280228]: 2025-12-05 10:03:06.436 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:03:08 localhost nova_compute[280228]: 2025-12-05 10:03:08.645 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:03:09 localhost nova_compute[280228]: 2025-12-05 10:03:09.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:03:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:03:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:03:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 05:03:10 localhost podman[306679]: 2025-12-05 10:03:10.196323514 +0000 UTC m=+0.082211430 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 5 05:03:10 localhost podman[306679]: 2025-12-05 10:03:10.235409802 +0000 UTC m=+0.121297728 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 5 05:03:10 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:03:10 localhost podman[306680]: 2025-12-05 10:03:10.312918248 +0000 UTC m=+0.195870494 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 5 05:03:10 localhost podman[306680]: 2025-12-05 10:03:10.321616874 +0000 UTC m=+0.204569180 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 5 05:03:10 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 05:03:10 localhost nova_compute[280228]: 2025-12-05 10:03:10.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:03:10 localhost nova_compute[280228]: 2025-12-05 10:03:10.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 5 05:03:10 localhost nova_compute[280228]: 2025-12-05 10:03:10.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 5 05:03:10 localhost nova_compute[280228]: 2025-12-05 10:03:10.781 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 5 05:03:10 localhost nova_compute[280228]: 2025-12-05 10:03:10.781 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 5 05:03:10 localhost nova_compute[280228]: 2025-12-05 10:03:10.781 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 5 05:03:10 localhost nova_compute[280228]: 2025-12-05 10:03:10.782 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 5 05:03:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:03:11 localhost nova_compute[280228]: 2025-12-05 10:03:11.151 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 5 05:03:11 localhost nova_compute[280228]: 2025-12-05 10:03:11.169 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 5 05:03:11 localhost nova_compute[280228]: 2025-12-05 10:03:11.169 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 5 05:03:11 localhost nova_compute[280228]: 2025-12-05 10:03:11.439 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:12 localhost nova_compute[280228]: 2025-12-05 10:03:12.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:03:12 localhost nova_compute[280228]: 2025-12-05 10:03:12.529 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:03:12 localhost nova_compute[280228]: 2025-12-05 10:03:12.530 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:03:12 localhost nova_compute[280228]: 2025-12-05 10:03:12.530 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:03:12 localhost nova_compute[280228]: 2025-12-05 10:03:12.531 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 5 05:03:12 localhost nova_compute[280228]: 2025-12-05 10:03:12.531 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 5 05:03:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:03:12 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1628058856' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:03:12 localhost nova_compute[280228]: 2025-12-05 10:03:12.988 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 05:03:13 localhost nova_compute[280228]: 2025-12-05 10:03:13.052 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 5 05:03:13 localhost nova_compute[280228]: 2025-12-05 10:03:13.052 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 5 05:03:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:03:13 localhost nova_compute[280228]: 2025-12-05 10:03:13.469 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 5 05:03:13 localhost nova_compute[280228]: 2025-12-05 10:03:13.470 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11678MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 5 05:03:13 localhost nova_compute[280228]: 2025-12-05 10:03:13.470 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:03:13 localhost nova_compute[280228]: 2025-12-05 10:03:13.471 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:03:13 localhost nova_compute[280228]: 2025-12-05 10:03:13.538 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 5 05:03:13 localhost nova_compute[280228]: 2025-12-05 10:03:13.539 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 5 05:03:13 localhost nova_compute[280228]: 2025-12-05 10:03:13.539 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 5 05:03:13 localhost nova_compute[280228]: 2025-12-05 10:03:13.568 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 5 05:03:13 localhost nova_compute[280228]: 2025-12-05 10:03:13.649 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:03:13 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3548757848' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:03:13 localhost nova_compute[280228]: 2025-12-05 10:03:13.994 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 05:03:14 localhost nova_compute[280228]: 2025-12-05 10:03:14.001 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 5 05:03:14 localhost nova_compute[280228]: 2025-12-05 10:03:14.029 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 5 05:03:14 localhost nova_compute[280228]: 2025-12-05 10:03:14.032 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 5 05:03:14 localhost nova_compute[280228]: 2025-12-05 10:03:14.033 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:03:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:03:15 localhost nova_compute[280228]: 2025-12-05 10:03:15.029 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:03:15 localhost nova_compute[280228]: 2025-12-05 10:03:15.030 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:03:15 localhost nova_compute[280228]: 2025-12-05 10:03:15.031 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:03:15 localhost nova_compute[280228]: 2025-12-05 10:03:15.031 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:03:15 localhost nova_compute[280228]: 2025-12-05 10:03:15.032 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:03:15 localhost nova_compute[280228]: 2025-12-05 10:03:15.032 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 5 05:03:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:03:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:03:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:03:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:03:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )]
Dec 5 05:03:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 5 05:03:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:03:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )]
Dec 5 05:03:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 5 05:03:15 localhost nova_compute[280228]: 2025-12-05 10:03:15.504 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:03:16 localhost nova_compute[280228]: 2025-12-05 10:03:16.467 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:16 localhost nova_compute[280228]: 2025-12-05 10:03:16.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:03:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 5 05:03:17 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e48: np0005546419.zhsnqq(active, since 92s), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 5 05:03:18 localhost nova_compute[280228]: 2025-12-05 10:03:18.650 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v50: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 5 05:03:19 localhost ovn_metadata_agent[158815]: 2025-12-05 10:03:19.765 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:03:19 localhost ovn_metadata_agent[158815]: 2025-12-05 10:03:19.766 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 5 05:03:19 localhost nova_compute[280228]: 2025-12-05 10:03:19.809 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:19 localhost podman[239519]: time="2025-12-05T10:03:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:03:19 localhost podman[239519]: @ - - [05/Dec/2025:10:03:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154285 "" "Go-http-client/1.1"
Dec 5 05:03:19 localhost podman[239519]: @ - - [05/Dec/2025:10:03:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18733 "" "Go-http-client/1.1"
Dec 5 05:03:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:03:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v51: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 5 05:03:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:03:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:03:21 localhost podman[306772]: 2025-12-05 10:03:21.20621438 +0000 UTC m=+0.089524374 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Dec 5 05:03:21 localhost podman[306771]: 2025-12-05 10:03:21.247792054 +0000 UTC m=+0.135216895 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:03:21 localhost podman[306771]: 2025-12-05 10:03:21.262574188 +0000 UTC m=+0.149999029 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 5 05:03:21 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:03:21 localhost podman[306772]: 2025-12-05 10:03:21.273671838 +0000 UTC m=+0.156981822 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products.
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, vcs-type=git) Dec 5 05:03:21 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:03:21 localhost nova_compute[280228]: 2025-12-05 10:03:21.470 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:03:22 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:22.757 261902 INFO oslo.privsep.daemon [None req-c7c3e408-57e8-410d-8353-4db95059883f - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpsf4_i9y1/privsep.sock']#033[00m Dec 5 05:03:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v52: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Dec 5 05:03:23 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:23.374 261902 INFO oslo.privsep.daemon [None req-c7c3e408-57e8-410d-8353-4db95059883f - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 5 05:03:23 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:23.247 306814 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 5 05:03:23 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:23.252 306814 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 5 05:03:23 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:23.256 306814 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Dec 5 05:03:23 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:23.256 306814 INFO oslo.privsep.daemon [-] privsep daemon running as pid 306814#033[00m Dec 5 05:03:23 localhost nova_compute[280228]: 2025-12-05 10:03:23.652 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:03:23 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:23.959 261902 INFO oslo.privsep.daemon [None req-c7c3e408-57e8-410d-8353-4db95059883f - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', 
'/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmplevly6cz/privsep.sock']#033[00m Dec 5 05:03:24 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:24.615 261902 INFO oslo.privsep.daemon [None req-c7c3e408-57e8-410d-8353-4db95059883f - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 5 05:03:24 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:24.479 306823 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 5 05:03:24 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:24.487 306823 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 5 05:03:24 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:24.491 306823 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Dec 5 05:03:24 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:24.492 306823 INFO oslo.privsep.daemon [-] privsep daemon running as pid 306823#033[00m Dec 5 05:03:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:03:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v53: 177 pgs: 177 active+clean; 105 MiB data, 548 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Dec 5 05:03:25 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:25.578 261902 INFO oslo.privsep.daemon [None req-c7c3e408-57e8-410d-8353-4db95059883f - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp8dj4cerm/privsep.sock']#033[00m Dec 5 05:03:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e91 do_prune osdmap full prune enabled Dec 5 05:03:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e92 e92: 6 total, 6 up, 6 in Dec 5 05:03:26 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e92: 6 total, 6 up, 6 in Dec 5 05:03:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:26.235 261902 INFO oslo.privsep.daemon [None req-c7c3e408-57e8-410d-8353-4db95059883f - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 5 05:03:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:26.125 306835 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 5 05:03:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:26.130 306835 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 5 05:03:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:26.133 306835 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Dec 5 05:03:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:26.134 306835 INFO oslo.privsep.daemon [-] privsep daemon running as pid 306835#033[00m Dec 5 05:03:26 localhost nova_compute[280228]: 2025-12-05 10:03:26.474 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:03:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v55: 177 pgs: 177 active+clean; 125 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s Dec 5 05:03:27 
localhost openstack_network_exporter[241668]: ERROR 10:03:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:03:27 localhost openstack_network_exporter[241668]: ERROR 10:03:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:03:27 localhost openstack_network_exporter[241668]: ERROR 10:03:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:03:27 localhost openstack_network_exporter[241668]: ERROR 10:03:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:03:27 localhost openstack_network_exporter[241668]: Dec 5 05:03:27 localhost openstack_network_exporter[241668]: ERROR 10:03:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:03:27 localhost openstack_network_exporter[241668]: Dec 5 05:03:27 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:27.663 261902 INFO neutron.agent.linux.ip_lib [None req-c7c3e408-57e8-410d-8353-4db95059883f - - - - - -] Device tap9fb6f257-c1 cannot be used as it has no MAC address#033[00m Dec 5 05:03:27 localhost nova_compute[280228]: 2025-12-05 10:03:27.733 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:03:27 localhost kernel: device tap9fb6f257-c1 entered promiscuous mode Dec 5 05:03:27 localhost NetworkManager[5960]: [1764929007.7426] manager: (tap9fb6f257-c1): new Generic device (/org/freedesktop/NetworkManager/Devices/18) Dec 5 05:03:27 localhost nova_compute[280228]: 2025-12-05 10:03:27.743 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:03:27 localhost ovn_controller[153000]: 2025-12-05T10:03:27Z|00072|binding|INFO|Claiming lport 9fb6f257-c12d-4081-b335-2e3afe808523 for this chassis. Dec 5 05:03:27 localhost ovn_controller[153000]: 2025-12-05T10:03:27Z|00073|binding|INFO|9fb6f257-c12d-4081-b335-2e3afe808523: Claiming unknown Dec 5 05:03:27 localhost systemd-udevd[306850]: Network interface NamePolicy= disabled on kernel command line. 
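The "Matched UPDATE: SbGlobalUpdateEvent(...)" entries above come from ovsdbapp's row-event dispatch: the agent registers RowEvent subclasses and the IDL calls run() on every matching table update. A minimal sketch of such an event class in Python, assuming ovsdbapp's RowEvent API; the handler body is illustrative, not neutron's exact source:

from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    """Matches any update to the single SB_Global row."""

    def __init__(self):
        # (events, table, conditions) -- mirrors the tuple printed in the log
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
        self.event_name = 'SbGlobalUpdateEvent'

    def run(self, event, row, old):
        # row.nb_cfg is the value the metadata agent reacts to above;
        # `old` carries only the columns that changed (nb_cfg=5 in the log)
        print('SB_Global nb_cfg: %s -> %s' % (getattr(old, 'nb_cfg', '?'), row.nb_cfg))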
Dec 5 05:03:27 localhost ovn_metadata_agent[158815]: 2025-12-05 10:03:27.755 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.199.3/24', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-803975a5-41d4-43a5-b50f-59f1dee42255', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-803975a5-41d4-43a5-b50f-59f1dee42255', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b38a1d46818e4f37b442152341646ff0', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9ca8455-2a4d-496a-b40b-16723165cd1a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9fb6f257-c12d-4081-b335-2e3afe808523) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:03:27 localhost ovn_metadata_agent[158815]: 2025-12-05 10:03:27.758 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 9fb6f257-c12d-4081-b335-2e3afe808523 in datapath 803975a5-41d4-43a5-b50f-59f1dee42255 bound to our chassis
Dec 5 05:03:27 localhost ovn_metadata_agent[158815]: 2025-12-05 10:03:27.764 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port f6612077-5d50-4ca0-8e72-f25acfe95a34 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 5 05:03:27 localhost ovn_metadata_agent[158815]: 2025-12-05 10:03:27.764 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 803975a5-41d4-43a5-b50f-59f1dee42255, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:03:27 localhost ovn_metadata_agent[158815]: 2025-12-05 10:03:27.767 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[c2aeb1ce-90fc-4ffe-9435-0a69b843aa7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:03:27 localhost ovn_metadata_agent[158815]: 2025-12-05 10:03:27.769 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 5 05:03:27 localhost ovn_controller[153000]: 2025-12-05T10:03:27Z|00074|binding|INFO|Setting lport 9fb6f257-c12d-4081-b335-2e3afe808523 ovn-installed in OVS
Dec 5 05:03:27 localhost ovn_controller[153000]: 2025-12-05T10:03:27Z|00075|binding|INFO|Setting lport 9fb6f257-c12d-4081-b335-2e3afe808523 up in Southbound
Dec 5 05:03:27 localhost nova_compute[280228]: 2025-12-05 10:03:27.796 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:27 localhost nova_compute[280228]: 2025-12-05 10:03:27.844 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:27 localhost nova_compute[280228]: 2025-12-05 10:03:27.876 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 5 05:03:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 e93: 6 total, 6 up, 6 in
Dec 5 05:03:28 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e93: 6 total, 6 up, 6 in
Dec 5 05:03:28 localhost nova_compute[280228]: 2025-12-05 10:03:28.659 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:28 localhost podman[306906]:
Dec 5 05:03:28 localhost podman[306906]: 2025-12-05 10:03:28.858358427 +0000 UTC m=+0.087854374 container create 60e38d4d836f214fef1fadfa5b9b23c358cdaea9bc47f82b8bae371f3ed3624d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-803975a5-41d4-43a5-b50f-59f1dee42255, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 5 05:03:28 localhost systemd[1]: Started libpod-conmon-60e38d4d836f214fef1fadfa5b9b23c358cdaea9bc47f82b8bae371f3ed3624d.scope.
Dec 5 05:03:28 localhost podman[306906]: 2025-12-05 10:03:28.810656375 +0000 UTC m=+0.040152392 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:03:28 localhost systemd[1]: Started libcrun container.
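The DbSetCommand transaction above is how the metadata agent persists the nb_cfg it just processed back into Chassis_Private. Sketched with ovsdbapp's command API, assuming an already-connected southbound API object sb_api (a stand-in name; the table, record UUID, and column value are taken from the log entry):

# hedged sketch: sb_api is an assumed, already-connected ovsdbapp backend
# (e.g. neutron's southbound Idl wrapper); db_set takes (table, record,
# *(column, value) pairs) and returns a command to execute in a transaction
cmd = sb_api.db_set(
    'Chassis_Private',
    '22ecc443-b9ab-4c88-a730-5598bd07d403',
    ('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),
)
cmd.execute(check_error=True)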
Dec 5 05:03:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960d95b56eedb0a4128290331053936f332ba03290d07fb82c6aa13bd9c73a81/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:03:28 localhost podman[306906]: 2025-12-05 10:03:28.943113884 +0000 UTC m=+0.172609821 container init 60e38d4d836f214fef1fadfa5b9b23c358cdaea9bc47f82b8bae371f3ed3624d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-803975a5-41d4-43a5-b50f-59f1dee42255, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:03:28 localhost podman[306906]: 2025-12-05 10:03:28.958346301 +0000 UTC m=+0.187842238 container start 60e38d4d836f214fef1fadfa5b9b23c358cdaea9bc47f82b8bae371f3ed3624d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-803975a5-41d4-43a5-b50f-59f1dee42255, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:03:28 localhost dnsmasq[306925]: started, version 2.85 cachesize 150
Dec 5 05:03:28 localhost dnsmasq[306925]: DNS service limited to local subnets
Dec 5 05:03:28 localhost dnsmasq[306925]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:03:28 localhost dnsmasq[306925]: warning: no upstream servers configured
Dec 5 05:03:28 localhost dnsmasq-dhcp[306925]: DHCP, static leases only on 192.168.199.0, lease time 1d
Dec 5 05:03:28 localhost dnsmasq[306925]: read /var/lib/neutron/dhcp/803975a5-41d4-43a5-b50f-59f1dee42255/addn_hosts - 0 addresses
Dec 5 05:03:28 localhost dnsmasq-dhcp[306925]: read /var/lib/neutron/dhcp/803975a5-41d4-43a5-b50f-59f1dee42255/host
Dec 5 05:03:28 localhost dnsmasq-dhcp[306925]: read /var/lib/neutron/dhcp/803975a5-41d4-43a5-b50f-59f1dee42255/opts
Dec 5 05:03:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v57: 177 pgs: 177 active+clean; 125 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 2.6 MiB/s wr, 19 op/s
Dec 5 05:03:29 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:03:29.509 261902 INFO neutron.agent.dhcp.agent [None req-4ca0e050-58e2-4e18-82f3-35aed8e35599 - - - - - -] DHCP configuration for ports {'38c683f7-8b4e-4d7f-aa76-d0a8b308d2c8'} is completed
Dec 5 05:03:29 localhost systemd[1]: tmp-crun.Z2zSN6.mount: Deactivated successfully.
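dnsmasq's "read ..." lines above point at neutron's per-network state directory. A quick, hedged way to inspect those files with only the standard library (the network UUID is copied from the log; file contents depend on the deployment, and addn_hosts reported 0 addresses above, so it may well be empty):

from pathlib import Path

# per-network dnsmasq state dir, as named in the dnsmasq log lines
net_dir = Path('/var/lib/neutron/dhcp/803975a5-41d4-43a5-b50f-59f1dee42255')
for name in ('addn_hosts', 'host', 'opts'):
    f = net_dir / name
    entries = f.read_text().splitlines() if f.exists() else []
    print(f'{f.name}: {len(entries)} line(s)')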
Dec 5 05:03:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:03:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v58: 177 pgs: 177 active+clean; 125 MiB data, 609 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 2.6 MiB/s wr, 35 op/s
Dec 5 05:03:31 localhost nova_compute[280228]: 2025-12-05 10:03:31.507 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v59: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 5 05:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:03:33 localhost podman[306927]: 2025-12-05 10:03:33.256662228 +0000 UTC m=+0.137738772 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 5 05:03:33 localhost podman[306926]: 2025-12-05 10:03:33.228277418 +0000 UTC m=+0.112443297 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 5 05:03:33 localhost podman[306927]: 2025-12-05 10:03:33.299702408 +0000 UTC m=+0.180778952 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 5 05:03:33 localhost podman[306926]: 2025-12-05 10:03:33.314776289 +0000 UTC m=+0.198942188 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 05:03:33 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:03:33 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:03:33 localhost podman[306928]: 2025-12-05 10:03:33.37485019 +0000 UTC m=+0.248878458 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 5 05:03:33 localhost podman[306928]: 2025-12-05 10:03:33.411936288 +0000 UTC m=+0.285964566 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS)
Dec 5 05:03:33 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:03:33 localhost nova_compute[280228]: 2025-12-05 10:03:33.662 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:03:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v60: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 4.5 MiB/s wr, 42 op/s
Dec 5 05:03:36 localhost nova_compute[280228]: 2025-12-05 10:03:36.540 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 2.0 MiB/s wr, 22 op/s
Dec 5 05:03:38 localhost nova_compute[280228]: 2025-12-05 10:03:38.664 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 1.9 MiB/s wr, 20 op/s
Dec 5 05:03:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:03:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Dec 5 05:03:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:03:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
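The "GET /v4.9.3/libpod/..." access-log lines (at 10:03:19 above, and again at 10:03:49 below) are REST calls against podman's API socket, which the podman_exporter container also mounts. A minimal client sketch using only the Python standard library; the root socket path /run/podman/podman.sock is the one shown in the exporter's config above, and reading it requires matching privileges:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a unix socket, enough to talk to the libpod API."""

    def __init__(self, path):
        super().__init__('localhost')  # host is ignored for unix sockets
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
for c in json.loads(conn.getresponse().read()):
    print(c['Id'][:12], c.get('Names'))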
Dec 5 05:03:41 localhost podman[306988]: 2025-12-05 10:03:41.208355626 +0000 UTC m=+0.092381183 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:03:41 localhost podman[306987]: 2025-12-05 10:03:41.24863698 +0000 UTC m=+0.134754561 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:03:41 localhost podman[306988]: 2025-12-05 10:03:41.274155732 +0000 UTC m=+0.158181279 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 5 05:03:41 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 05:03:41 localhost podman[306987]: 2025-12-05 10:03:41.344979622 +0000 UTC m=+0.231097193 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 5 05:03:41 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
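node_exporter's systemd collector above is scoped with a unit-include regex. A two-line check of what that pattern admits (the pattern is copied from the log; the unit names are hypothetical examples, and anchored matching is assumed here):

import re

pattern = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
for unit in ('ovs-vswitchd.service', 'edpm_nova_compute.service',
             'rsyslog.service', 'sshd.service'):
    print(unit, bool(pattern.fullmatch(unit)))  # only sshd.service is excluded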
Dec 5 05:03:41 localhost nova_compute[280228]: 2025-12-05 10:03:41.543 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail; 4.9 KiB/s rd, 1.7 MiB/s wr, 8 op/s
Dec 5 05:03:43 localhost nova_compute[280228]: 2025-12-05 10:03:43.668 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:03:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:03:45
Dec 5 05:03:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 5 05:03:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap
Dec 5 05:03:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['manila_metadata', 'manila_data', 'volumes', '.mgr', 'images', 'backups', 'vms']
Dec 5 05:03:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes
Dec 5 05:03:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:03:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Dec 5 05:03:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:03:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:03:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 5 05:03:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:03:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:03:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:03:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:03:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:03:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:03:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:03:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:03:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 5 05:03:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:03:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:03:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:03:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:03:46 localhost nova_compute[280228]: 2025-12-05 10:03:46.577 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:03:48 localhost nova_compute[280228]: 2025-12-05 10:03:48.670 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:03:49 localhost podman[239519]: time="2025-12-05T10:03:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:03:49 localhost podman[239519]: @ - - [05/Dec/2025:10:03:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1"
Dec 5 05:03:49 localhost podman[239519]: @ - - [05/Dec/2025:10:03:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19228 "" "Go-http-client/1.1"
Dec 5 05:03:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:03:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:03:51 localhost nova_compute[280228]: 2025-12-05 10:03:51.609 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:03:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:03:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:03:52 localhost systemd[1]: tmp-crun.eJEEvL.mount: Deactivated successfully.
Dec 5 05:03:52 localhost podman[307034]: 2025-12-05 10:03:52.219104587 +0000 UTC m=+0.100001826 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 5 05:03:52 localhost podman[307034]: 2025-12-05 10:03:52.263724365 +0000 UTC m=+0.144621574 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:03:52 localhost podman[307035]: 2025-12-05 10:03:52.273154583 +0000 UTC m=+0.147157780 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 5 05:03:52 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
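The pg_autoscaler figures logged at 10:03:45 above are internally consistent with the usual target formula, assuming the default mon_target_pg_per_osd = 100 and pool size 3 across the 6 OSDs reported in the osdmap:

\[ \text{pg\_target} = \text{capacity ratio} \times \text{bias} \times \frac{N_{\mathrm{OSD}} \times \text{mon\_target\_pg\_per\_osd}}{\text{size}} \]

For 'vms': \( 0.0033250017 \times 1.0 \times \frac{6 \times 100}{3} = 0.66500035 \), and for '.mgr': \( 3.0807248 \times 10^{-5} \times 200 = 0.0061614496 \), matching the logged targets to the digit. The result is then quantized to a power of two, and pools stay at their current pg_num (32 here) because the autoscaler only resizes when the target is off by its threshold factor (3x by default). 'manila_metadata' logs 0.001953125 (exactly \(2^{-9}\)), which this formula does not reproduce, so that value evidently comes from a different branch of the calculation.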
Dec 5 05:03:52 localhost podman[307035]: 2025-12-05 10:03:52.285594155 +0000 UTC m=+0.159597262 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public) Dec 5 05:03:52 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
Dec 5 05:03:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail Dec 5 05:03:53 localhost nova_compute[280228]: 2025-12-05 10:03:53.673 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:03:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:03:54 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:03:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:03:54 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:03:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:03:54 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:03:54 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 8d12cef5-0490-49fc-a641-a2a2d45cd15b (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:03:54 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 8d12cef5-0490-49fc-a641-a2a2d45cd15b (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:03:54 localhost ceph-mgr[286454]: [progress INFO root] Completed event 8d12cef5-0490-49fc-a641-a2a2d45cd15b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:03:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:03:54 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:03:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:03:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail Dec 5 05:03:55 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:03:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:03:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:03:55 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:03:55 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:03:55 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:03:56 localhost nova_compute[280228]: 
2025-12-05 10:03:56.612 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:03:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail Dec 5 05:03:57 localhost openstack_network_exporter[241668]: ERROR 10:03:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:03:57 localhost openstack_network_exporter[241668]: ERROR 10:03:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:03:57 localhost openstack_network_exporter[241668]: ERROR 10:03:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:03:57 localhost openstack_network_exporter[241668]: ERROR 10:03:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:03:57 localhost openstack_network_exporter[241668]: Dec 5 05:03:57 localhost openstack_network_exporter[241668]: ERROR 10:03:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:03:57 localhost openstack_network_exporter[241668]: Dec 5 05:03:57 localhost ovn_controller[153000]: 2025-12-05T10:03:57Z|00076|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory Dec 5 05:03:58 localhost nova_compute[280228]: 2025-12-05 10:03:58.674 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:03:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail Dec 5 05:04:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:04:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail Dec 5 05:04:01 localhost nova_compute[280228]: 2025-12-05 10:04:01.616 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail Dec 5 05:04:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 5 05:04:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1239661781' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 5 05:04:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 5 05:04:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1239661781' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 5 05:04:03 localhost nova_compute[280228]: 2025-12-05 10:04:03.676 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:03.910 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:04:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:03.911 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:04:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:03.911 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:04:04 localhost systemd[1]: tmp-crun.xh3OiS.mount: Deactivated successfully. 
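The openstack_network_exporter errors at 10:03:57 above ("no control socket files found for the ovs db server" / "for ovn-northd") mean the exporter found no Unix control sockets to issue appctl-style calls against. A quick diagnostic sketch; the socket locations below are the conventional defaults and are assumptions to verify against this deployment's mounts:

    import glob

    # Conventional control-socket patterns; adjust to the deployment
    # (the exporter container remaps /var/lib/openvswitch/ovn to /run/ovn).
    patterns = {
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
    }

    for daemon, pattern in patterns.items():
        hits = glob.glob(pattern)
        # An empty result reproduces the exporter's failure mode above.
        print(daemon, "->", hits or f"no control socket matching {pattern}")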
Dec 5 05:04:04 localhost podman[307160]: 2025-12-05 10:04:04.264813911 +0000 UTC m=+0.135779712 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 5 05:04:04 localhost systemd[1]: tmp-crun.jAmaF0.mount: Deactivated successfully. 
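The ovn_metadata_agent DEBUG lines at 10:04:03 above are oslo.concurrency's standard lock trace: Acquiring, acquired (with wait time), and released (with hold time) around ProcessMonitor._check_child_processes. A minimal sketch of the pattern that produces them, assuming oslo.concurrency is installed; only the lock name is taken from the log, the function body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the named in-process lock held; lockutils emits the
        # Acquiring/acquired/released DEBUG lines seen above around the call.
        pass

    check_child_processes()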
Dec 5 05:04:04 localhost podman[307159]: 2025-12-05 10:04:04.275651513 +0000 UTC m=+0.148101160 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 05:04:04 localhost podman[307159]: 2025-12-05 10:04:04.31108771 +0000 UTC m=+0.183537387 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 05:04:04 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
Dec 5 05:04:04 localhost podman[307160]: 2025-12-05 10:04:04.349214938 +0000 UTC m=+0.220180789 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:04:04 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
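Every healthcheck above follows the same four-step cycle: systemd starts the transient "/usr/bin/podman healthcheck run <id>" unit, podman records a health_status event, the exec session dies, and the unit deactivates. A small sketch, illustrative only, that pairs each health_status with its exec_died by container ID when run over a saved copy of this log:

    import re
    import sys

    # Event lines carry the 64-hex container id followed by labels; in the
    # lines above image= precedes name=, so the first \bname= match is the
    # container name (container_name= does not match \bname=).
    EVENT = re.compile(
        r"container (health_status|exec_died) ([0-9a-f]{64}).*?\bname=([^,)]+)")

    pending = {}
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVENT.search(line)
            if not m:
                continue
            kind, cid, name = m.groups()
            if kind == "health_status":
                pending[cid] = ("healthy"
                                if "health_status=healthy" in line
                                else "unhealthy")
            elif cid in pending:
                print(f"{name} ({cid[:12]}): {pending.pop(cid)}")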
Dec 5 05:04:04 localhost podman[307161]: 2025-12-05 10:04:04.4005332 +0000 UTC m=+0.266969883 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 5 05:04:04 localhost podman[307161]: 2025-12-05 10:04:04.410612639 +0000 UTC m=+0.277049352 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true) Dec 5 05:04:04 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:04:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:04:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail Dec 5 05:04:06 localhost nova_compute[280228]: 2025-12-05 10:04:06.620 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail Dec 5 05:04:07 localhost nova_compute[280228]: 2025-12-05 10:04:07.081 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:08 localhost nova_compute[280228]: 2025-12-05 10:04:08.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:04:08 localhost nova_compute[280228]: 2025-12-05 10:04:08.679 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail Dec 5 05:04:09 localhost nova_compute[280228]: 2025-12-05 10:04:09.528 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:04:09 localhost nova_compute[280228]: 2025-12-05 10:04:09.529 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:04:09 localhost nova_compute[280228]: 2025-12-05 10:04:09.529 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 5 05:04:09 localhost nova_compute[280228]: 2025-12-05 10:04:09.780 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:04:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v78: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail Dec 5 05:04:11 localhost nova_compute[280228]: 2025-12-05 10:04:11.524 280232 DEBUG 
oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:04:11 localhost nova_compute[280228]: 2025-12-05 10:04:11.525 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:04:11 localhost nova_compute[280228]: 2025-12-05 10:04:11.525 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:04:11 localhost nova_compute[280228]: 2025-12-05 10:04:11.601 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:04:11 localhost nova_compute[280228]: 2025-12-05 10:04:11.601 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:04:11 localhost nova_compute[280228]: 2025-12-05 10:04:11.602 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:04:11 localhost nova_compute[280228]: 2025-12-05 10:04:11.602 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:04:11 localhost nova_compute[280228]: 2025-12-05 10:04:11.623 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:12 localhost nova_compute[280228]: 2025-12-05 10:04:12.032 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": 
"c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:04:12 localhost nova_compute[280228]: 2025-12-05 10:04:12.072 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:04:12 localhost nova_compute[280228]: 2025-12-05 10:04:12.073 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:04:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:04:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:04:12 localhost systemd[1]: tmp-crun.g3J33u.mount: Deactivated successfully. Dec 5 05:04:12 localhost podman[307220]: 2025-12-05 10:04:12.211150656 +0000 UTC m=+0.094511378 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:04:12 localhost podman[307221]: 2025-12-05 10:04:12.286515095 +0000 UTC m=+0.166725411 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', 
'--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:04:12 localhost podman[307220]: 2025-12-05 10:04:12.316077051 +0000 UTC m=+0.199437783 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 5 05:04:12 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
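The instance_info_cache update at 10:04:12 above logs the instance's full Neutron network_info: port c2f95d81-23... on network "private", fixed address 192.168.0.214 with floating 192.168.122.20 attached. A minimal sketch of walking that structure once it is loaded as JSON; the literal below is a trimmed copy of the logged blob, not a live query:

    import json

    # Trimmed from the network_info logged above.
    network_info = json.loads('''
    [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb",
      "network": {"label": "private", "subnets": [{"ips": [
          {"address": "192.168.0.214", "type": "fixed",
           "floating_ips": [{"address": "192.168.122.20",
                             "type": "floating"}]}
      ]}]}}]
    ''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats)
    # -> c2f95d81-2317-46b9-8146-596eac8f9acb 192.168.0.214 ['192.168.122.20']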
Dec 5 05:04:12 localhost podman[307221]: 2025-12-05 10:04:12.329865334 +0000 UTC m=+0.210075680 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:04:12 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.950 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.951 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.986 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.987 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3ae833c2-2eff-405e-a66c-83dde094a0aa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:04:12.951397', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c0134f20-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.125702887, 'message_signature': '4863213034674f0c433e27344f2870e30170c9af202a8d69eebafad017e663ef'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:04:12.951397', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c0136992-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.125702887, 'message_signature': 'e2b816857844b845e24b0e99daaff8d966731696c212a129de1e5de604430c6e'}]}, 'timestamp': '2025-12-05 10:04:12.987901', '_unique_id': '6add5a5ccdc94f2daa82b6941bd583b3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.989 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.991 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 05:04:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.996 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b3801f9b-708b-4a8a-8a36-78b1860c4d2e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:04:12.991602', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'c014d49e-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.165952251, 'message_signature': '2fa15b86a74a2a6a468ddcf53af454d6451041e575191afbc034e838ca74b97e'}]}, 'timestamp': '2025-12-05 10:04:12.997389', '_unique_id': 'fcbd19d0e5044e329472733ef35785eb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:04:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:12.999 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.001 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.001 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '14f38535-ea43-4dd5-a9c4-22601e6108bd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:04:13.001873', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'c015a2ac-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.165952251, 'message_signature': '16ef583bb813e940abb60358daecd450644344092f262736a8d8f161a2eed53b'}]}, 'timestamp': '2025-12-05 10:04:13.002563', '_unique_id': 'fc9739dd1ae94b999f01c9d70ceacac2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:04:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.004 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.005 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.005 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.005 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '29519f51-5e9f-4772-b38f-280f6d3c8f53', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:04:13.005394', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c0162826-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.125702887, 'message_signature': 'cb368dce77fb9d4f16ebf6754a7d492a59b202fc83a15ffff96f1fa84c5791b7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:04:13.005394', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c0163866-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.125702887, 'message_signature': 'ec966f38f36a0490e8320ee9a49d3acc50239c7291a3ff3a386d74e77ec9e234'}]}, 'timestamp': '2025-12-05 10:04:13.006238', '_unique_id': 'ed7ef88dafd04ac299e03e52691ca44f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.007 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.008 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.027 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 14980000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3053820f-e9ea-4190-9385-7efeb9fc219d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 14980000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:04:13.008493', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'c019902e-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.201744688, 'message_signature': '50851f9a934f1db24b8a8b982faacf08d4b0c45390b3cd323f99d618f465aa8c'}]}, 'timestamp': '2025-12-05 10:04:13.028209', '_unique_id': 'ecd0fb6a033b48dda8758592058f482d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, 
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.029 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.030 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.030 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4435b9e1-68b7-4fca-8685-79c43a0ca0f7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:04:13.030846', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'c01a0a72-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.165952251, 'message_signature': '283e16b09fd92e0b610a7d6f14a41620b55cdc6c4bcff32790aa604acc30ff6d'}]}, 'timestamp': '2025-12-05 10:04:13.031337', '_unique_id': 'fdf042d0bf6b48ca9ff9389fb1bb241f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:04:13.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.032 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.033 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.033 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to 
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '27d04853-2f7b-4dd5-9c44-2ba230c170ce', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:04:13.033584', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'c01a7584-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.165952251, 'message_signature': '8f92d1f2fba2e08016fd07c7c043da347d1066d0e984af672e30e4790d0e036c'}]}, 'timestamp': '2025-12-05 10:04:13.034050', '_unique_id': '8692a10792794fd6b073d6fdcb317afa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.034 12 ERROR oslo_messaging.notify.messaging
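The two stacked tracebacks joined by "The above exception was the direct cause of the following exception" are ordinary Python exception chaining: kombu catches the socket-level error and re-raises it as its library error with "from exc" (the raise ConnectionError(str(exc)) from exc frame above). A self-contained sketch of that pattern, using illustrative names rather than kombu's real internals:

class LibraryOperationalError(Exception):
    """Stand-in for kombu.exceptions.OperationalError (illustrative)."""

def low_level_connect():
    # Stand-in for self.sock.connect(sa) against a dead listener.
    raise ConnectionRefusedError(111, "Connection refused")

def establish_connection():
    try:
        low_level_connect()
    except OSError as exc:
        # "from exc" records the original error as __cause__, which is
        # exactly what produces the two-part traceback in this log.
        raise LibraryOperationalError(str(exc)) from exc

try:
    establish_connection()
except LibraryOperationalError as err:
    print(type(err.__cause__).__name__)  # -> ConnectionRefusedError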
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.036 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.036 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.037 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9b005d9d-6022-41b0-bc75-d9cdba3a084f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:04:13.036204', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'c01add26-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.165952251, 'message_signature': 'e99cfbafb2615aaff44f1300e95c2bb8cbbd9583f51ca443fb6e360d559ec8b5'}]}, 'timestamp': '2025-12-05 10:04:13.036763', '_unique_id': 'f362c1922be845078d1d863c5ad6a977'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[... 10:04:13.037 ERROR records: oslo.messaging/kombu traceback identical to the 10:04:13.034 traceback above, ending kombu.exceptions.OperationalError: [Errno 111] Connection refused ...]
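The ensure_connection / retry_over_time frames recur in every traceback because oslo.messaging asks kombu to retry the broker connection with backoff before giving up. A hedged sketch of driving that same retry loop directly with kombu; the broker URL and retry numbers are placeholders, while ensure_connection and its parameters are kombu's documented API:

from kombu import Connection
from kombu.exceptions import OperationalError

def on_retry(exc, interval):
    print(f"broker unreachable ({exc!r}); retrying in {interval}s")

conn = Connection("amqp://guest:guest@localhost:5672//")  # placeholder URL
try:
    # interval_start/interval_step/interval_max are the backoff knobs that
    # kombu.utils.functional.retry_over_time applies between attempts.
    conn.ensure_connection(errback=on_retry, max_retries=3,
                           interval_start=1, interval_step=2, interval_max=10)
    print("connected")
except OperationalError as exc:
    # Matches the final line of each traceback in this log.
    print(f"gave up: {exc}")
finally:
    conn.release()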
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.038 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.038 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.039 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.040 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '90ddf3d5-18b3-41c7-a188-40683f395274', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:04:13.039093', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'c01b4d38-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.165952251, 'message_signature': '6d0c8c77064ce819e2adf1f272f7aa9a84b7be95042ebdde51e06422d8e95d45'}]}, 'timestamp': '2025-12-05 10:04:13.039570', '_unique_id': '4cfbaf6b837e4aab9a6f897351c415cd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[... 10:04:13.040 ERROR records: oslo.messaging/kombu traceback identical to the 10:04:13.034 traceback above, ending kombu.exceptions.OperationalError: [Errno 111] Connection refused ...]
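Because every "Could not send notification" record carries its full sample payload, a flood like this one can be triaged mechanically, for instance by counting dropped sample batches per meter. A small sketch over a saved copy of this journal (the file name is a placeholder; the regex keys off the 'counter_name' fields visible in the payloads above, which sit on the same record as the error header once the log is one record per line):

import re
from collections import Counter

COUNTER = re.compile(r"'counter_name': '([^']+)'")
dropped = Counter()
with open("ceilometer_agent_compute.log", encoding="utf-8") as fh:  # placeholder path
    for line in fh:
        if "Could not send notification" in line:
            dropped.update(COUNTER.findall(line))
for meter, count in dropped.most_common():
    print(f"{meter}: {count} dropped sample batch(es)")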
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.041 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.041 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.043 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '470ab77b-b9c3-47d2-bccb-0b5c793e7498', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:04:13.041695', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'c01bb1d8-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.165952251, 'message_signature': '8a5b7f56cb523d66305664d8cb0260a505298126d05a2d907077336641d7d6dc'}]}, 'timestamp': '2025-12-05 10:04:13.042145', '_unique_id': '9ea2a76f7c444a69a5e2c5880321d290'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[... 10:04:13.043 ERROR records: oslo.messaging/kombu traceback identical to the 10:04:13.034 traceback above, ending kombu.exceptions.OperationalError: [Errno 111] Connection refused ...]
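For reference, the backoff that kombu.utils.functional.retry_over_time (present in every traceback above) applies between connection attempts can be pictured as a loop like the following; this is a simplified illustration, not the library's actual code:

import time

def retry_over_time(fun, catch, max_retries=None,
                    interval_start=2, interval_step=2, interval_max=30):
    """Call fun() until it succeeds, sleeping with linear backoff."""
    retries = 0
    interval = interval_start
    while True:
        try:
            return fun()
        except catch:
            retries += 1
            if max_retries is not None and retries > max_retries:
                raise  # in kombu this surfaces via _reraise_as_library_errors
            time.sleep(interval)
            interval = min(interval + interval_step, interval_max)

Once max_retries is exhausted the original error propagates and, in kombu, is re-raised as OperationalError, which is the terminal line of each traceback here.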
Payload={'message_id': '57747662-579c-4421-8f66-af9f020f16a9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:04:13.044397', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c01c1b78-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.125702887, 'message_signature': '416b853353d50ad7139078c867f1f133e5b79717a7284255538ff26121f712c8'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:04:13.044397', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c01c2b54-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.125702887, 'message_signature': 'b62533582a7677101955403ac4941ee6202f24816e72f494d9f09f00b8aece65'}]}, 'timestamp': '2025-12-05 10:04:13.045223', '_unique_id': '1cb932529f89438497ef049ac8f869b4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.046 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.047 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.047 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.047 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.048 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
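Every traceback in this stretch bottoms out in the same frame: amqp's transport calling self.sock.connect(sa) and getting ECONNREFUSED, which means nothing is accepting connections at the broker address (not a timeout, not an auth failure). A minimal sketch of that innermost step, assuming the default AMQP port 5672; the real endpoint comes from the agent's configured transport_url, so host and port here are placeholders:

    # Probe the (assumed) RabbitMQ endpoint the way amqp's transport does:
    # a plain TCP connect. Errno 111 (ECONNREFUSED) means the port is
    # closed, i.e. the broker process is down or not listening there.
    import errno
    import socket

    BROKER = ("localhost", 5672)  # hypothetical; taken from transport_url in practice

    try:
        with socket.create_connection(BROKER, timeout=5):
            print("broker reachable")
    except OSError as exc:
        # With the broker stopped this prints: errno 111: Connection refused
        print(f"errno {exc.errno}: {exc.strerror}")
        if exc.errno == errno.ECONNREFUSED:
            print("nothing listening at", BROKER)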
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.049 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1d6c2e06-e06f-42a7-808c-4ac44980fb46', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:04:13.047630', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c01c9e86-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.125702887, 'message_signature': '9a4a30270baa2f9b6ffb7126e6b6c484ce235602d985848f7db2e2abcda1e63b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:04:13.047630', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c01cb696-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.125702887, 'message_signature': '64a3ca6a854084fc6909d8d4ccb229da84d6d6b2526fbe34283ffc17526d7af7'}]}, 'timestamp': '2025-12-05 10:04:13.048798', '_unique_id': 'ba3a5aa615a641ddb503a15143a449fb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[identical ConnectionRefusedError -> kombu.exceptions.OperationalError traceback omitted; same frames as the first occurrence above]
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.050 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.064 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.065 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.066 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5676ce64-fa57-4df7-9c61-ac3b315ddd3e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:04:13.050926', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c01f2ffc-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.225242298, 'message_signature': '07fb6b9890c2903f4fe7bc86d59adcce9bea76ec2d7b72bfd99f1a6bdad696dc'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:04:13.050926', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c01f441a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.225242298, 'message_signature': '810b2e8065420df6ea6bc02f102ad6d51689e303f1500e42fa948126e6e2e5dd'}]}, 'timestamp': '2025-12-05 10:04:13.065539', '_unique_id': '2bbdeb18ba54412486443d8fbf863076'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[identical ConnectionRefusedError -> kombu.exceptions.OperationalError traceback omitted; same frames as the first occurrence above]
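Because each dropped batch is logged together with its full payload, a saved excerpt of this journal can be mined to gauge what was lost per meter. A rough sketch; the file name is hypothetical, and it assumes each "Could not send notification" entry sits on a single line (as in this cleaned-up excerpt). It counts only the first counter_name in each batch, which is enough for a per-meter tally:

    # Tally dropped notification batches per meter from a saved log file.
    import re
    from collections import Counter

    drops = Counter()
    pattern = re.compile(r"Could not send notification .*?'counter_name': '([^']+)'")
    with open("ceilometer_agent_compute.log", encoding="utf-8") as fh:  # hypothetical path
        for line in fh:
            m = pattern.search(line)
            if m:
                drops[m.group(1)] += 1

    for meter, count in drops.most_common():
        print(f"{meter}: {count} dropped batch(es)")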
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.067 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.068 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.068 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.068 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
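The Payload={...} blobs are plain Python dict reprs (single quotes, None, nested dicts), so samples that never reached the notifications queue can be recovered from a captured log with ast.literal_eval. A sketch with a shortened stand-in payload; a real one would be the full blob cut from a log line:

    # Parse a salvaged Payload={...} blob and list the samples it carried.
    import ast

    # Stand-in for one captured payload; real blobs carry many more fields.
    payload_text = ("{'event_type': 'telemetry.polling', 'payload': {'samples': "
                    "[{'counter_name': 'disk.device.usage', 'counter_volume': 1073741824, "
                    "'counter_unit': 'B', 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda'}]}}")

    notification = ast.literal_eval(payload_text)
    for sample in notification["payload"]["samples"]:
        print(sample["counter_name"], sample["counter_volume"],
              sample["counter_unit"], sample["resource_id"])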
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.070 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '00d009a8-dd1e-40df-b3a6-cab2612971bd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:04:13.068278', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c01fc160-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.225242298, 'message_signature': 'fa7d3b05c6d168bf8638aefd90ca5cfeece5edc401c7b6c3d05e16ddbfc73059'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:04:13.068278', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c01fd218-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.225242298, 'message_signature': 'cb95b3c3ec39138a3b42fbe54ea27b2b5e314b21912ada05e4fa63919db21e40'}]}, 'timestamp': '2025-12-05 10:04:13.069163', '_unique_id': '234b042580174b5f8ccdaf7ce79083e4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[identical ConnectionRefusedError -> kombu.exceptions.OperationalError traceback omitted; same frames as the first occurrence above]
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.071 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.071 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.073 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '60b67c74-6a3a-46a0-a413-966da4ed7904', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:04:13.071682', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'c020472a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.201744688, 'message_signature': '38424438018dfafb5b1d2fd826a9ea13393d29a3a99287eb425d4d7f34c7627e'}]}, 'timestamp': '2025-12-05 10:04:13.072172', '_unique_id': 'c745cd8a161240458b3181552748fd12'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[identical ConnectionRefusedError -> kombu.exceptions.OperationalError traceback omitted; same frames as the first occurrence above]
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.074 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.074 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '67bbd27e-ac3b-4547-9599-0dc6b6379630', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:04:13.074475', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'c020b2be-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.165952251, 'message_signature': '2a583d91990bb1871ad5325e789329f8ab8256afef92c6f5fc1d4f7816ab091a'}]}, 'timestamp': '2025-12-05 10:04:13.074944', '_unique_id': '143e0e3ee0c14c47881ad0b4d2c2e678'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno
111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.075 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.077 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.077 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.077 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 
_stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2d0ca9c8-7817-47e7-923c-08ddf8e3a05d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:04:13.077132', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c0211b6e-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.225242298, 'message_signature': '8bd6b5ec9be5b4715ae657962d4b61842e09e1cc412af9257e4fcb8f9d65f4ec'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:04:13.077132', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c0212db6-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.225242298, 'message_signature': 'b9299aa17ae8ff8314caa4b7e1e7a22f19f78239a3cd9e7c18dac8a4f96bcae5'}]}, 'timestamp': '2025-12-05 10:04:13.078067', '_unique_id': '038a702baffe4c8a9f6c9eeb409a87f4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 
12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.079 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.080 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.080 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.080 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.081 12 DEBUG 
ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '344b12c9-fac7-4e6f-9cfa-596128d83192', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:04:13.080608', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c021a2b4-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.125702887, 'message_signature': '82a77839782e4febe3e69e4fa6c59b2a247d5db9228514fa67ec9760a83a17e5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:04:13.080608', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c021b47a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.125702887, 'message_signature': '1a4afcf2bf59d695516602d19d57711c23cc7a1398d9ab8a495412c5ca52d956'}]}, 'timestamp': '2025-12-05 10:04:13.081513', '_unique_id': '53386972f72f4ef6ba25ee17dcd0b729'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:04:13.082 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 
5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.082 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.083 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.083 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.085 12 DEBUG ceilometer.compute.pollsters [-] 
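
For orientation, this is roughly the send path these records keep exercising: the polling manager hands each sample batch to an oslo.messaging Notifier that publishes to the 'notifications' topic at priority SAMPLE. A sketch under assumed values (the URL, driver name, and empty payload are illustrative stand-ins, not the agent's real configuration):

    from oslo_config import cfg
    import oslo_messaging

    # Assumed broker URL and driver; ceilometer wires these from its config.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url="rabbit://guest:guest@localhost:5672/")
    notifier = oslo_messaging.Notifier(
        transport, publisher_id="ceilometer.polling",
        driver="messagingv2", topics=["notifications"])

    # With the broker down this call does not raise: the notify driver catches
    # the failure and logs "Could not send notification ..." as seen above,
    # and the sample batch is simply dropped.
    notifier.sample({}, "telemetry.polling", {"samples": []})
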
96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dbaf60ff-f096-40b7-86fb-d40109d1b4f0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:04:13.083764', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c02248d6-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.125702887, 'message_signature': '8a8ef5673a9b5617c2d69da19fc089fbefa777d1527277fa3084704e9956a6f8'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:04:13.083764', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c022637a-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.125702887, 'message_signature': '550fcf7eb8f1c2cbe90ae5e43c9f52f10bff6d6e712f8f1bb6e524925665c2f6'}]}, 'timestamp': '2025-12-05 10:04:13.085994', '_unique_id': '699a7b87500541c9b20de93efd02c292'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR 
oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:04:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.086 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.088 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.088 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '274124de-b80a-45fa-af2c-08e55605309e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:04:13.088197', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'c022cc20-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.165952251, 'message_signature': '1c49a09cb557dc17c6f00d4a96d3b14c7312f552276d6c5854f011a5b5c75b52'}]}, 'timestamp': '2025-12-05 10:04:13.088696', '_unique_id': '72d4ad4f346d41e6b8de9ea59f2c644b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging 
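
Because each failed record embeds its full sample batch as a Python repr (Payload={...}; note the single quotes and None values, so it is not JSON), the dropped samples remain recoverable from the log itself. A hypothetical helper keyed to the framing of these records:

    import ast
    import re

    # The payload repr always ends at "}: kombu.exceptions" in these records,
    # and no inner "}" is followed by ": kombu", so a non-greedy match works.
    PAYLOAD_RE = re.compile(r"Payload=(\{.*?\}): kombu\.exceptions", re.S)

    def dropped_samples(log_text):
        """Yield every sample dict embedded in failed-notification records."""
        for match in PAYLOAD_RE.finditer(log_text):
            payload = ast.literal_eval(match.group(1))  # repr, not json.loads
            yield from payload["payload"]["samples"]

    # Example: counter names lost in this window --
    # {s["counter_name"] for s in dropped_samples(open("messages").read())}
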
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.089 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.090 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.090 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'abcb0134-66f6-4e8a-8f48-a8c7d147c9bb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:04:13.090803', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'c023303e-d1c1-11f0-8ba6-fa163e982365', 'monotonic_time': 12168.165952251, 'message_signature': '9ceedbd727efed021d63d7d8e0029b6fee91682b6243a81265401c81442c6f8e'}]}, 'timestamp': '2025-12-05 10:04:13.091288', '_unique_id': '11c0ebe1c00e456198f4307bccfa4de6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:04:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:04:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:04:13.092 12 ERROR oslo_messaging.notify.messaging Dec 5 05:04:13 localhost nova_compute[280228]: 2025-12-05 10:04:13.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:04:13 localhost nova_compute[280228]: 2025-12-05 10:04:13.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:04:13 localhost nova_compute[280228]: 2025-12-05 10:04:13.528 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:04:13 localhost nova_compute[280228]: 2025-12-05 10:04:13.529 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:04:13 localhost nova_compute[280228]: 2025-12-05 10:04:13.529 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:04:13 localhost nova_compute[280228]: 2025-12-05 10:04:13.530 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:04:13 localhost nova_compute[280228]: 2025-12-05 10:04:13.530 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df 
--format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:04:13 localhost nova_compute[280228]: 2025-12-05 10:04:13.683 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:04:13 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/286176458' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:04:13 localhost nova_compute[280228]: 2025-12-05 10:04:13.972 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.043 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.044 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.258 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.259 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11386MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.260 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.260 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.602 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.602 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.603 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:04:14 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:14.607 261902 INFO neutron.agent.linux.ip_lib [None req-8b16c036-1f33-41ed-bfe8-9fcc1c41b7b4 - - - - - -] Device tap21758fda-14 cannot be used as it has no MAC address#033[00m Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.629 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:14 localhost kernel: device tap21758fda-14 entered promiscuous mode Dec 5 05:04:14 localhost NetworkManager[5960]: [1764929054.6406] manager: (tap21758fda-14): new Generic device (/org/freedesktop/NetworkManager/Devices/19) Dec 5 05:04:14 localhost ovn_controller[153000]: 2025-12-05T10:04:14Z|00077|binding|INFO|Claiming lport 21758fda-1427-448d-a792-4daeda35aed6 for this chassis. Dec 5 05:04:14 localhost ovn_controller[153000]: 2025-12-05T10:04:14Z|00078|binding|INFO|21758fda-1427-448d-a792-4daeda35aed6: Claiming unknown Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.643 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:14 localhost systemd-udevd[307302]: Network interface NamePolicy= disabled on kernel command line. 
Dec 5 05:04:14 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:14.653 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-b18d5894-62f7-4f8f-a24c-429b8805e981', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b18d5894-62f7-4f8f-a24c-429b8805e981', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9b8ae2ff8fc42959dc64d209d5490df', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2976951b-7527-4195-bee8-6ac5e692e095, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=21758fda-1427-448d-a792-4daeda35aed6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:04:14 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:14.656 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 21758fda-1427-448d-a792-4daeda35aed6 in datapath b18d5894-62f7-4f8f-a24c-429b8805e981 bound to our chassis#033[00m Dec 5 05:04:14 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:14.659 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b18d5894-62f7-4f8f-a24c-429b8805e981 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:04:14 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:14.660 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6c4ebf-6bb7-4084-8c32-fc6c89a014bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:14 localhost journal[228791]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, ) Dec 5 05:04:14 localhost journal[228791]: hostname: np0005546419.localdomain Dec 5 05:04:14 localhost journal[228791]: ethtool ioctl error on tap21758fda-14: No such device Dec 5 05:04:14 localhost journal[228791]: ethtool ioctl error on tap21758fda-14: No such device Dec 5 05:04:14 localhost ovn_controller[153000]: 2025-12-05T10:04:14Z|00079|binding|INFO|Setting lport 21758fda-1427-448d-a792-4daeda35aed6 ovn-installed in OVS Dec 5 05:04:14 localhost ovn_controller[153000]: 2025-12-05T10:04:14Z|00080|binding|INFO|Setting lport 21758fda-1427-448d-a792-4daeda35aed6 up in Southbound Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.680 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:14 localhost journal[228791]: ethtool ioctl error on tap21758fda-14: No such device Dec 5 05:04:14 localhost journal[228791]: ethtool ioctl error on tap21758fda-14: No such device Dec 5 05:04:14 localhost journal[228791]: ethtool ioctl error on tap21758fda-14: No such device Dec 5 05:04:14 
localhost journal[228791]: ethtool ioctl error on tap21758fda-14: No such device Dec 5 05:04:14 localhost journal[228791]: ethtool ioctl error on tap21758fda-14: No such device Dec 5 05:04:14 localhost journal[228791]: ethtool ioctl error on tap21758fda-14: No such device Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.719 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.750 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:14 localhost nova_compute[280228]: 2025-12-05 10:04:14.867 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing inventories for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.011 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating ProviderTree inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.012 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.032 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing aggregate associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 5 05:04:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.062 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing trait associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, traits: 
COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0. Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.074547) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34 Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929055074601, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2264, "num_deletes": 252, "total_data_size": 3827524, "memory_usage": 3940320, "flush_reason": "Manual Compaction"} Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started Dec 5 05:04:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929055096425, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 3689873, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21018, "largest_seqno": 23280, "table_properties": {"data_size": 3680505, "index_size": 5809, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20760, "raw_average_key_size": 21, "raw_value_size": 3661228, "raw_average_value_size": 3758, "num_data_blocks": 249, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", 
"column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928883, "oldest_key_time": 1764928883, "file_creation_time": 1764929055, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 21972 microseconds, and 9458 cpu microseconds. Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.096510) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 3689873 bytes OK Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.096547) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.098436) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.098463) EVENT_LOG_v1 {"time_micros": 1764929055098454, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.098492) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3817904, prev total WAL file size 3817904, number of live WAL files 2. Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.099572) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. 
'7061786F73003131353437' seq:0, type:0; will stop at (end) Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(3603KB)], [33(17MB)] Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929055099625, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 22169664, "oldest_snapshot_seqno": -1} Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.103 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:04:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:04:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12150 keys, 19415225 bytes, temperature: kUnknown Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929055176660, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 19415225, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19344051, "index_size": 39713, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30405, "raw_key_size": 323983, "raw_average_key_size": 26, "raw_value_size": 19135277, "raw_average_value_size": 1574, "num_data_blocks": 1523, "num_entries": 12150, "num_filter_entries": 12150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764929055, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.176874) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 19415225 bytes Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.178752) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 287.5 rd, 251.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 17.6 +0.0 blob) out(18.5 +0.0 blob), read-write-amplify(11.3) write-amplify(5.3) OK, records in: 12684, records dropped: 534 output_compression: NoCompression Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.178836) EVENT_LOG_v1 {"time_micros": 1764929055178814, "job": 18, "event": "compaction_finished", "compaction_time_micros": 77103, "compaction_time_cpu_micros": 27448, "output_level": 6, "num_output_files": 1, "total_output_size": 19415225, "num_input_records": 12684, "num_output_records": 12150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929055179504, "job": 18, "event": "table_file_deletion", "file_number": 35} Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929055182182, "job": 18, "event": "table_file_deletion", "file_number": 33} Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.099460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.182283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.182293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.182298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.182302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:04:15 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:15.182306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:04:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:04:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:04:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 5 05:04:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:04:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:04:15 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2755924571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.531 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.537 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:04:15 localhost podman[307395]: Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.554 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:04:15 localhost podman[307395]: 2025-12-05 10:04:15.557211837 +0000 UTC m=+0.094235549 container create 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.583 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.583 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.585 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.585 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.599 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 5 05:04:15 localhost systemd[1]: Started libpod-conmon-0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65.scope. Dec 5 05:04:15 localhost podman[307395]: 2025-12-05 10:04:15.511715043 +0000 UTC m=+0.048738805 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:04:15 localhost systemd[1]: tmp-crun.5AN0x0.mount: Deactivated successfully. Dec 5 05:04:15 localhost systemd[1]: Started libcrun container. Dec 5 05:04:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b1f7ce21bc67143d9b3a843ba3a1f06176c79b58060d9e0d8354e704be732f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:04:15 localhost podman[307395]: 2025-12-05 10:04:15.663750783 +0000 UTC m=+0.200774495 container init 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 05:04:15 localhost podman[307395]: 2025-12-05 10:04:15.674234264 +0000 UTC m=+0.211257946 container start 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 5 05:04:15 localhost dnsmasq[307415]: started, version 2.85 cachesize 150 Dec 5 05:04:15 localhost dnsmasq[307415]: DNS service limited to local subnets Dec 5 05:04:15 localhost dnsmasq[307415]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:04:15 localhost dnsmasq[307415]: warning: no upstream servers configured Dec 5 05:04:15 localhost dnsmasq-dhcp[307415]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 5 05:04:15 localhost dnsmasq[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/addn_hosts - 0 addresses Dec 5 05:04:15 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/host Dec 5 05:04:15 localhost dnsmasq-dhcp[307415]: read 
/var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/opts
Dec 5 05:04:15 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:15.790 261902 INFO neutron.agent.dhcp.agent [None req-61c3b56e-640b-4e78-82f9-e2fdcc75b6b5 - - - - - -] DHCP configuration for ports {'e51b6bce-ae77-4f5a-a75d-40a16f97a07f'} is completed
Dec 5 05:04:15 localhost nova_compute[280228]: 2025-12-05 10:04:15.918 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:16 localhost nova_compute[280228]: 2025-12-05 10:04:16.599 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:04:16 localhost nova_compute[280228]: 2025-12-05 10:04:16.600 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:04:16 localhost nova_compute[280228]: 2025-12-05 10:04:16.600 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:04:16 localhost nova_compute[280228]: 2025-12-05 10:04:16.600 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:04:16 localhost nova_compute[280228]: 2025-12-05 10:04:16.601 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 5 05:04:16 localhost nova_compute[280228]: 2025-12-05 10:04:16.625 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:04:17 localhost nova_compute[280228]: 2025-12-05 10:04:17.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:04:18 localhost nova_compute[280228]: 2025-12-05 10:04:18.688 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:04:19 localhost podman[239519]: time="2025-12-05T10:04:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:04:19 localhost podman[239519]: @ - - [05/Dec/2025:10:04:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157935 "" "Go-http-client/1.1"
Dec 5 05:04:19 localhost podman[239519]: @ - - [05/Dec/2025:10:04:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19711 "" "Go-http-client/1.1"
Dec 5 05:04:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:04:20 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:20.230 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:04:20 localhost nova_compute[280228]: 2025-12-05 10:04:20.231 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:20 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:20.232 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 5 05:04:20 localhost nova_compute[280228]: 2025-12-05 10:04:20.474 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v83: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:04:21 localhost nova_compute[280228]: 2025-12-05 10:04:21.629 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:22 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:22.294 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:21Z, description=, device_id=8eaff57e-24d5-4b67-abc2-bff6079a50ab, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=119a4ba5-2f37-44f6-b413-f459e54d5221, ip_allocation=immediate, mac_address=fa:16:3e:1b:fc:81, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:12Z, description=, dns_domain=, id=b18d5894-62f7-4f8f-a24c-429b8805e981, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-410917755-network, port_security_enabled=True, project_id=a9b8ae2ff8fc42959dc64d209d5490df, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1990, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=263, status=ACTIVE, subnets=['f3b0851d-1124-487f-a4bc-c162f419eebc'], tags=[], tenant_id=a9b8ae2ff8fc42959dc64d209d5490df, updated_at=2025-12-05T10:04:13Z, vlan_transparent=None, network_id=b18d5894-62f7-4f8f-a24c-429b8805e981, port_security_enabled=False, project_id=a9b8ae2ff8fc42959dc64d209d5490df, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=367, status=DOWN, tags=[], tenant_id=a9b8ae2ff8fc42959dc64d209d5490df, updated_at=2025-12-05T10:04:22Z on network b18d5894-62f7-4f8f-a24c-429b8805e981
Dec 5 05:04:22 localhost dnsmasq[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/addn_hosts - 1 addresses
Dec 5 05:04:22 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/host
Dec 5 05:04:22 localhost podman[307433]: 2025-12-05 10:04:22.491183463 +0000 UTC m=+0.064014893 container kill 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 5 05:04:22 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/opts
Dec 5 05:04:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:04:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:04:22 localhost podman[307447]: 2025-12-05 10:04:22.628838002 +0000 UTC m=+0.097193929 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:04:22 localhost podman[307447]: 2025-12-05 10:04:22.664757513 +0000 UTC m=+0.133113450 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Dec 5 05:04:22 localhost podman[307448]: 2025-12-05 10:04:22.679221796 +0000 UTC m=+0.140189968 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, vcs-type=git, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 5 05:04:22 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:04:22 localhost podman[307448]: 2025-12-05 10:04:22.692424491 +0000 UTC m=+0.153392663 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7)
Dec 5 05:04:22 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:04:22 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:22.762 261902 INFO neutron.agent.dhcp.agent [None req-b7c62ee1-2466-4091-af50-2b6d3e14c0a5 - - - - - -] DHCP configuration for ports {'119a4ba5-2f37-44f6-b413-f459e54d5221'} is completed
Dec 5 05:04:22 localhost nova_compute[280228]: 2025-12-05 10:04:22.782 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:04:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:23.235 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 5 05:04:23 localhost nova_compute[280228]: 2025-12-05 10:04:23.689 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:24 localhost neutron_sriov_agent[254996]: 2025-12-05 10:04:24.149 2 INFO neutron.agent.securitygroups_rpc [None req-cb57f573-169e-481a-9ba1-5f48d09f78aa 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Security group member updated ['d4162554-7d79-4103-bc2a-c014e86c3743']
Dec 5 05:04:24 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:24.442 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:21Z, description=, device_id=8eaff57e-24d5-4b67-abc2-bff6079a50ab, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=119a4ba5-2f37-44f6-b413-f459e54d5221, ip_allocation=immediate, mac_address=fa:16:3e:1b:fc:81, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:12Z, description=, dns_domain=, id=b18d5894-62f7-4f8f-a24c-429b8805e981, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-410917755-network, port_security_enabled=True, project_id=a9b8ae2ff8fc42959dc64d209d5490df, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1990, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=263, status=ACTIVE, subnets=['f3b0851d-1124-487f-a4bc-c162f419eebc'], tags=[], tenant_id=a9b8ae2ff8fc42959dc64d209d5490df, updated_at=2025-12-05T10:04:13Z, vlan_transparent=None, network_id=b18d5894-62f7-4f8f-a24c-429b8805e981, port_security_enabled=False, project_id=a9b8ae2ff8fc42959dc64d209d5490df, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=367, status=DOWN, tags=[], tenant_id=a9b8ae2ff8fc42959dc64d209d5490df, updated_at=2025-12-05T10:04:22Z on network b18d5894-62f7-4f8f-a24c-429b8805e981
Dec 5 05:04:24 localhost dnsmasq[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/addn_hosts - 1 addresses
Dec 5 05:04:24 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/host
Dec 5 05:04:24 localhost podman[307509]: 2025-12-05 10:04:24.668901117 +0000 UTC m=+0.066268742 container kill 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:04:24 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/opts
Dec 5 05:04:24 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:24.911 261902 INFO neutron.agent.dhcp.agent [None req-017ae6f3-9090-4dc9-a1d9-4dd8c5056cab - - - - - -] DHCP configuration for ports {'119a4ba5-2f37-44f6-b413-f459e54d5221'} is completed
Dec 5 05:04:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:04:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:04:25 localhost nova_compute[280228]: 2025-12-05 10:04:25.923 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:26.022 261902 INFO neutron.agent.linux.ip_lib [None req-ea9483c9-aa9a-450b-b187-8a822b0bef2f - - - - - -] Device tapaafa3ebf-ce cannot be used as it has no MAC address
Dec 5 05:04:26 localhost nova_compute[280228]: 2025-12-05 10:04:26.052 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:26 localhost kernel: device tapaafa3ebf-ce entered promiscuous mode
Dec 5 05:04:26 localhost ovn_controller[153000]: 2025-12-05T10:04:26Z|00081|binding|INFO|Claiming lport aafa3ebf-cec7-439e-8778-62f88decb441 for this chassis.
Dec 5 05:04:26 localhost ovn_controller[153000]: 2025-12-05T10:04:26Z|00082|binding|INFO|aafa3ebf-cec7-439e-8778-62f88decb441: Claiming unknown
Dec 5 05:04:26 localhost NetworkManager[5960]: [1764929066.0634] manager: (tapaafa3ebf-ce): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 5 05:04:26 localhost nova_compute[280228]: 2025-12-05 10:04:26.064 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:26 localhost systemd-udevd[307540]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:04:26 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:26.076 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-aad70afe-d081-4664-830f-f96063eb6473', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aad70afe-d081-4664-830f-f96063eb6473', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef5ed654cd7b4f96b8295c2e76d3b3e2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a75a5212-fe7a-4283-ba5d-97f4d22a8b29, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=aafa3ebf-cec7-439e-8778-62f88decb441) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:04:26 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:26.078 158820 INFO neutron.agent.ovn.metadata.agent [-] Port aafa3ebf-cec7-439e-8778-62f88decb441 in datapath aad70afe-d081-4664-830f-f96063eb6473 bound to our chassis
Dec 5 05:04:26 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:26.081 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network aad70afe-d081-4664-830f-f96063eb6473 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:04:26 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:26.082 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[37fec96d-4be6-4437-9c3f-549a9ed6055a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:04:26 localhost journal[228791]: ethtool ioctl error on tapaafa3ebf-ce: No such device
Dec 5 05:04:26 localhost journal[228791]: ethtool ioctl error on tapaafa3ebf-ce: No such device
Dec 5 05:04:26 localhost ovn_controller[153000]: 2025-12-05T10:04:26Z|00083|binding|INFO|Setting lport aafa3ebf-cec7-439e-8778-62f88decb441 ovn-installed in OVS
Dec 5 05:04:26 localhost ovn_controller[153000]: 2025-12-05T10:04:26Z|00084|binding|INFO|Setting lport aafa3ebf-cec7-439e-8778-62f88decb441 up in Southbound
Dec 5 05:04:26 localhost nova_compute[280228]: 2025-12-05 10:04:26.100 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:26 localhost journal[228791]: ethtool ioctl error on tapaafa3ebf-ce: No such device
Dec 5 05:04:26 localhost journal[228791]: ethtool ioctl error on tapaafa3ebf-ce: No such device
Dec 5 05:04:26 localhost journal[228791]: ethtool ioctl error on tapaafa3ebf-ce: No such device
Dec 5 05:04:26 localhost journal[228791]: ethtool ioctl error on tapaafa3ebf-ce: No such device
Dec 5 05:04:26 localhost journal[228791]: ethtool ioctl error on tapaafa3ebf-ce: No such device
Dec 5 05:04:26 localhost journal[228791]: ethtool ioctl error on tapaafa3ebf-ce: No such device
Dec 5 05:04:26 localhost nova_compute[280228]: 2025-12-05 10:04:26.149 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:26 localhost nova_compute[280228]: 2025-12-05 10:04:26.177 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:26 localhost nova_compute[280228]: 2025-12-05 10:04:26.631 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:27 localhost podman[307611]:
Dec 5 05:04:27 localhost podman[307611]: 2025-12-05 10:04:27.070499283 +0000 UTC m=+0.081730286 container create fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aad70afe-d081-4664-830f-f96063eb6473, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 5 05:04:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v86: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:04:27 localhost systemd[1]: Started libpod-conmon-fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6.scope.
Dec 5 05:04:27 localhost systemd[1]: tmp-crun.NzuVg3.mount: Deactivated successfully.
Dec 5 05:04:27 localhost podman[307611]: 2025-12-05 10:04:27.034634164 +0000 UTC m=+0.045865147 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:04:27 localhost systemd[1]: Started libcrun container.
Dec 5 05:04:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276a3cea9b2045c1a8edb55fa25fccc518b1a8db3a7f07ffaf17255d8f8c4bae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:04:27 localhost podman[307611]: 2025-12-05 10:04:27.155172418 +0000 UTC m=+0.166403371 container init fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aad70afe-d081-4664-830f-f96063eb6473, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 5 05:04:27 localhost podman[307611]: 2025-12-05 10:04:27.166142794 +0000 UTC m=+0.177373777 container start fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aad70afe-d081-4664-830f-f96063eb6473, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 5 05:04:27 localhost dnsmasq[307629]: started, version 2.85 cachesize 150
Dec 5 05:04:27 localhost dnsmasq[307629]: DNS service limited to local subnets
Dec 5 05:04:27 localhost dnsmasq[307629]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:04:27 localhost dnsmasq[307629]: warning: no upstream servers configured
Dec 5 05:04:27 localhost dnsmasq-dhcp[307629]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:04:27 localhost dnsmasq[307629]: read /var/lib/neutron/dhcp/aad70afe-d081-4664-830f-f96063eb6473/addn_hosts - 0 addresses
Dec 5 05:04:27 localhost dnsmasq-dhcp[307629]: read /var/lib/neutron/dhcp/aad70afe-d081-4664-830f-f96063eb6473/host
Dec 5 05:04:27 localhost dnsmasq-dhcp[307629]: read /var/lib/neutron/dhcp/aad70afe-d081-4664-830f-f96063eb6473/opts
Dec 5 05:04:27 localhost openstack_network_exporter[241668]: ERROR 10:04:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:04:27 localhost openstack_network_exporter[241668]: ERROR 10:04:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:04:27 localhost openstack_network_exporter[241668]: ERROR 10:04:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:04:27 localhost openstack_network_exporter[241668]: ERROR 10:04:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:04:27 localhost openstack_network_exporter[241668]:
Dec 5 05:04:27 localhost openstack_network_exporter[241668]: ERROR 10:04:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:04:27 localhost openstack_network_exporter[241668]:
Dec 5 05:04:27 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:27.377 261902 INFO neutron.agent.dhcp.agent [None req-cc6772ec-060d-4b60-94e7-40208371b520 - - - - - -] DHCP configuration for ports {'a653dcef-d524-46f8-bb89-ed356917edab'} is completed
Dec 5 05:04:28 localhost neutron_sriov_agent[254996]: 2025-12-05 10:04:28.361 2 INFO neutron.agent.securitygroups_rpc [None req-48e9a8a2-2821-4767-a782-e295c5c4e972 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Security group member updated ['d4162554-7d79-4103-bc2a-c014e86c3743']
Dec 5 05:04:28 localhost nova_compute[280228]: 2025-12-05 10:04:28.692 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v87: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:04:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:04:30 localhost nova_compute[280228]: 2025-12-05 10:04:30.769 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v88: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:04:31 localhost nova_compute[280228]: 2025-12-05 10:04:31.636 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:32 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:32.355 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:31Z, description=, device_id=a285407f-c743-430b-b57a-edabf2fc84ad, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=08087557-7b85-42ab-865a-3d00fe954e94, ip_allocation=immediate, mac_address=fa:16:3e:7a:3a:bf, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:23Z, description=, dns_domain=, id=aad70afe-d081-4664-830f-f96063eb6473, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPsNegativeTestJSON-580620991-network, port_security_enabled=True, project_id=ef5ed654cd7b4f96b8295c2e76d3b3e2, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28447, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=379, status=ACTIVE, subnets=['761c4f9d-c85a-4ac9-8b0c-d2de1d5963ab'], tags=[], tenant_id=ef5ed654cd7b4f96b8295c2e76d3b3e2, updated_at=2025-12-05T10:04:25Z, vlan_transparent=None, network_id=aad70afe-d081-4664-830f-f96063eb6473, port_security_enabled=False, project_id=ef5ed654cd7b4f96b8295c2e76d3b3e2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=442, status=DOWN, tags=[], tenant_id=ef5ed654cd7b4f96b8295c2e76d3b3e2, updated_at=2025-12-05T10:04:32Z on network aad70afe-d081-4664-830f-f96063eb6473
Dec 5 05:04:32 localhost dnsmasq[307629]: read /var/lib/neutron/dhcp/aad70afe-d081-4664-830f-f96063eb6473/addn_hosts - 1 addresses
Dec 5 05:04:32 localhost dnsmasq-dhcp[307629]: read /var/lib/neutron/dhcp/aad70afe-d081-4664-830f-f96063eb6473/host
Dec 5 05:04:32 localhost dnsmasq-dhcp[307629]: read /var/lib/neutron/dhcp/aad70afe-d081-4664-830f-f96063eb6473/opts
Dec 5 05:04:32 localhost podman[307647]: 2025-12-05 10:04:32.685293039 +0000 UTC m=+0.083535032 container kill fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aad70afe-d081-4664-830f-f96063eb6473, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:04:32 localhost systemd[1]: tmp-crun.EtfMM2.mount: Deactivated successfully.
Dec 5 05:04:32 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:32.984 261902 INFO neutron.agent.dhcp.agent [None req-46e096af-8fd1-4faa-a26a-9e61335a8d53 - - - - - -] DHCP configuration for ports {'08087557-7b85-42ab-865a-3d00fe954e94'} is completed
Dec 5 05:04:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v89: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:04:33 localhost nova_compute[280228]: 2025-12-05 10:04:33.696 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:33 localhost nova_compute[280228]: 2025-12-05 10:04:33.928 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:34 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:34.882 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:31Z, description=, device_id=a285407f-c743-430b-b57a-edabf2fc84ad, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=08087557-7b85-42ab-865a-3d00fe954e94, ip_allocation=immediate, mac_address=fa:16:3e:7a:3a:bf, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:23Z, description=, dns_domain=, id=aad70afe-d081-4664-830f-f96063eb6473, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPsNegativeTestJSON-580620991-network, port_security_enabled=True, project_id=ef5ed654cd7b4f96b8295c2e76d3b3e2, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28447, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=379, status=ACTIVE, subnets=['761c4f9d-c85a-4ac9-8b0c-d2de1d5963ab'], tags=[], tenant_id=ef5ed654cd7b4f96b8295c2e76d3b3e2, updated_at=2025-12-05T10:04:25Z, vlan_transparent=None, network_id=aad70afe-d081-4664-830f-f96063eb6473, port_security_enabled=False, project_id=ef5ed654cd7b4f96b8295c2e76d3b3e2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=442, status=DOWN, tags=[], tenant_id=ef5ed654cd7b4f96b8295c2e76d3b3e2, updated_at=2025-12-05T10:04:32Z on network aad70afe-d081-4664-830f-f96063eb6473
Dec 5 05:04:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:04:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v90: 177 pgs: 177 active+clean; 145 MiB data, 671 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:04:35 localhost podman[307684]: 2025-12-05 10:04:35.101199403 +0000 UTC m=+0.062997142 container kill fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aad70afe-d081-4664-830f-f96063eb6473, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:04:35 localhost dnsmasq[307629]: read /var/lib/neutron/dhcp/aad70afe-d081-4664-830f-f96063eb6473/addn_hosts - 1 addresses
Dec 5 05:04:35 localhost dnsmasq-dhcp[307629]: read /var/lib/neutron/dhcp/aad70afe-d081-4664-830f-f96063eb6473/host
Dec 5 05:04:35 localhost dnsmasq-dhcp[307629]: read /var/lib/neutron/dhcp/aad70afe-d081-4664-830f-f96063eb6473/opts
Dec 5 05:04:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:04:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:04:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:04:35 localhost systemd[1]: tmp-crun.PnBU4c.mount: Deactivated successfully.
Dec 5 05:04:35 localhost podman[307699]: 2025-12-05 10:04:35.231334731 +0000 UTC m=+0.103678108 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 5 05:04:35 localhost podman[307699]: 2025-12-05 10:04:35.240260105 +0000 UTC m=+0.112603412 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 5 05:04:35 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:04:35 localhost podman[307697]: 2025-12-05 10:04:35.329432928 +0000 UTC m=+0.204621182 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 5 05:04:35 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:35.357 261902 INFO neutron.agent.dhcp.agent [None req-63a478df-9b56-4f14-a0eb-c01b40211fbc - - - - - -] DHCP configuration for ports {'08087557-7b85-42ab-865a-3d00fe954e94'} is completed
Dec 5 05:04:35 localhost podman[307697]: 2025-12-05 10:04:35.364131711 +0000 UTC m=+0.239319975 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 5 05:04:35 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:04:35 localhost podman[307696]: 2025-12-05 10:04:35.37091546 +0000 UTC m=+0.249600961 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 5 05:04:35 localhost podman[307696]: 2025-12-05 10:04:35.450769486 +0000 UTC m=+0.329454937 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 05:04:35 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:04:36 localhost nova_compute[280228]: 2025-12-05 10:04:36.117 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:36 localhost neutron_sriov_agent[254996]: 2025-12-05 10:04:36.495 2 INFO neutron.agent.securitygroups_rpc [None req-83762cff-c285-4c7f-b0ea-dfcff003231b 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Security group rule updated ['13f09786-c3de-4f80-a431-bd4239c2ee01']
Dec 5 05:04:36 localhost nova_compute[280228]: 2025-12-05 10:04:36.637 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v91: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Dec 5 05:04:38 localhost nova_compute[280228]: 2025-12-05 10:04:38.698 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v92: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Dec 5 05:04:39 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:39.247 261902 INFO neutron.agent.linux.ip_lib [None req-723a275e-a586-48fc-a2e8-2132ccd5de54 - - - - - -] Device tap2b95cbbe-bd cannot be used as it has no MAC address
Dec 5 05:04:39 localhost nova_compute[280228]: 2025-12-05 10:04:39.273 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:39 localhost kernel: device tap2b95cbbe-bd entered promiscuous mode
Dec 5 05:04:39 localhost NetworkManager[5960]: [1764929079.2835] manager: (tap2b95cbbe-bd): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Dec 5 05:04:39 localhost nova_compute[280228]: 2025-12-05 10:04:39.283 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:39 localhost systemd-udevd[307779]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:04:39 localhost ovn_controller[153000]: 2025-12-05T10:04:39Z|00085|binding|INFO|Claiming lport 2b95cbbe-bd92-41b2-b6d9-4d4188a8e8d7 for this chassis.
Dec 5 05:04:39 localhost ovn_controller[153000]: 2025-12-05T10:04:39Z|00086|binding|INFO|2b95cbbe-bd92-41b2-b6d9-4d4188a8e8d7: Claiming unknown
Dec 5 05:04:39 localhost ovn_controller[153000]: 2025-12-05T10:04:39Z|00087|binding|INFO|Setting lport 2b95cbbe-bd92-41b2-b6d9-4d4188a8e8d7 ovn-installed in OVS
Dec 5 05:04:39 localhost nova_compute[280228]: 2025-12-05 10:04:39.329 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:39 localhost nova_compute[280228]: 2025-12-05 10:04:39.376 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:39 localhost nova_compute[280228]: 2025-12-05 10:04:39.403 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:39.584 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-99c8e076-e96f-4e1b-b373-8e23b3bbc3da', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99c8e076-e96f-4e1b-b373-8e23b3bbc3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38ca44ea29964cdc953c4acef5715d76', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c248ceac-4aac-40e0-9554-d26848fb6357, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2b95cbbe-bd92-41b2-b6d9-4d4188a8e8d7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:04:39 localhost ovn_controller[153000]: 2025-12-05T10:04:39Z|00088|binding|INFO|Setting lport 2b95cbbe-bd92-41b2-b6d9-4d4188a8e8d7 up in Southbound
Dec 5 05:04:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:39.586 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 2b95cbbe-bd92-41b2-b6d9-4d4188a8e8d7 in datapath 99c8e076-e96f-4e1b-b373-8e23b3bbc3da bound to our chassis
Dec 5 05:04:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:39.589 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 99c8e076-e96f-4e1b-b373-8e23b3bbc3da or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:04:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:39.590 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[c38c10fd-18d2-4d18-80f1-b906a159b3c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:04:39 localhost neutron_sriov_agent[254996]: 2025-12-05 10:04:39.887 2 INFO neutron.agent.securitygroups_rpc [None req-9b0cbbee-3d9a-4a2a-b636-071077dfdc6c 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Security group rule updated ['13f09786-c3de-4f80-a431-bd4239c2ee01']
Dec 5 05:04:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:04:40 localhost podman[307835]:
Dec 5 05:04:40 localhost podman[307835]: 2025-12-05 10:04:40.258635861 +0000 UTC m=+0.094691183 container create ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99c8e076-e96f-4e1b-b373-8e23b3bbc3da, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 5 05:04:40 localhost systemd[1]: Started libpod-conmon-ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130.scope.
Dec 5 05:04:40 localhost podman[307835]: 2025-12-05 10:04:40.214864099 +0000 UTC m=+0.050919411 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:04:40 localhost systemd[1]: Started libcrun container.
Dec 5 05:04:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/579ad33473e355a80ccf53a2b9bc88d1d123512e62e0577480d5c2ab3d5fc065/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:04:40 localhost podman[307835]: 2025-12-05 10:04:40.345466112 +0000 UTC m=+0.181521424 container init ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99c8e076-e96f-4e1b-b373-8e23b3bbc3da, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:04:40 localhost podman[307835]: 2025-12-05 10:04:40.355101237 +0000 UTC m=+0.191156549 container start ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99c8e076-e96f-4e1b-b373-8e23b3bbc3da, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 5 05:04:40 localhost dnsmasq[307854]: started, version 2.85 cachesize 150
Dec 5 05:04:40 localhost dnsmasq[307854]: DNS service limited to local subnets
Dec 5 05:04:40 localhost dnsmasq[307854]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:04:40 localhost dnsmasq[307854]: warning: no upstream servers configured
Dec 5 05:04:40 localhost dnsmasq-dhcp[307854]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:04:40 localhost dnsmasq[307854]: read /var/lib/neutron/dhcp/99c8e076-e96f-4e1b-b373-8e23b3bbc3da/addn_hosts - 0 addresses
Dec 5 05:04:40 localhost dnsmasq-dhcp[307854]: read /var/lib/neutron/dhcp/99c8e076-e96f-4e1b-b373-8e23b3bbc3da/host
Dec 5 05:04:40 localhost dnsmasq-dhcp[307854]: read /var/lib/neutron/dhcp/99c8e076-e96f-4e1b-b373-8e23b3bbc3da/opts
Dec 5 05:04:40 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:40.558 261902 INFO neutron.agent.dhcp.agent [None req-b9e6141b-4701-4ca3-8614-44296770578b - - - - - -] DHCP configuration for ports {'9fb97e82-c416-498d-a266-57a150ac5cc3'} is completed
Dec 5 05:04:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v93: 177 pgs: 177 active+clean; 205 MiB data, 782 MiB used, 41 GiB / 42 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 99 op/s
Dec 5 05:04:41 localhost nova_compute[280228]: 2025-12-05 10:04:41.640 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v94: 177 pgs: 177 active+clean; 238 MiB data, 821 MiB used, 41 GiB / 42 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Dec 5 05:04:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:04:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 05:04:43 localhost podman[307874]: 2025-12-05 10:04:43.194034296 +0000 UTC m=+0.064231770 container kill fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aad70afe-d081-4664-830f-f96063eb6473, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3)
Dec 5 05:04:43 localhost dnsmasq[307629]: read /var/lib/neutron/dhcp/aad70afe-d081-4664-830f-f96063eb6473/addn_hosts - 0 addresses
Dec 5 05:04:43 localhost dnsmasq-dhcp[307629]: read /var/lib/neutron/dhcp/aad70afe-d081-4664-830f-f96063eb6473/host
Dec 5 05:04:43 localhost dnsmasq-dhcp[307629]: read /var/lib/neutron/dhcp/aad70afe-d081-4664-830f-f96063eb6473/opts
Dec 5 05:04:43 localhost podman[307870]: 2025-12-05 10:04:43.194941215 +0000 UTC m=+0.076867448 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:04:43 localhost podman[307872]: 2025-12-05 10:04:43.283597571 +0000 UTC m=+0.156933031 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:04:43 localhost podman[307870]: 2025-12-05 10:04:43.292781643 +0000 UTC m=+0.174707866 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec 5
05:04:43 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:04:43 localhost podman[307872]: 2025-12-05 10:04:43.318715278 +0000 UTC m=+0.192050688 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:04:43 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:04:43 localhost nova_compute[280228]: 2025-12-05 10:04:43.369 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:43 localhost kernel: device tapaafa3ebf-ce left promiscuous mode Dec 5 05:04:43 localhost ovn_controller[153000]: 2025-12-05T10:04:43Z|00089|binding|INFO|Releasing lport aafa3ebf-cec7-439e-8778-62f88decb441 from this chassis (sb_readonly=0) Dec 5 05:04:43 localhost ovn_controller[153000]: 2025-12-05T10:04:43Z|00090|binding|INFO|Setting lport aafa3ebf-cec7-439e-8778-62f88decb441 down in Southbound Dec 5 05:04:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:43.382 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-aad70afe-d081-4664-830f-f96063eb6473', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aad70afe-d081-4664-830f-f96063eb6473', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef5ed654cd7b4f96b8295c2e76d3b3e2', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], 
additional_encap=[], encap=[], mirror_rules=[], datapath=a75a5212-fe7a-4283-ba5d-97f4d22a8b29, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=aafa3ebf-cec7-439e-8778-62f88decb441) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:04:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:43.384 158820 INFO neutron.agent.ovn.metadata.agent [-] Port aafa3ebf-cec7-439e-8778-62f88decb441 in datapath aad70afe-d081-4664-830f-f96063eb6473 unbound from our chassis#033[00m Dec 5 05:04:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:43.393 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aad70afe-d081-4664-830f-f96063eb6473, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:04:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:43.396 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[b209e0b0-5d64-4c61-9b58-004048db0b60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:43 localhost nova_compute[280228]: 2025-12-05 10:04:43.398 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:43 localhost nova_compute[280228]: 2025-12-05 10:04:43.418 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:43 localhost nova_compute[280228]: 2025-12-05 10:04:43.700 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:44 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:44.713 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:44Z, description=, device_id=bf433541-fff2-4112-8fd2-d751e37bb99f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=efcb7c9d-fe92-43a0-b576-eb01df2657a6, ip_allocation=immediate, mac_address=fa:16:3e:7a:12:7f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:33Z, description=, dns_domain=, id=99c8e076-e96f-4e1b-b373-8e23b3bbc3da, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-631044796-network, port_security_enabled=True, project_id=38ca44ea29964cdc953c4acef5715d76, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59032, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=450, status=ACTIVE, subnets=['0d835014-54f6-47af-bc33-694c585d22b8'], tags=[], tenant_id=38ca44ea29964cdc953c4acef5715d76, updated_at=2025-12-05T10:04:35Z, vlan_transparent=None, network_id=99c8e076-e96f-4e1b-b373-8e23b3bbc3da, port_security_enabled=False, project_id=38ca44ea29964cdc953c4acef5715d76, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=485, status=DOWN, tags=[], tenant_id=38ca44ea29964cdc953c4acef5715d76, 
updated_at=2025-12-05T10:04:44Z on network 99c8e076-e96f-4e1b-b373-8e23b3bbc3da#033[00m Dec 5 05:04:44 localhost dnsmasq[307854]: read /var/lib/neutron/dhcp/99c8e076-e96f-4e1b-b373-8e23b3bbc3da/addn_hosts - 1 addresses Dec 5 05:04:44 localhost dnsmasq-dhcp[307854]: read /var/lib/neutron/dhcp/99c8e076-e96f-4e1b-b373-8e23b3bbc3da/host Dec 5 05:04:44 localhost dnsmasq-dhcp[307854]: read /var/lib/neutron/dhcp/99c8e076-e96f-4e1b-b373-8e23b3bbc3da/opts Dec 5 05:04:44 localhost podman[307962]: 2025-12-05 10:04:44.949640163 +0000 UTC m=+0.065047395 container kill ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99c8e076-e96f-4e1b-b373-8e23b3bbc3da, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.004 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Acquiring lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.005 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.029 280232 DEBUG nova.compute.manager [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Dec 5 05:04:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:04:45 Dec 5 05:04:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:04:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:04:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['manila_data', 'images', 'vms', 'volumes', 'manila_metadata', 'backups', '.mgr'] Dec 5 05:04:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:04:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:04:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v95: 177 pgs: 177 active+clean; 238 MiB data, 821 MiB used, 41 GiB / 42 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 141 op/s Dec 5 05:04:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
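The paired "Acquiring lock ... / Lock ... acquired ... :: waited 0.001s / ... "released" ... :: held ..." lines that nova_compute emits above come from oslo.concurrency's lockutils helpers, which trace every named lock together with its wait and hold times. A minimal stdlib stand-in that reproduces the same three log events (the traced_lock helper and the demo lock name are illustrative, not nova's actual code):

import logging
import threading
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s")
LOG = logging.getLogger("lock-demo")
_locks = {}  # name -> threading.Lock, analogous to lockutils' internal registry

@contextmanager
def traced_lock(name, by):
    lock = _locks.setdefault(name, threading.Lock())
    LOG.debug('Acquiring lock "%s" by "%s"', name, by)
    t0 = time.monotonic()
    with lock:
        LOG.debug('Lock "%s" acquired by "%s" :: waited %.3fs', name, by, time.monotonic() - t0)
        t1 = time.monotonic()
        try:
            yield
        finally:
            # The real log quotes "released" literally, hence the odd quoting here.
            LOG.debug('Lock "%s" "released" by "%s" :: held %.3fs', name, by, time.monotonic() - t1)

with traced_lock("compute_resources", "instance_claim"):
    time.sleep(0.01)  # stand-in for the resource-tracker claim work

The held/waited numbers in the log (e.g. "held 0.635s" for compute_resources further down) are exactly these two deltas.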
Dec 5 05:04:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006301581693188163 of space, bias 1.0, pg target 1.2603163386376326 quantized to 32 (current 32)
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32)
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:04:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019465818676716918 quantized to 16 (current 16)
Dec 5 05:04:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 5 05:04:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:04:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:04:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:04:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
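The pg_autoscaler lines above expose the arithmetic the module logs per pool: capacity ratio times bias times a per-pool PG budget gives the raw "pg target", which is then quantized to a power of two. A rough sketch of that calculation; the budget value of 200 is an assumption back-derived from the '.mgr' and 'vms' numbers in this log (the real module computes it from mon_target_pg_per_osd and the CRUSH subtree, and the 'images' line implies a slightly different effective budget), and the clamp that keeps tiny targets at the pool's current pg_num (the "quantized to 32 (current 32)" lines) is not modeled here:

def raw_pg_target(capacity_ratio, bias, pg_budget=200):
    # "using X of space, bias B, pg target X*B*budget" as in the log lines above.
    return capacity_ratio * bias * pg_budget

def quantize(target, floor=1):
    # Round up to the next power of two, never below the floor.
    n = floor
    while n < target:
        n *= 2
    return n

ratio = 3.080724804578448e-05             # '.mgr' usage ratio from the log
print(raw_pg_target(ratio, 1.0))          # ~0.0061614496..., matching the '.mgr' pg target
print(quantize(raw_pg_target(ratio, 1.0)))  # 1, matching "quantized to 1 (current 1)"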
Dec 5 05:04:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:04:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:04:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:04:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:04:45 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:45.237 261902 INFO neutron.agent.dhcp.agent [None req-ff944580-0fa6-43c0-a1fe-76b46f8a2bf9 - - - - - -] DHCP configuration for ports {'efcb7c9d-fe92-43a0-b576-eb01df2657a6'} is completed
Dec 5 05:04:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 5 05:04:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:04:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:04:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:04:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.266 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.267 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.273 280232 DEBUG nova.virt.hardware [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.274 280232 INFO nova.compute.claims [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Claim successful on node np0005546419.localdomain
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.409 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 5 05:04:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:04:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1647312907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.862 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.867 280232 DEBUG nova.compute.provider_tree [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.883 280232 DEBUG nova.scheduler.client.report [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.902 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.903 280232 DEBUG nova.compute.manager [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.945 280232 DEBUG nova.compute.manager [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.946 280232 DEBUG nova.network.neutron [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.958 280232 INFO nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 5 05:04:45 localhost nova_compute[280228]: 2025-12-05 10:04:45.972 280232 DEBUG nova.compute.manager [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.066 280232 DEBUG nova.compute.manager [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.067 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.068 280232 INFO nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Creating image(s)
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.101 280232 DEBUG nova.storage.rbd_utils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] rbd image fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.137 280232 DEBUG nova.storage.rbd_utils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] rbd image fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.175 280232 DEBUG nova.storage.rbd_utils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] rbd image fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.181 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Acquiring lock "803b7e0e18f6b644279a18f87a62b7eb9e1015e6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.182 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "803b7e0e18f6b644279a18f87a62b7eb9e1015e6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.222 280232 DEBUG nova.virt.libvirt.imagebackend [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Image locations are: [{'url': 'rbd://79feddb1-4bfc-557f-83b9-0d57c9f66c1b/images/3647d20f-5e09-41b2-a6f3-f320b9e4e343/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://79feddb1-4bfc-557f-83b9-0d57c9f66c1b/images/3647d20f-5e09-41b2-a6f3-f320b9e4e343/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.301 280232 WARNING oslo_policy.policy [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.302 280232 WARNING oslo_policy.policy [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.308 280232 DEBUG nova.policy [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ee4999d08044f63bf075e92f0ca5d11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '41095831ac6247b0a5ea030490af998f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 5 05:04:46 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:46.496 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:44Z, description=, device_id=bf433541-fff2-4112-8fd2-d751e37bb99f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=efcb7c9d-fe92-43a0-b576-eb01df2657a6, ip_allocation=immediate, mac_address=fa:16:3e:7a:12:7f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:33Z, description=, dns_domain=, id=99c8e076-e96f-4e1b-b373-8e23b3bbc3da, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-631044796-network, port_security_enabled=True, project_id=38ca44ea29964cdc953c4acef5715d76, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59032, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=450, status=ACTIVE, subnets=['0d835014-54f6-47af-bc33-694c585d22b8'], tags=[], tenant_id=38ca44ea29964cdc953c4acef5715d76, updated_at=2025-12-05T10:04:35Z, vlan_transparent=None, network_id=99c8e076-e96f-4e1b-b373-8e23b3bbc3da, port_security_enabled=False, project_id=38ca44ea29964cdc953c4acef5715d76, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=485, status=DOWN, tags=[], tenant_id=38ca44ea29964cdc953c4acef5715d76, updated_at=2025-12-05T10:04:44Z on network 99c8e076-e96f-4e1b-b373-8e23b3bbc3da
Dec 5 05:04:46 localhost nova_compute[280228]: 2025-12-05 10:04:46.644 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:46 localhost podman[308077]: 2025-12-05 10:04:46.752521708 +0000 UTC m=+0.067175580 container kill ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99c8e076-e96f-4e1b-b373-8e23b3bbc3da, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 5 05:04:46 localhost dnsmasq[307854]: read /var/lib/neutron/dhcp/99c8e076-e96f-4e1b-b373-8e23b3bbc3da/addn_hosts - 1 addresses
Dec 5 05:04:46 localhost dnsmasq-dhcp[307854]: read /var/lib/neutron/dhcp/99c8e076-e96f-4e1b-b373-8e23b3bbc3da/host
Dec 5 05:04:46 localhost dnsmasq-dhcp[307854]: read /var/lib/neutron/dhcp/99c8e076-e96f-4e1b-b373-8e23b3bbc3da/opts
Dec 5 05:04:46 localhost systemd[1]: tmp-crun.ZlCyM8.mount: Deactivated successfully.
Dec 5 05:04:47 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:47.032 261902 INFO neutron.agent.dhcp.agent [None req-7509b2a3-525e-4246-ae7b-16d000402056 - - - - - -] DHCP configuration for ports {'efcb7c9d-fe92-43a0-b576-eb01df2657a6'} is completed
Dec 5 05:04:47 localhost ovn_controller[153000]: 2025-12-05T10:04:47Z|00091|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:04:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v96: 177 pgs: 177 active+clean; 238 MiB data, 821 MiB used, 41 GiB / 42 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Dec 5 05:04:47 localhost nova_compute[280228]: 2025-12-05 10:04:47.153 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 5 05:04:47 localhost nova_compute[280228]: 2025-12-05 10:04:47.174 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:47 localhost nova_compute[280228]: 2025-12-05 10:04:47.228 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.part --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 05:04:47 localhost nova_compute[280228]: 2025-12-05 10:04:47.231 280232 DEBUG nova.virt.images [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] 3647d20f-5e09-41b2-a6f3-f320b9e4e343 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 5 05:04:47 localhost nova_compute[280228]: 2025-12-05 10:04:47.233 280232 DEBUG nova.privsep.utils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 5 05:04:47 localhost nova_compute[280228]: 2025-12-05 10:04:47.234 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.part /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 5 05:04:47 localhost nova_compute[280228]: 2025-12-05 10:04:47.405 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.part /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.converted" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 05:04:47 localhost nova_compute[280228]: 2025-12-05 10:04:47.410 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 5 05:04:47 localhost nova_compute[280228]: 2025-12-05 10:04:47.485 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6.converted --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 05:04:47 localhost nova_compute[280228]: 2025-12-05 10:04:47.488 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "803b7e0e18f6b644279a18f87a62b7eb9e1015e6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:04:47 localhost nova_compute[280228]: 2025-12-05 10:04:47.532 280232 DEBUG nova.storage.rbd_utils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] rbd image fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 5 05:04:47 localhost nova_compute[280228]: 2025-12-05 10:04:47.542 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6 fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 5 05:04:47 localhost neutron_sriov_agent[254996]: 2025-12-05 10:04:47.925 2 INFO neutron.agent.securitygroups_rpc [req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 req-81daabdd-a902-4eca-b1c0-004b68779d1e 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Security group member updated ['13f09786-c3de-4f80-a431-bd4239c2ee01']
Dec 5 05:04:48 localhost nova_compute[280228]: 2025-12-05 10:04:48.123 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/803b7e0e18f6b644279a18f87a62b7eb9e1015e6 fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 05:04:48 localhost dnsmasq[307629]: exiting on receipt of SIGTERM
Dec 5 05:04:48 localhost podman[308171]: 2025-12-05 10:04:48.168722492 +0000 UTC m=+0.047125135 container kill fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aad70afe-d081-4664-830f-f96063eb6473, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 05:04:48 localhost systemd[1]: libpod-fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6.scope: Deactivated successfully.
Dec 5 05:04:48 localhost podman[308201]: 2025-12-05 10:04:48.223031147 +0000 UTC m=+0.042656049 container died fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aad70afe-d081-4664-830f-f96063eb6473, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:04:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6-userdata-shm.mount: Deactivated successfully.
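The four subprocess calls traced above (qemu-img info on the .part file, qemu-img convert to raw, qemu-img info on the result, then rbd import into the vms pool) are the image-cache "fetch to raw" path for a qcow2 Glance image. A condensed re-creation of that pipeline, with the command lines taken from the log itself; the wrapper functions are illustrative and skip nova's prlimit wrapper and the image-cache locking:

import json
import subprocess

def qemu_img_info(path):
    # Same flags as the logged command; returns the parsed JSON description.
    out = subprocess.check_output(
        ["qemu-img", "info", path, "--force-share", "--output=json"])
    return json.loads(out)

def import_base_image(part, converted, pool, rbd_name):
    info = qemu_img_info(part)
    if info["format"] == "qcow2":  # log: "was qcow2, converting to raw"
        subprocess.check_call(
            ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
             part, converted])
        src = converted
    else:
        src = part
    # log: "rbd import --pool vms ... --image-format=2 --id openstack ..."
    subprocess.check_call(
        ["rbd", "import", "--pool", pool, src, rbd_name,
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])

The subsequent "resizing rbd image ... to 1073741824" line is the follow-up step: after import, the RBD image is grown to the flavor's root disk size.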
Dec 5 05:04:48 localhost nova_compute[280228]: 2025-12-05 10:04:48.257 280232 DEBUG nova.storage.rbd_utils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] resizing rbd image fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Dec 5 05:04:48 localhost podman[308201]: 2025-12-05 10:04:48.258352589 +0000 UTC m=+0.077977441 container cleanup fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aad70afe-d081-4664-830f-f96063eb6473, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 5 05:04:48 localhost systemd[1]: libpod-conmon-fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6.scope: Deactivated successfully. Dec 5 05:04:48 localhost podman[308203]: 2025-12-05 10:04:48.305401622 +0000 UTC m=+0.120556246 container remove fdc0ce33ca7f7e3779322ff2974ed9e20e38ea41b80e3916023863b455fc03f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aad70afe-d081-4664-830f-f96063eb6473, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:04:48 localhost nova_compute[280228]: 2025-12-05 10:04:48.494 280232 DEBUG nova.objects.instance [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lazy-loading 'migration_context' on Instance uuid fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:04:48 localhost nova_compute[280228]: 2025-12-05 10:04:48.509 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Dec 5 05:04:48 localhost nova_compute[280228]: 2025-12-05 10:04:48.510 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Ensure instance console log exists: /var/lib/nova/instances/fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Dec 5 05:04:48 localhost nova_compute[280228]: 2025-12-05 10:04:48.511 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:04:48 localhost nova_compute[280228]: 2025-12-05 10:04:48.511 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:04:48 localhost nova_compute[280228]: 2025-12-05 10:04:48.512 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:04:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:48.652 261902 INFO neutron.agent.dhcp.agent [None req-94345071-9c83-4224-bf56-d543d83d6560 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:04:48 localhost nova_compute[280228]: 2025-12-05 10:04:48.701 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:48 localhost nova_compute[280228]: 2025-12-05 10:04:48.879 280232 DEBUG nova.network.neutron [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Successfully created port: 24f19dd4-108e-4a77-b44d-59a215801baa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m Dec 5 05:04:49 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:49.006 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:04:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v97: 177 pgs: 177 active+clean; 238 MiB data, 821 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 100 op/s Dec 5 05:04:49 localhost systemd[1]: var-lib-containers-storage-overlay-276a3cea9b2045c1a8edb55fa25fccc518b1a8db3a7f07ffaf17255d8f8c4bae-merged.mount: Deactivated successfully. Dec 5 05:04:49 localhost systemd[1]: run-netns-qdhcp\x2daad70afe\x2dd081\x2d4664\x2d830f\x2df96063eb6473.mount: Deactivated successfully. 
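The recurring nova_compute "[POLLIN] on fd 20 __log_wakeup" DEBUG lines throughout this window are the OVS python IDL's poll loop waking up: ovsdbapp registers the OVSDB socket with ovs.poller and logs each wakeup from poller.py:263. A minimal loop against the same library, assuming the 'ovs' python package (python3-ovs) is installed, and watching stdin instead of an OVSDB socket:

import select
import sys
from ovs import poller  # the module whose poller.py:263 shows up in the log

p = poller.Poller()
p.fd_wait(sys.stdin.fileno(), select.POLLIN)  # wake when the fd is readable
p.timer_wait(2000)                            # ...or after 2000 ms at the latest
p.block()                                     # sleeps until one of the above fires
print("woke up (POLLIN or timer)")

In nova_compute the watched fd is the OVSDB connection (fd 20 here), so every southbound update produces one of these wakeup lines.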
Dec 5 05:04:49 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:49.793 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:04:49 localhost podman[239519]: time="2025-12-05T10:04:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:04:49 localhost podman[239519]: @ - - [05/Dec/2025:10:04:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159759 "" "Go-http-client/1.1"
Dec 5 05:04:49 localhost podman[239519]: @ - - [05/Dec/2025:10:04:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20201 "" "Go-http-client/1.1"
Dec 5 05:04:50 localhost nova_compute[280228]: 2025-12-05 10:04:50.063 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:04:50 localhost neutron_sriov_agent[254996]: 2025-12-05 10:04:50.903 2 INFO neutron.agent.securitygroups_rpc [req-4fcd9471-764f-4413-a3d6-c9db510ad3ec req-3be5ff8b-30c2-4669-8ea2-ebd3ceebb30b 332193d57d0f40b4a4331c53909cd01e 38ca44ea29964cdc953c4acef5715d76 - - default default] Security group rule updated ['d04b003d-84d7-4ef3-bd89-909ee44f1f42']
Dec 5 05:04:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 177 active+clean; 269 MiB data, 883 MiB used, 41 GiB / 42 GiB avail; 4.5 MiB/s rd, 3.2 MiB/s wr, 168 op/s
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.407 280232 DEBUG nova.network.neutron [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Successfully updated port: 24f19dd4-108e-4a77-b44d-59a215801baa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.490 280232 DEBUG nova.compute.manager [req-02cfbc35-aba9-450d-b1e0-b60d447ad366 req-595174cd-9221-4919-929b-748806a80cea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Received event network-changed-24f19dd4-108e-4a77-b44d-59a215801baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.490 280232 DEBUG nova.compute.manager [req-02cfbc35-aba9-450d-b1e0-b60d447ad366 req-595174cd-9221-4919-929b-748806a80cea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Refreshing instance network info cache due to event network-changed-24f19dd4-108e-4a77-b44d-59a215801baa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.491 280232 DEBUG oslo_concurrency.lockutils [req-02cfbc35-aba9-450d-b1e0-b60d447ad366 req-595174cd-9221-4919-929b-748806a80cea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "refresh_cache-fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.491 280232 DEBUG oslo_concurrency.lockutils [req-02cfbc35-aba9-450d-b1e0-b60d447ad366 req-595174cd-9221-4919-929b-748806a80cea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquired lock "refresh_cache-fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.491 280232 DEBUG nova.network.neutron [req-02cfbc35-aba9-450d-b1e0-b60d447ad366 req-595174cd-9221-4919-929b-748806a80cea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Refreshing network info cache for port 24f19dd4-108e-4a77-b44d-59a215801baa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.492 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Acquiring lock "refresh_cache-fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.565 280232 DEBUG nova.network.neutron [req-02cfbc35-aba9-450d-b1e0-b60d447ad366 req-595174cd-9221-4919-929b-748806a80cea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 5 05:04:51 localhost neutron_sriov_agent[254996]: 2025-12-05 10:04:51.613 2 INFO neutron.agent.securitygroups_rpc [req-661adfa3-5841-49b7-bd34-e3e89bc27cd4 req-b26eebef-19af-47b2-81cf-3102a5d50f45 332193d57d0f40b4a4331c53909cd01e 38ca44ea29964cdc953c4acef5715d76 - - default default] Security group rule updated ['d04b003d-84d7-4ef3-bd89-909ee44f1f42']
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.647 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.811 280232 DEBUG nova.network.neutron [req-02cfbc35-aba9-450d-b1e0-b60d447ad366 req-595174cd-9221-4919-929b-748806a80cea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.832 280232 DEBUG oslo_concurrency.lockutils [req-02cfbc35-aba9-450d-b1e0-b60d447ad366 req-595174cd-9221-4919-929b-748806a80cea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Releasing lock "refresh_cache-fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.833 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Acquired lock "refresh_cache-fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.833 280232 DEBUG nova.network.neutron [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 5 05:04:51 localhost neutron_sriov_agent[254996]: 2025-12-05 10:04:51.840 2 INFO neutron.agent.securitygroups_rpc [None req-4221ce4d-d911-4b23-95b4-1da9650671e2 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Security group member updated ['8c9500c3-6ac9-452e-a652-72bddc07be6d']
Dec 5 05:04:51 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:51.886 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:51Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6, ip_allocation=immediate, mac_address=fa:16:3e:57:e9:2f, name=tempest-parent-1901059839, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:12Z, description=, dns_domain=, id=b18d5894-62f7-4f8f-a24c-429b8805e981, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-410917755-network, port_security_enabled=True, project_id=a9b8ae2ff8fc42959dc64d209d5490df, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1990, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=263, status=ACTIVE, subnets=['f3b0851d-1124-487f-a4bc-c162f419eebc'], tags=[], tenant_id=a9b8ae2ff8fc42959dc64d209d5490df, updated_at=2025-12-05T10:04:13Z, vlan_transparent=None, network_id=b18d5894-62f7-4f8f-a24c-429b8805e981, port_security_enabled=True, project_id=a9b8ae2ff8fc42959dc64d209d5490df, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['8c9500c3-6ac9-452e-a652-72bddc07be6d'], standard_attr_id=496, status=DOWN, tags=[], tenant_id=a9b8ae2ff8fc42959dc64d209d5490df, updated_at=2025-12-05T10:04:51Z on network b18d5894-62f7-4f8f-a24c-429b8805e981
Dec 5 05:04:51 localhost nova_compute[280228]: 2025-12-05 10:04:51.906 280232 DEBUG nova.network.neutron [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 5 05:04:52 localhost systemd[1]: tmp-crun.tDfLxm.mount: Deactivated successfully.
Dec 5 05:04:52 localhost dnsmasq[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/addn_hosts - 2 addresses
Dec 5 05:04:52 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/host
Dec 5 05:04:52 localhost podman[308301]: 2025-12-05 10:04:52.101052602 +0000 UTC m=+0.048095084 container kill 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:04:52 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/opts
Dec 5 05:04:52 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:52.324 261902 INFO neutron.agent.dhcp.agent [None req-bc0e17f5-9e64-420b-bf8e-ba6dd207062d - - - - - -] DHCP configuration for ports {'63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6'} is completed
Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.520 280232 DEBUG nova.network.neutron [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Updating instance_info_cache with network_info: [{"id": "24f19dd4-108e-4a77-b44d-59a215801baa", "address": "fa:16:3e:5a:5d:78", "network": {"id": "64267419-8c47-450f-9ba4-afc8c103bf71", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-159398337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}],
"routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "41095831ac6247b0a5ea030490af998f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f19dd4-10", "ovs_interfaceid": "24f19dd4-108e-4a77-b44d-59a215801baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.538 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Releasing lock "refresh_cache-fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.539 280232 DEBUG nova.compute.manager [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Instance network_info: |[{"id": "24f19dd4-108e-4a77-b44d-59a215801baa", "address": "fa:16:3e:5a:5d:78", "network": {"id": "64267419-8c47-450f-9ba4-afc8c103bf71", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-159398337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "41095831ac6247b0a5ea030490af998f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f19dd4-10", "ovs_interfaceid": "24f19dd4-108e-4a77-b44d-59a215801baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.543 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Start _get_guest_xml network_info=[{"id": "24f19dd4-108e-4a77-b44d-59a215801baa", "address": "fa:16:3e:5a:5d:78", "network": {"id": "64267419-8c47-450f-9ba4-afc8c103bf71", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-159398337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": 
"41095831ac6247b0a5ea030490af998f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f19dd4-10", "ovs_interfaceid": "24f19dd4-108e-4a77-b44d-59a215801baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T10:03:24Z,direct_url=,disk_format='qcow2',id=3647d20f-5e09-41b2-a6f3-f320b9e4e343,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e6ca8a92050741d3a93772e6c1b0d704',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-12-05T10:03:26Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_type': 'disk', 'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'guest_format': None, 'image_id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.549 280232 WARNING nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.552 280232 DEBUG nova.virt.libvirt.host [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Searching host: 'np0005546419.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.553 280232 DEBUG nova.virt.libvirt.host [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.555 280232 DEBUG nova.virt.libvirt.host [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Searching host: 'np0005546419.localdomain' for CPU controller through CGroups V2... 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.556 280232 DEBUG nova.virt.libvirt.host [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.557 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.557 280232 DEBUG nova.virt.hardware [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T10:03:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='445199a6-1f73-405e-82f4-8bd8c4bb34c6',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T10:03:24Z,direct_url=,disk_format='qcow2',id=3647d20f-5e09-41b2-a6f3-f320b9e4e343,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e6ca8a92050741d3a93772e6c1b0d704',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-12-05T10:03:26Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.558 280232 DEBUG nova.virt.hardware [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.558 280232 DEBUG nova.virt.hardware [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.559 280232 DEBUG nova.virt.hardware [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.559 280232 DEBUG nova.virt.hardware [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.560 280232 DEBUG nova.virt.hardware [None 
req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.560 280232 DEBUG nova.virt.hardware [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.561 280232 DEBUG nova.virt.hardware [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.561 280232 DEBUG nova.virt.hardware [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.561 280232 DEBUG nova.virt.hardware [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.562 280232 DEBUG nova.virt.hardware [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Dec 5 05:04:52 localhost nova_compute[280228]: 2025-12-05 10:04:52.567 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:04:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:04:53 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/352553780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.015 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.056 280232 DEBUG nova.storage.rbd_utils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] rbd image fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.063 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:04:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:04:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v99: 177 pgs: 177 active+clean; 317 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 4.2 MiB/s rd, 5.3 MiB/s wr, 148 op/s Dec 5 05:04:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:04:53 localhost podman[308364]: 2025-12-05 10:04:53.255613618 +0000 UTC m=+0.136399122 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, release=1755695350, architecture=x86_64) Dec 5 05:04:53 localhost podman[308364]: 2025-12-05 10:04:53.263637384 +0000 UTC m=+0.144422848 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a 
package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container) Dec 5 05:04:53 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:04:53 localhost podman[308363]: 2025-12-05 10:04:53.229503238 +0000 UTC m=+0.113126899 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:04:53 localhost podman[308363]: 2025-12-05 10:04:53.309078506 +0000 UTC m=+0.192702127 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 05:04:53 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.537 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.539 280232 DEBUG nova.virt.libvirt.vif [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T10:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005546419.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=8,image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDzw2WthzVtIu/DY1cGzri7BzSxU7HmCS+BoNdsA+GCtf3cs/xodQfr3DgzYnuyWAEHd/1zmcc7jWm6fN8dTApTgVQsczPN9oKYNm0Ya0z7VFTAoq2drLjEEcX3knjQC2Q==',key_name='tempest-keypair-1076229536',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005546419.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005546419.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41095831ac6247b0a5ea030490af998f',ramdisk_id='',reservation_id='r-ofrvdd2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1788105697',owner_user_name='tempest-ServersV294TestFqdnHostnames-1788105697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T10:04:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7ee4999d08044f63bf075e92f0ca5d11',uuid=fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24f19dd4-108e-4a77-b44d-59a215801baa", "address": "fa:16:3e:5a:5d:78", "network": {"id": "64267419-8c47-450f-9ba4-afc8c103bf71", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-159398337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "41095831ac6247b0a5ea030490af998f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f19dd4-10", "ovs_interfaceid": "24f19dd4-108e-4a77-b44d-59a215801baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.539 280232 DEBUG nova.network.os_vif_util [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Converting VIF {"id": "24f19dd4-108e-4a77-b44d-59a215801baa", "address": "fa:16:3e:5a:5d:78", "network": {"id": "64267419-8c47-450f-9ba4-afc8c103bf71", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-159398337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, 
"dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "41095831ac6247b0a5ea030490af998f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f19dd4-10", "ovs_interfaceid": "24f19dd4-108e-4a77-b44d-59a215801baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.540 280232 DEBUG nova.network.os_vif_util [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:5d:78,bridge_name='br-int',has_traffic_filtering=True,id=24f19dd4-108e-4a77-b44d-59a215801baa,network=Network(64267419-8c47-450f-9ba4-afc8c103bf71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f19dd4-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.542 280232 DEBUG nova.objects.instance [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lazy-loading 'pci_devices' on Instance uuid fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.590 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] End _get_guest_xml xml= Dec 5 05:04:53 localhost nova_compute[280228]: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71 Dec 5 05:04:53 localhost nova_compute[280228]: instance-00000008 Dec 5 05:04:53 localhost nova_compute[280228]: 131072 Dec 5 05:04:53 localhost nova_compute[280228]: 1 Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: guest-instance-1 Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:52 Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: 128 Dec 5 05:04:53 localhost nova_compute[280228]: 1 Dec 5 05:04:53 localhost nova_compute[280228]: 0 Dec 5 05:04:53 localhost nova_compute[280228]: 0 Dec 5 05:04:53 localhost nova_compute[280228]: 1 Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: tempest-ServersV294TestFqdnHostnames-1788105697-project-member Dec 5 05:04:53 localhost nova_compute[280228]: tempest-ServersV294TestFqdnHostnames-1788105697 Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost 
nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: RDO Dec 5 05:04:53 localhost nova_compute[280228]: OpenStack Compute Dec 5 05:04:53 localhost nova_compute[280228]: 27.5.2-0.20250829104910.6f8decf.el9 Dec 5 05:04:53 localhost nova_compute[280228]: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71 Dec 5 05:04:53 localhost nova_compute[280228]: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71 Dec 5 05:04:53 localhost nova_compute[280228]: Virtual Machine Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: hvm Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: /dev/urandom Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost 
nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: Dec 5 05:04:53 localhost nova_compute[280228]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.591 280232 DEBUG nova.compute.manager [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Preparing to wait for external event network-vif-plugged-24f19dd4-108e-4a77-b44d-59a215801baa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.591 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Acquiring lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.592 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.592 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.593 280232 DEBUG nova.virt.libvirt.vif [None 
req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T10:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005546419.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=8,image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDzw2WthzVtIu/DY1cGzri7BzSxU7HmCS+BoNdsA+GCtf3cs/xodQfr3DgzYnuyWAEHd/1zmcc7jWm6fN8dTApTgVQsczPN9oKYNm0Ya0z7VFTAoq2drLjEEcX3knjQC2Q==',key_name='tempest-keypair-1076229536',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005546419.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005546419.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41095831ac6247b0a5ea030490af998f',ramdisk_id='',reservation_id='r-ofrvdd2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1788105697',owner_user_name='tempest-ServersV294TestFqdnHostnames-1788105697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T10:04:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7ee4999d08044f63bf075e92f0ca5d11',uuid=fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24f19dd4-108e-4a77-b44d-59a215801baa", "address": "fa:16:3e:5a:5d:78", "network": {"id": "64267419-8c47-450f-9ba4-afc8c103bf71", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-159398337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "41095831ac6247b0a5ea030490af998f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f19dd4-10", "ovs_interfaceid": "24f19dd4-108e-4a77-b44d-59a215801baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.594 280232 DEBUG nova.network.os_vif_util [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Converting VIF {"id": "24f19dd4-108e-4a77-b44d-59a215801baa", "address": "fa:16:3e:5a:5d:78", "network": {"id": "64267419-8c47-450f-9ba4-afc8c103bf71", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-159398337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "41095831ac6247b0a5ea030490af998f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f19dd4-10", "ovs_interfaceid": "24f19dd4-108e-4a77-b44d-59a215801baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.595 280232 DEBUG nova.network.os_vif_util [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:5d:78,bridge_name='br-int',has_traffic_filtering=True,id=24f19dd4-108e-4a77-b44d-59a215801baa,network=Network(64267419-8c47-450f-9ba4-afc8c103bf71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f19dd4-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.595 280232 DEBUG os_vif [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:5d:78,bridge_name='br-int',has_traffic_filtering=True,id=24f19dd4-108e-4a77-b44d-59a215801baa,network=Network(64267419-8c47-450f-9ba4-afc8c103bf71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f19dd4-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.596 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.597 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.598 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.603 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.603 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24f19dd4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.604 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24f19dd4-10, col_values=(('external_ids', {'iface-id': '24f19dd4-108e-4a77-b44d-59a215801baa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:5d:78', 'vm-uuid': 'fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.606 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.609 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.612 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.614 280232 INFO os_vif [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:5d:78,bridge_name='br-int',has_traffic_filtering=True,id=24f19dd4-108e-4a77-b44d-59a215801baa,network=Network(64267419-8c47-450f-9ba4-afc8c103bf71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f19dd4-10')#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.735 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.735 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] No BDM found with device name sda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.736 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] No VIF found with MAC fa:16:3e:5a:5d:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.737 280232 INFO nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Using config drive#033[00m Dec 5 05:04:53 localhost nova_compute[280228]: 2025-12-05 10:04:53.780 280232 DEBUG nova.storage.rbd_utils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] rbd image fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 5 05:04:54 localhost nova_compute[280228]: 2025-12-05 10:04:54.117 280232 INFO nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Creating config drive at /var/lib/nova/instances/fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71/disk.config#033[00m Dec 5 05:04:54 localhost nova_compute[280228]: 2025-12-05 10:04:54.125 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zi8krhy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:04:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e93 do_prune osdmap full prune enabled Dec 5 05:04:54 localhost nova_compute[280228]: 2025-12-05 10:04:54.253 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zi8krhy" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:04:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e94 e94: 6 total, 6 up, 6 in Dec 5 05:04:54 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e94: 6 total, 6 up, 6 in Dec 5 05:04:54 localhost nova_compute[280228]: 2025-12-05 10:04:54.308 280232 DEBUG nova.storage.rbd_utils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] rbd image fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 5 05:04:54 
localhost nova_compute[280228]: 2025-12-05 10:04:54.313 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71/disk.config fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:04:54 localhost nova_compute[280228]: 2025-12-05 10:04:54.555 280232 DEBUG oslo_concurrency.processutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71/disk.config fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:04:54 localhost nova_compute[280228]: 2025-12-05 10:04:54.556 280232 INFO nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Deleting local config drive /var/lib/nova/instances/fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71/disk.config because it was imported into RBD.#033[00m Dec 5 05:04:54 localhost systemd[1]: Started libvirt secret daemon. Dec 5 05:04:54 localhost kernel: device tap24f19dd4-10 entered promiscuous mode Dec 5 05:04:54 localhost NetworkManager[5960]: [1764929094.6867] manager: (tap24f19dd4-10): new Tun device (/org/freedesktop/NetworkManager/Devices/22) Dec 5 05:04:54 localhost ovn_controller[153000]: 2025-12-05T10:04:54Z|00092|binding|INFO|Claiming lport 24f19dd4-108e-4a77-b44d-59a215801baa for this chassis. Dec 5 05:04:54 localhost nova_compute[280228]: 2025-12-05 10:04:54.688 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:54 localhost ovn_controller[153000]: 2025-12-05T10:04:54Z|00093|binding|INFO|24f19dd4-108e-4a77-b44d-59a215801baa: Claiming fa:16:3e:5a:5d:78 10.100.0.9 Dec 5 05:04:54 localhost systemd-udevd[308510]: Network interface NamePolicy= disabled on kernel command line. 
Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.701 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:5d:78 10.100.0.9'], port_security=['fa:16:3e:5a:5d:78 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64267419-8c47-450f-9ba4-afc8c103bf71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41095831ac6247b0a5ea030490af998f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '13f09786-c3de-4f80-a431-bd4239c2ee01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9200024c-1bb2-4d9b-96df-67796d72a9e4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=24f19dd4-108e-4a77-b44d-59a215801baa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.704 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 24f19dd4-108e-4a77-b44d-59a215801baa in datapath 64267419-8c47-450f-9ba4-afc8c103bf71 bound to our chassis#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.711 158820 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64267419-8c47-450f-9ba4-afc8c103bf71#033[00m Dec 5 05:04:54 localhost NetworkManager[5960]: [1764929094.7189] device (tap24f19dd4-10): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 5 05:04:54 localhost NetworkManager[5960]: [1764929094.7197] device (tap24f19dd4-10): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 5 05:04:54 localhost ovn_controller[153000]: 2025-12-05T10:04:54Z|00094|binding|INFO|Setting lport 24f19dd4-108e-4a77-b44d-59a215801baa ovn-installed in OVS Dec 5 05:04:54 localhost ovn_controller[153000]: 2025-12-05T10:04:54Z|00095|binding|INFO|Setting lport 24f19dd4-108e-4a77-b44d-59a215801baa up in Southbound Dec 5 05:04:54 localhost nova_compute[280228]: 2025-12-05 10:04:54.724 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.727 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef8f484-3747-47bc-b7b5-e6cde4d885d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.728 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap64267419-81 in ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.731 158926 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap64267419-80 not 
found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.731 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f517f4-1840-4e14-a688-42e8c27a1338]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.733 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7286c4-0cb2-43c5-b0ca-feaf2fa89658]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost systemd-machined[83348]: New machine qemu-3-instance-00000008. Dec 5 05:04:54 localhost systemd[1]: Started Virtual Machine qemu-3-instance-00000008. Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.746 158957 DEBUG oslo.privsep.daemon [-] privsep: reply[95872076-e78f-4378-beb0-661b202edaba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.772 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[11b2d9c7-93a0-4182-8d7e-615657cc0a72]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.805 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[1a485aa6-3050-4455-ac0e-f332548784d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost NetworkManager[5960]: [1764929094.8133] manager: (tap64267419-80): new Veth device (/org/freedesktop/NetworkManager/Devices/23) Dec 5 05:04:54 localhost systemd-udevd[308515]: Network interface NamePolicy= disabled on kernel command line. 
Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.814 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d7f935-ddbb-404b-b237-32a483967654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.839 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c28b8d-b40e-4a58-a6e8-70feeca18c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.849 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[59ee5192-951f-4a74-95e4-0d3db063a063]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost NetworkManager[5960]: [1764929094.8707] device (tap64267419-80): carrier: link connected Dec 5 05:04:54 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap64267419-81: link becomes ready Dec 5 05:04:54 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap64267419-80: link becomes ready Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.876 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3e4239-2ba9-43c9-bbd2-c845453e1bb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.893 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[12451285-ec08-4e5a-be0b-8c2a15cfd6fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64267419-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:d7:22:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 
'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1220997, 'reachable_time': 20785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308593, 'error': None, 'target': 'ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.904 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[3bce6b33-a902-4175-a4d6-ac309f9d36c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:22e2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1220997, 'tstamp': 1220997}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308597, 'error': None, 'target': 'ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.914 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[90c8924a-9f26-4be8-a5d0-c7a6dec7c69d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 
69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64267419-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:d7:22:e2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1220997, 'reachable_time': 20785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 
'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308599, 'error': None, 'target': 'ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.936 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[7d10d067-e052-4232-b02c-4b8643fdbb9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.977 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[62ea5484-9134-46a5-a710-da1d1d2d99e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.980 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64267419-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.980 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 5 05:04:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:54.980 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64267419-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:04:55 localhost kernel: device tap64267419-80 entered promiscuous mode Dec 5 05:04:55 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. 
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.021 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.023 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:55.025 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64267419-80, col_values=(('external_ids', {'iface-id': '8dc3951d-9e6b-4dd7-9953-6042801ec206'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.028 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:55 localhost ovn_controller[153000]: 2025-12-05T10:04:55Z|00096|binding|INFO|Releasing lport 8dc3951d-9e6b-4dd7-9953-6042801ec206 from this chassis (sb_readonly=0) Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:55.035 158820 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/64267419-8c47-450f-9ba4-afc8c103bf71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/64267419-8c47-450f-9ba4-afc8c103bf71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:55.036 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[cc22f08d-30ce-499a-93ce-b95b801438f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:55.037 158820 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: global Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: log /dev/log local0 debug Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: log-tag haproxy-metadata-proxy-64267419-8c47-450f-9ba4-afc8c103bf71 Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: user root Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: group root Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: maxconn 1024 Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: pidfile /var/lib/neutron/external/pids/64267419-8c47-450f-9ba4-afc8c103bf71.pid.haproxy Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: daemon Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: defaults Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: log global Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: mode http Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: option httplog Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: option dontlognull Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: option http-server-close Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: option forwardfor Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: retries 3 Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: timeout http-request 30s Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: timeout connect 30s Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: timeout client 32s Dec 5 05:04:55 localhost 
ovn_metadata_agent[158815]: timeout server 32s Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: timeout http-keep-alive 30s Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: listen listener Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: bind 169.254.169.254:80 Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: server metadata /var/lib/neutron/metadata_proxy Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: http-request add-header X-OVN-Network-ID 64267419-8c47-450f-9ba4-afc8c103bf71 Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:55.038 158820 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71', 'env', 'PROCESS_TAG=haproxy-64267419-8c47-450f-9ba4-afc8c103bf71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/64267419-8c47-450f-9ba4-afc8c103bf71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.042 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0. 
Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.095447) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37 Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929095095468, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 667, "num_deletes": 257, "total_data_size": 421543, "memory_usage": 435368, "flush_reason": "Manual Compaction"} Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929095100027, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 412173, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23281, "largest_seqno": 23947, "table_properties": {"data_size": 408989, "index_size": 1103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7679, "raw_average_key_size": 18, "raw_value_size": 402290, "raw_average_value_size": 983, "num_data_blocks": 49, "num_entries": 409, "num_filter_entries": 409, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929056, "oldest_key_time": 1764929056, "file_creation_time": 1764929095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}} Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4602 microseconds, and 918 cpu microseconds. Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.100049) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 412173 bytes OK Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.100061) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.102421) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.102432) EVENT_LOG_v1 {"time_micros": 1764929095102428, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.102443) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 418021, prev total WAL file size 418345, number of live WAL files 2. Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.102861) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373633' seq:72057594037927935, type:22 .. '6C6F676D0034303136' seq:0, type:0; will stop at (end) Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(402KB)], [36(18MB)] Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929095102956, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 19827398, "oldest_snapshot_seqno": -1} Dec 5 05:04:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 177 active+clean; 317 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 127 op/s Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 12028 keys, 19721424 bytes, temperature: kUnknown Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929095213082, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 19721424, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19649929, "index_size": 40354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 322437, "raw_average_key_size": 26, "raw_value_size": 19442197, "raw_average_value_size": 1616, "num_data_blocks": 1547, "num_entries": 12028, "num_filter_entries": 12028, "num_deletions": 0, 
"num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764929095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}} Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.213936) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 19721424 bytes Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.216270) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.1 rd, 178.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 18.5 +0.0 blob) out(18.8 +0.0 blob), read-write-amplify(96.0) write-amplify(47.8) OK, records in: 12559, records dropped: 531 output_compression: NoCompression Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.216302) EVENT_LOG_v1 {"time_micros": 1764929095216289, "job": 20, "event": "compaction_finished", "compaction_time_micros": 110691, "compaction_time_cpu_micros": 36331, "output_level": 6, "num_output_files": 1, "total_output_size": 19721424, "num_input_records": 12559, "num_output_records": 12028, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929095216489, "job": 20, "event": "table_file_deletion", "file_number": 38} Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929095219445, "job": 20, "event": "table_file_deletion", "file_number": 36} Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.102772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.219529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.219536) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.219538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.219541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:04:55 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:04:55.219544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.247 280232 DEBUG nova.virt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.247 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] VM Started (Lifecycle Event)#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.272 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.277 280232 DEBUG nova.virt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.277 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] VM Paused (Lifecycle Event)#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.306 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.313 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.350 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Dec 5 05:04:55 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:55.457 261902 INFO neutron.agent.linux.ip_lib [None req-295406e4-63b5-4e08-a56d-7167d962d41a - - - - - -] Device tap0a18e7f0-8a cannot be used as it has no MAC address#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.489 280232 DEBUG nova.compute.manager [req-6e0de7a4-d941-4ebd-96f2-65cc0622520b req-132a2e00-7eb6-4b29-a254-11b501b3c3c8 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Received event network-vif-plugged-24f19dd4-108e-4a77-b44d-59a215801baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.490 280232 DEBUG oslo_concurrency.lockutils [req-6e0de7a4-d941-4ebd-96f2-65cc0622520b req-132a2e00-7eb6-4b29-a254-11b501b3c3c8 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.490 280232 DEBUG oslo_concurrency.lockutils [req-6e0de7a4-d941-4ebd-96f2-65cc0622520b req-132a2e00-7eb6-4b29-a254-11b501b3c3c8 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.491 280232 DEBUG oslo_concurrency.lockutils [req-6e0de7a4-d941-4ebd-96f2-65cc0622520b req-132a2e00-7eb6-4b29-a254-11b501b3c3c8 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.492 280232 DEBUG nova.compute.manager [req-6e0de7a4-d941-4ebd-96f2-65cc0622520b req-132a2e00-7eb6-4b29-a254-11b501b3c3c8 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Processing event network-vif-plugged-24f19dd4-108e-4a77-b44d-59a215801baa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.493 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:55 localhost kernel: device tap0a18e7f0-8a entered promiscuous mode Dec 5 05:04:55 localhost NetworkManager[5960]: [1764929095.4966] manager: (tap0a18e7f0-8a): new Generic device (/org/freedesktop/NetworkManager/Devices/24) Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.496 280232 DEBUG nova.compute.manager [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Instance event wait completed in 0 seconds for 
network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 5 05:04:55 localhost systemd-udevd[308560]: Network interface NamePolicy= disabled on kernel command line. Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.498 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:04:55 localhost ovn_controller[153000]: 2025-12-05T10:04:55Z|00097|binding|INFO|Claiming lport 0a18e7f0-8a64-414c-b78f-bf5d472c1d5f for this chassis. Dec 5 05:04:55 localhost ovn_controller[153000]: 2025-12-05T10:04:55Z|00098|binding|INFO|0a18e7f0-8a64-414c-b78f-bf5d472c1d5f: Claiming unknown Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.509 280232 DEBUG nova.virt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.509 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] VM Resumed (Lifecycle Event)#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.513 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:55.513 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-e15d083f-7984-4879-a88e-c9228d36c3fe', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e15d083f-7984-4879-a88e-c9228d36c3fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9b8ae2ff8fc42959dc64d209d5490df', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4521a7c1-80de-477c-9559-e5b4faa7b6fd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0a18e7f0-8a64-414c-b78f-bf5d472c1d5f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.523 280232 INFO nova.virt.libvirt.driver [-] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Instance spawned successfully.#033[00m Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.523 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: 
fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.525 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.527 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 5 05:04:55 localhost ovn_controller[153000]: 2025-12-05T10:04:55Z|00099|binding|INFO|Setting lport 0a18e7f0-8a64-414c-b78f-bf5d472c1d5f ovn-installed in OVS
Dec 5 05:04:55 localhost ovn_controller[153000]: 2025-12-05T10:04:55Z|00100|binding|INFO|Setting lport 0a18e7f0-8a64-414c-b78f-bf5d472c1d5f up in Southbound
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.530 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.537 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.549 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.549 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.550 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.550 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.551 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.551 280232 DEBUG nova.virt.libvirt.driver [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.563 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.566 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.620 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.630 280232 INFO nova.compute.manager [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Took 9.56 seconds to spawn the instance on the hypervisor.
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.631 280232 DEBUG nova.compute.manager [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 5 05:04:55 localhost podman[308687]:
Dec 5 05:04:55 localhost podman[308687]: 2025-12-05 10:04:55.662450553 +0000 UTC m=+0.109246779 container create 1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.670 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:55 localhost podman[308687]: 2025-12-05 10:04:55.604818668 +0000 UTC m=+0.051614994 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 5 05:04:55 localhost systemd[1]: Started libpod-conmon-1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12.scope.
Dec 5 05:04:55 localhost systemd[1]: Started libcrun container.
Dec 5 05:04:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e6cf32151b7d14e9adf4ec899f780f48f1f0c971aec6cef601c989e0d1c1200/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.730 280232 INFO nova.compute.manager [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Took 10.49 seconds to build instance.
Dec 5 05:04:55 localhost podman[308687]: 2025-12-05 10:04:55.737289417 +0000 UTC m=+0.184085693 container init 1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:04:55 localhost podman[308687]: 2025-12-05 10:04:55.745773137 +0000 UTC m=+0.192569373 container start 1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:04:55 localhost nova_compute[280228]: 2025-12-05 10:04:55.748 280232 DEBUG oslo_concurrency.lockutils [None req-6f6b3a03-d3f1-498c-914d-2ed9526c5336 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 10.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:04:55 localhost neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71[308724]: [NOTICE] (308729) : New worker (308732) forked
Dec 5 05:04:55 localhost neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71[308724]: [NOTICE] (308729) : Loading success.
Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:55.813 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 0a18e7f0-8a64-414c-b78f-bf5d472c1d5f in datapath e15d083f-7984-4879-a88e-c9228d36c3fe unbound from our chassis
Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:55.818 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e15d083f-7984-4879-a88e-c9228d36c3fe or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:04:55 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:55.820 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[e3bd7bee-38a8-4d5e-bfc9-dcf312922cab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:04:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 05:04:55 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 05:04:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 5 05:04:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:04:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:04:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:04:55 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev d927baee-b5d8-4d37-b8a5-10a134986216 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:04:55 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev d927baee-b5d8-4d37-b8a5-10a134986216 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:04:55 localhost ceph-mgr[286454]: [progress INFO root] Completed event d927baee-b5d8-4d37-b8a5-10a134986216 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 5 05:04:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 05:04:55 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 05:04:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 5 05:04:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e95 e95: 6 total, 6 up, 6 in
Dec 5 05:04:56 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e95: 6 total, 6 up, 6 in
Dec 5 05:04:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:04:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:04:56 localhost nova_compute[280228]: 2025-12-05 10:04:56.650 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:56 localhost podman[308800]:
Dec 5 05:04:56 localhost podman[308800]: 2025-12-05 10:04:56.71102026 +0000 UTC m=+0.083972054 container create 1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 5 05:04:56 localhost systemd[1]: Started libpod-conmon-1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140.scope.
Dec 5 05:04:56 localhost systemd[1]: Started libcrun container.
Dec 5 05:04:56 localhost podman[308800]: 2025-12-05 10:04:56.672350006 +0000 UTC m=+0.045301850 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:04:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97063e085a0521f15bb4ef8e4ef1288f2f9fa2a53da1890acbd82e4a673dda3d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:04:56 localhost podman[308800]: 2025-12-05 10:04:56.794279232 +0000 UTC m=+0.167231056 container init 1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 05:04:56 localhost podman[308800]: 2025-12-05 10:04:56.804602589 +0000 UTC m=+0.177554413 container start 1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 5 05:04:56 localhost dnsmasq[308819]: started, version 2.85 cachesize 150
Dec 5 05:04:56 localhost dnsmasq[308819]: DNS service limited to local subnets
Dec 5 05:04:56 localhost dnsmasq[308819]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:04:56 localhost dnsmasq[308819]: warning: no upstream servers configured
Dec 5 05:04:56 localhost dnsmasq-dhcp[308819]: DHCP, static leases only on 19.80.0.0, lease time 1d
Dec 5 05:04:56 localhost dnsmasq[308819]: read /var/lib/neutron/dhcp/e15d083f-7984-4879-a88e-c9228d36c3fe/addn_hosts - 0 addresses
Dec 5 05:04:56 localhost dnsmasq-dhcp[308819]: read /var/lib/neutron/dhcp/e15d083f-7984-4879-a88e-c9228d36c3fe/host
Dec 5 05:04:56 localhost dnsmasq-dhcp[308819]: read /var/lib/neutron/dhcp/e15d083f-7984-4879-a88e-c9228d36c3fe/opts
Dec 5 05:04:57 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:57.019 261902 INFO neutron.agent.dhcp.agent [None req-41d561dd-a3b1-4e12-a024-a262023d13b0 - - - - - -] DHCP configuration for ports {'3128431f-a8bb-4804-8e0c-9de8f8a5235e'} is completed
Dec 5 05:04:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v103: 177 pgs: 177 active+clean; 397 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 12 MiB/s wr, 377 op/s
Dec 5 05:04:57 localhost openstack_network_exporter[241668]: ERROR 10:04:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:04:57 localhost openstack_network_exporter[241668]: ERROR 10:04:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:04:57 localhost openstack_network_exporter[241668]: ERROR 10:04:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:04:57 localhost openstack_network_exporter[241668]: ERROR 10:04:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:04:57 localhost openstack_network_exporter[241668]:
Dec 5 05:04:57 localhost openstack_network_exporter[241668]: ERROR 10:04:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:04:57 localhost openstack_network_exporter[241668]:
Dec 5 05:04:57 localhost neutron_sriov_agent[254996]: 2025-12-05 10:04:57.208 2 INFO neutron.agent.securitygroups_rpc [None req-e20708ac-e402-4240-afd4-18fd4cece83c 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Security group member updated ['8c9500c3-6ac9-452e-a652-72bddc07be6d']
Dec 5 05:04:57 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:57.281 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:56Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=63ba056b-fce6-4938-bf24-a421b305533d, ip_allocation=immediate, mac_address=fa:16:3e:c5:be:46, name=tempest-subport-832547800, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:04:52Z, description=, dns_domain=, id=e15d083f-7984-4879-a88e-c9228d36c3fe, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-subport_net-639567759, port_security_enabled=True, project_id=a9b8ae2ff8fc42959dc64d209d5490df, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44757, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=508, status=ACTIVE, subnets=['685e7e05-1b6d-4992-ad91-f26487a5a87e'], tags=[], tenant_id=a9b8ae2ff8fc42959dc64d209d5490df, updated_at=2025-12-05T10:04:54Z, vlan_transparent=None, network_id=e15d083f-7984-4879-a88e-c9228d36c3fe, port_security_enabled=True, project_id=a9b8ae2ff8fc42959dc64d209d5490df, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['8c9500c3-6ac9-452e-a652-72bddc07be6d'], standard_attr_id=529, status=DOWN, tags=[], tenant_id=a9b8ae2ff8fc42959dc64d209d5490df, updated_at=2025-12-05T10:04:56Z on network e15d083f-7984-4879-a88e-c9228d36c3fe
Dec 5 05:04:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 5 05:04:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e96 e96: 6 total, 6 up, 6 in
Dec 5 05:04:57 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e96: 6 total, 6 up, 6 in
Dec 5 05:04:57 localhost dnsmasq[307854]: read /var/lib/neutron/dhcp/99c8e076-e96f-4e1b-b373-8e23b3bbc3da/addn_hosts - 0 addresses
Dec 5 05:04:57 localhost dnsmasq-dhcp[307854]: read /var/lib/neutron/dhcp/99c8e076-e96f-4e1b-b373-8e23b3bbc3da/host
Dec 5 05:04:57 localhost dnsmasq-dhcp[307854]: read /var/lib/neutron/dhcp/99c8e076-e96f-4e1b-b373-8e23b3bbc3da/opts
Dec 5 05:04:57 localhost podman[308837]: 2025-12-05 10:04:57.41937225 +0000 UTC m=+0.099914422 container kill ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99c8e076-e96f-4e1b-b373-8e23b3bbc3da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 5 05:04:57 localhost dnsmasq[308819]: read /var/lib/neutron/dhcp/e15d083f-7984-4879-a88e-c9228d36c3fe/addn_hosts - 1 addresses
Dec 5 05:04:57 localhost dnsmasq-dhcp[308819]: read /var/lib/neutron/dhcp/e15d083f-7984-4879-a88e-c9228d36c3fe/host
Dec 5 05:04:57 localhost dnsmasq-dhcp[308819]: read /var/lib/neutron/dhcp/e15d083f-7984-4879-a88e-c9228d36c3fe/opts
Dec 5 05:04:57 localhost podman[308867]: 2025-12-05 10:04:57.586166433 +0000 UTC m=+0.082153999 container kill 1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 5 05:04:57 localhost nova_compute[280228]: 2025-12-05 10:04:57.651 280232 DEBUG nova.compute.manager [req-9486df9d-47f6-42bd-88c7-05066d562ef9 req-1f9ae374-8635-468e-b4ac-2ec236ea63ea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Received event network-vif-plugged-24f19dd4-108e-4a77-b44d-59a215801baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 5 05:04:57 localhost nova_compute[280228]: 2025-12-05 10:04:57.651 280232 DEBUG oslo_concurrency.lockutils [req-9486df9d-47f6-42bd-88c7-05066d562ef9 req-1f9ae374-8635-468e-b4ac-2ec236ea63ea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:04:57 localhost nova_compute[280228]: 2025-12-05 10:04:57.651 280232 DEBUG oslo_concurrency.lockutils [req-9486df9d-47f6-42bd-88c7-05066d562ef9 req-1f9ae374-8635-468e-b4ac-2ec236ea63ea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:04:57 localhost nova_compute[280228]: 2025-12-05 10:04:57.651 280232 DEBUG oslo_concurrency.lockutils [req-9486df9d-47f6-42bd-88c7-05066d562ef9 req-1f9ae374-8635-468e-b4ac-2ec236ea63ea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:04:57 localhost nova_compute[280228]: 2025-12-05 10:04:57.652 280232 DEBUG nova.compute.manager [req-9486df9d-47f6-42bd-88c7-05066d562ef9 req-1f9ae374-8635-468e-b4ac-2ec236ea63ea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] No waiting events found dispatching network-vif-plugged-24f19dd4-108e-4a77-b44d-59a215801baa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 5 05:04:57 localhost nova_compute[280228]: 2025-12-05 10:04:57.652 280232 WARNING nova.compute.manager [req-9486df9d-47f6-42bd-88c7-05066d562ef9 req-1f9ae374-8635-468e-b4ac-2ec236ea63ea c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Received unexpected event network-vif-plugged-24f19dd4-108e-4a77-b44d-59a215801baa for instance with vm_state active and task_state None.
Dec 5 05:04:57 localhost ovn_controller[153000]: 2025-12-05T10:04:57Z|00101|binding|INFO|Releasing lport 2b95cbbe-bd92-41b2-b6d9-4d4188a8e8d7 from this chassis (sb_readonly=0)
Dec 5 05:04:57 localhost ovn_controller[153000]: 2025-12-05T10:04:57Z|00102|binding|INFO|Setting lport 2b95cbbe-bd92-41b2-b6d9-4d4188a8e8d7 down in Southbound
Dec 5 05:04:57 localhost nova_compute[280228]: 2025-12-05 10:04:57.717 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:57 localhost kernel: device tap2b95cbbe-bd left promiscuous mode
Dec 5 05:04:57 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:57.728 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-99c8e076-e96f-4e1b-b373-8e23b3bbc3da', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99c8e076-e96f-4e1b-b373-8e23b3bbc3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '38ca44ea29964cdc953c4acef5715d76', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c248ceac-4aac-40e0-9554-d26848fb6357, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2b95cbbe-bd92-41b2-b6d9-4d4188a8e8d7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:04:57 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:57.732 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 2b95cbbe-bd92-41b2-b6d9-4d4188a8e8d7 in datapath 99c8e076-e96f-4e1b-b373-8e23b3bbc3da unbound from our chassis
Dec 5 05:04:57 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:57.737 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99c8e076-e96f-4e1b-b373-8e23b3bbc3da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:04:57 localhost ovn_metadata_agent[158815]: 2025-12-05 10:04:57.738 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b3c21f-2e9b-4a56-99c7-f3fd89262c39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:04:57 localhost nova_compute[280228]: 2025-12-05 10:04:57.744 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:57 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:04:57.886 261902 INFO neutron.agent.dhcp.agent [None req-b3199fc5-9729-43c4-9e85-7965148d900f - - - - - -] DHCP configuration for ports {'63ba056b-fce6-4938-bf24-a421b305533d'} is completed
Dec 5 05:04:58 localhost nova_compute[280228]: 2025-12-05 10:04:58.607 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:04:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 177 active+clean; 397 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 290 op/s
Dec 5 05:05:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:05:00 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events
Dec 5 05:05:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 05:05:00 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:05:00 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:05:00 localhost nova_compute[280228]: 2025-12-05 10:05:00.827 280232 DEBUG nova.compute.manager [req-c4b23ee1-754b-44e6-a14f-ca75612e25b3 req-0ea3b0e1-700c-4202-9be9-3c8d9d7c5930 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Received event network-changed-24f19dd4-108e-4a77-b44d-59a215801baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 5 05:05:00 localhost nova_compute[280228]: 2025-12-05 10:05:00.828 280232 DEBUG nova.compute.manager [req-c4b23ee1-754b-44e6-a14f-ca75612e25b3 req-0ea3b0e1-700c-4202-9be9-3c8d9d7c5930 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Refreshing instance network info cache due to event network-changed-24f19dd4-108e-4a77-b44d-59a215801baa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 5 05:05:00 localhost nova_compute[280228]: 2025-12-05 10:05:00.828 280232 DEBUG oslo_concurrency.lockutils [req-c4b23ee1-754b-44e6-a14f-ca75612e25b3 req-0ea3b0e1-700c-4202-9be9-3c8d9d7c5930 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "refresh_cache-fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 5 05:05:00 localhost nova_compute[280228]: 2025-12-05 10:05:00.828 280232 DEBUG oslo_concurrency.lockutils [req-c4b23ee1-754b-44e6-a14f-ca75612e25b3 req-0ea3b0e1-700c-4202-9be9-3c8d9d7c5930 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquired lock "refresh_cache-fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 5 05:05:00 localhost nova_compute[280228]: 2025-12-05 10:05:00.829 280232 DEBUG nova.network.neutron [req-c4b23ee1-754b-44e6-a14f-ca75612e25b3 req-0ea3b0e1-700c-4202-9be9-3c8d9d7c5930 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Refreshing network info cache for port 24f19dd4-108e-4a77-b44d-59a215801baa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 5 05:05:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v106: 177 pgs: 177 active+clean; 341 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 6.9 MiB/s wr, 329 op/s
Dec 5 05:05:01 localhost nova_compute[280228]: 2025-12-05 10:05:01.654 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:05:02 localhost nova_compute[280228]: 2025-12-05 10:05:02.263 280232 DEBUG nova.network.neutron [req-c4b23ee1-754b-44e6-a14f-ca75612e25b3 req-0ea3b0e1-700c-4202-9be9-3c8d9d7c5930 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Updated VIF entry in instance network info cache for port 24f19dd4-108e-4a77-b44d-59a215801baa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 5 05:05:02 localhost nova_compute[280228]: 2025-12-05 10:05:02.263 280232 DEBUG nova.network.neutron [req-c4b23ee1-754b-44e6-a14f-ca75612e25b3 req-0ea3b0e1-700c-4202-9be9-3c8d9d7c5930 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Updating instance_info_cache with network_info: [{"id": "24f19dd4-108e-4a77-b44d-59a215801baa", "address": "fa:16:3e:5a:5d:78", "network": {"id": "64267419-8c47-450f-9ba4-afc8c103bf71", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-159398337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "41095831ac6247b0a5ea030490af998f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f19dd4-10", "ovs_interfaceid": "24f19dd4-108e-4a77-b44d-59a215801baa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 5 05:05:02 localhost nova_compute[280228]: 2025-12-05 10:05:02.291 280232 DEBUG oslo_concurrency.lockutils [req-c4b23ee1-754b-44e6-a14f-ca75612e25b3 req-0ea3b0e1-700c-4202-9be9-3c8d9d7c5930 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Releasing lock "refresh_cache-fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 5 05:05:02 localhost ovn_controller[153000]: 2025-12-05T10:05:02Z|00103|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:05:02 localhost ovn_controller[153000]: 2025-12-05T10:05:02Z|00104|binding|INFO|Releasing lport 8dc3951d-9e6b-4dd7-9953-6042801ec206 from this chassis (sb_readonly=0)
Dec 5 05:05:02 localhost nova_compute[280228]: 2025-12-05 10:05:02.973 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:05:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 177 active+clean; 317 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 12 MiB/s rd, 5.9 MiB/s wr, 399 op/s
Dec 5 05:05:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 5 05:05:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e97 e97: 6 total, 6 up, 6 in
Dec 5 05:05:03 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e97: 6 total, 6 up, 6 in
Dec 5 05:05:03 localhost nova_compute[280228]: 2025-12-05 10:05:03.609 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:05:03 localhost dnsmasq[307854]: exiting on receipt of SIGTERM
Dec 5 05:05:03 localhost podman[308914]: 2025-12-05 10:05:03.73417501 +0000 UTC m=+0.078264089 container kill ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99c8e076-e96f-4e1b-b373-8e23b3bbc3da, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 05:05:03 localhost systemd[1]: libpod-ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130.scope: Deactivated successfully.
Dec 5 05:05:03 localhost podman[308927]: 2025-12-05 10:05:03.810117458 +0000 UTC m=+0.059065271 container died ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99c8e076-e96f-4e1b-b373-8e23b3bbc3da, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 5 05:05:03 localhost podman[308927]: 2025-12-05 10:05:03.859068678 +0000 UTC m=+0.108016411 container cleanup ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99c8e076-e96f-4e1b-b373-8e23b3bbc3da, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:05:03 localhost systemd[1]: libpod-conmon-ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130.scope: Deactivated successfully.
Dec 5 05:05:03 localhost podman[308929]: 2025-12-05 10:05:03.906287705 +0000 UTC m=+0.145703426 container remove ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99c8e076-e96f-4e1b-b373-8e23b3bbc3da, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 5 05:05:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:03.911 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:05:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:03.912 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:05:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:03.912 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:05:03 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:03.976 261902 INFO neutron.agent.dhcp.agent [None req-a4444bc1-8f08-4b50-8ab2-ea81cf794edb - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:05:04 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:04.099 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:05:04 localhost systemd[1]: tmp-crun.h2IRUP.mount: Deactivated successfully.
Dec 5 05:05:04 localhost systemd[1]: var-lib-containers-storage-overlay-579ad33473e355a80ccf53a2b9bc88d1d123512e62e0577480d5c2ab3d5fc065-merged.mount: Deactivated successfully.
Dec 5 05:05:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab2f396c852736eaff1ccf1197deedfb8e83a8759c5b6a72bee03d0ef804c130-userdata-shm.mount: Deactivated successfully.
Dec 5 05:05:04 localhost systemd[1]: run-netns-qdhcp\x2d99c8e076\x2de96f\x2d4e1b\x2db373\x2d8e23b3bbc3da.mount: Deactivated successfully.
Dec 5 05:05:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:05:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 5 05:05:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e98 e98: 6 total, 6 up, 6 in
Dec 5 05:05:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v110: 177 pgs: 177 active+clean; 317 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 2.6 KiB/s wr, 186 op/s
Dec 5 05:05:05 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e98: 6 total, 6 up, 6 in
Dec 5 05:05:05 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:05.847 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005546421.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:51Z, description=, device_id=d26c2a55-022b-4361-bccb-625a3a43e6f8, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-livemigrationtest-server-755398326, extra_dhcp_opts=[], fixed_ips=[], id=63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6, ip_allocation=immediate, mac_address=fa:16:3e:57:e9:2f, name=tempest-parent-1901059839, network_id=b18d5894-62f7-4f8f-a24c-429b8805e981, port_security_enabled=True, project_id=a9b8ae2ff8fc42959dc64d209d5490df, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['8c9500c3-6ac9-452e-a652-72bddc07be6d'], standard_attr_id=496, status=DOWN, tags=[], tenant_id=a9b8ae2ff8fc42959dc64d209d5490df, trunk_details=sub_ports=[], trunk_id=0c047c96-814b-4f70-a17a-8978d479cbd6, updated_at=2025-12-05T10:05:05Z on network b18d5894-62f7-4f8f-a24c-429b8805e981
Dec 5 05:05:06 localhost podman[308973]: 2025-12-05 10:05:06.072660243 +0000 UTC m=+0.040377958 container kill 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:05:06 localhost dnsmasq[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/addn_hosts - 2 addresses
Dec 5 05:05:06 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/host
Dec 5 05:05:06 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/opts
Dec 5 05:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:05:06 localhost podman[308987]: 2025-12-05 10:05:06.225952319 +0000 UTC m=+0.132817359 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 5 05:05:06 localhost podman[308989]: 2025-12-05 10:05:06.178537966 +0000 UTC m=+0.080787837 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 5 05:05:06 localhost podman[308990]: 2025-12-05 10:05:06.202260434 +0000 UTC m=+0.100291284 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:05:06 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:06.257 261902 INFO neutron.agent.dhcp.agent [None req-766e63a3-82e5-46ab-ba08-7b13ca2928d9 - - - - - -] DHCP configuration for ports {'63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6'} is completed
Dec 5 05:05:06 localhost podman[308987]: 2025-12-05 10:05:06.262057286 +0000 UTC m=+0.168922336 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 05:05:06 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:05:06 localhost podman[308990]: 2025-12-05 10:05:06.288679661 +0000 UTC m=+0.186710501 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 5 05:05:06 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:05:06 localhost podman[308989]: 2025-12-05 10:05:06.314825403 +0000 UTC m=+0.217075283 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Dec 5 05:05:06 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:05:06 localhost nova_compute[280228]: 2025-12-05 10:05:06.657 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:05:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v111: 177 pgs: 177 active+clean; 456 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 9.5 MiB/s rd, 11 MiB/s wr, 385 op/s
Dec 5 05:05:08 localhost nova_compute[280228]: 2025-12-05 10:05:08.610 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:05:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v112: 177 pgs: 177 active+clean; 456 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 8.0 MiB/s rd, 11 MiB/s wr, 321 op/s
Dec 5 05:05:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 5 05:05:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e99 e99: 6 total, 6 up, 6 in
Dec 5 05:05:09 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e99: 6 total, 6 up, 6 in
Dec 5 05:05:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:05:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 5 05:05:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e100 e100: 6 total, 6 up, 6 in
Dec 5 05:05:10 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e100: 6 total, 6 up, 6 in
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.120868) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929110120923, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 522, "num_deletes": 252, "total_data_size": 342874, "memory_usage": 351768, "flush_reason": "Manual Compaction"}
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929110126608, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 319939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23948, "largest_seqno": 24469, "table_properties": {"data_size": 317151, "index_size": 771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7802, "raw_average_key_size": 21, "raw_value_size": 311224, "raw_average_value_size": 848, "num_data_blocks": 34, "num_entries": 367, "num_filter_entries": 367, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929095, "oldest_key_time": 1764929095, "file_creation_time": 1764929110, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 5784 microseconds, and 1885 cpu microseconds.
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.126656) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 319939 bytes OK
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.126676) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.128589) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.128612) EVENT_LOG_v1 {"time_micros": 1764929110128606, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.128632) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 339796, prev total WAL file size 339796, number of live WAL files 2.
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.129234) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373537' seq:72057594037927935, type:22 .. '6D6772737461740034303038' seq:0, type:0; will stop at (end)
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(312KB)], [39(18MB)]
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929110129460, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 20041363, "oldest_snapshot_seqno": -1}
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 11874 keys, 17829793 bytes, temperature: kUnknown
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929110223626, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 17829793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17763867, "index_size": 35169, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29701, "raw_key_size": 319577, "raw_average_key_size": 26, "raw_value_size": 17563273, "raw_average_value_size": 1479, "num_data_blocks": 1330, "num_entries": 11874, "num_filter_entries": 11874, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764929110, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.224022) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 17829793 bytes
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.227403) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 212.6 rd, 189.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 18.8 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(118.4) write-amplify(55.7) OK, records in: 12395, records dropped: 521 output_compression: NoCompression
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.227554) EVENT_LOG_v1 {"time_micros": 1764929110227537, "job": 22, "event": "compaction_finished", "compaction_time_micros": 94275, "compaction_time_cpu_micros": 42870, "output_level": 6, "num_output_files": 1, "total_output_size": 17829793, "num_input_records": 12395, "num_output_records": 11874, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929110227812, "job": 22, "event": "table_file_deletion", "file_number": 41}
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929110231187, "job": 22, "event": "table_file_deletion", "file_number": 39}
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.129057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.231301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.231314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.231319) [db/db_impl/db_impl_compaction_flush.cc:1903]
[default] Manual compaction starting Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.231323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:05:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:05:10.231327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:05:10 localhost ovn_controller[153000]: 2025-12-05T10:05:10Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5a:5d:78 10.100.0.9 Dec 5 05:05:10 localhost ovn_controller[153000]: 2025-12-05T10:05:10Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:5d:78 10.100.0.9 Dec 5 05:05:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v115: 177 pgs: 177 active+clean; 441 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 18 MiB/s wr, 592 op/s Dec 5 05:05:11 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:11.482 261902 INFO neutron.agent.linux.ip_lib [None req-3f5c2bf9-01f2-476e-9710-d189b9e831fc - - - - - -] Device tap7bdde4bd-39 cannot be used as it has no MAC address#033[00m Dec 5 05:05:11 localhost nova_compute[280228]: 2025-12-05 10:05:11.506 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:11 localhost nova_compute[280228]: 2025-12-05 10:05:11.509 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:05:11 localhost kernel: device tap7bdde4bd-39 entered promiscuous mode Dec 5 05:05:11 localhost nova_compute[280228]: 2025-12-05 10:05:11.513 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:11 localhost NetworkManager[5960]: [1764929111.5140] manager: (tap7bdde4bd-39): new Generic device (/org/freedesktop/NetworkManager/Devices/25) Dec 5 05:05:11 localhost ovn_controller[153000]: 2025-12-05T10:05:11Z|00105|binding|INFO|Claiming lport 7bdde4bd-39aa-4d26-ab23-a58e26abd8bc for this chassis. Dec 5 05:05:11 localhost ovn_controller[153000]: 2025-12-05T10:05:11Z|00106|binding|INFO|7bdde4bd-39aa-4d26-ab23-a58e26abd8bc: Claiming unknown Dec 5 05:05:11 localhost systemd-udevd[309060]: Network interface NamePolicy= disabled on kernel command line. 
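
[editor's note] The port-binding records just above bracket a measurable interval: ovn-controller logs "Claiming lport ..." when it sees the Port_Binding update and "Setting lport ... up in Southbound" once the flows are installed (the matching "up" event for this lport appears a few records below). A rough latency probe over those two messages; it assumes the "2025-12-05T10:05:11Z|...|binding|INFO|" prefix format shown here, which only gives one-second resolution, and one binding record per timestamped line:

import re
import sys
from datetime import datetime

CLAIM = re.compile(r"\|binding\|INFO\|Claiming lport (\S+) for this chassis")
UP = re.compile(r"\|binding\|INFO\|Setting lport (\S+) up in Southbound")
STAMP = re.compile(r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})Z\|")

claims = {}
for line in sys.stdin:
    ts = STAMP.search(line)
    if not ts:
        continue
    when = datetime.fromisoformat(ts.group(1))
    if m := CLAIM.search(line):
        claims[m.group(1)] = when
    elif (m := UP.search(line)) and m.group(1) in claims:
        delta = when - claims.pop(m.group(1))
        print(f"lport {m.group(1)} claim->up in {delta.total_seconds():.0f}s")
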
Dec 5 05:05:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:11.532 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-161caaa5-5324-49ef-bee7-c70abf729b34', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-161caaa5-5324-49ef-bee7-c70abf729b34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '70b1a86dcef64e6ebbe96bd40de85c61', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c15b2407-0871-4635-90fa-4e4e349e8ccd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7bdde4bd-39aa-4d26-ab23-a58e26abd8bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:05:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:11.536 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 7bdde4bd-39aa-4d26-ab23-a58e26abd8bc in datapath 161caaa5-5324-49ef-bee7-c70abf729b34 bound to our chassis#033[00m Dec 5 05:05:11 localhost journal[228791]: ethtool ioctl error on tap7bdde4bd-39: No such device Dec 5 05:05:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:11.542 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port 008fc095-570d-4d33-a141-3cd7a7cffeb2 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 5 05:05:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:11.543 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 161caaa5-5324-49ef-bee7-c70abf729b34, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:05:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:11.544 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[138d68c8-ab5f-49dd-a2e1-2591ccb04cb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:11 localhost journal[228791]: ethtool ioctl error on tap7bdde4bd-39: No such device Dec 5 05:05:11 localhost ovn_controller[153000]: 2025-12-05T10:05:11Z|00107|binding|INFO|Setting lport 7bdde4bd-39aa-4d26-ab23-a58e26abd8bc ovn-installed in OVS Dec 5 05:05:11 localhost ovn_controller[153000]: 2025-12-05T10:05:11Z|00108|binding|INFO|Setting lport 7bdde4bd-39aa-4d26-ab23-a58e26abd8bc up in Southbound Dec 5 05:05:11 localhost nova_compute[280228]: 2025-12-05 10:05:11.551 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:11 localhost journal[228791]: ethtool ioctl error on tap7bdde4bd-39: No such device Dec 5 05:05:11 localhost journal[228791]: ethtool ioctl error on tap7bdde4bd-39: No such device Dec 5 05:05:11 
localhost journal[228791]: ethtool ioctl error on tap7bdde4bd-39: No such device Dec 5 05:05:11 localhost journal[228791]: ethtool ioctl error on tap7bdde4bd-39: No such device Dec 5 05:05:11 localhost journal[228791]: ethtool ioctl error on tap7bdde4bd-39: No such device Dec 5 05:05:11 localhost journal[228791]: ethtool ioctl error on tap7bdde4bd-39: No such device Dec 5 05:05:11 localhost nova_compute[280228]: 2025-12-05 10:05:11.585 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:11 localhost nova_compute[280228]: 2025-12-05 10:05:11.615 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:11 localhost nova_compute[280228]: 2025-12-05 10:05:11.660 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:12 localhost podman[309131]: Dec 5 05:05:12 localhost podman[309131]: 2025-12-05 10:05:12.481352258 +0000 UTC m=+0.088004278 container create 3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-161caaa5-5324-49ef-bee7-c70abf729b34, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 5 05:05:12 localhost nova_compute[280228]: 2025-12-05 10:05:12.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:05:12 localhost nova_compute[280228]: 2025-12-05 10:05:12.509 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:05:12 localhost nova_compute[280228]: 2025-12-05 10:05:12.509 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:05:12 localhost podman[309131]: 2025-12-05 10:05:12.434053888 +0000 UTC m=+0.040705938 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:05:12 localhost systemd[1]: Started libpod-conmon-3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b.scope. Dec 5 05:05:12 localhost systemd[1]: tmp-crun.nZ7juk.mount: Deactivated successfully. Dec 5 05:05:12 localhost systemd[1]: Started libcrun container. 
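
[editor's note] podman stamps every lifecycle event with both wall-clock time and a monotonic offset ("m=+0.088004278"), so the create -> init -> start spread for the dnsmasq container in this sequence (roughly +0.088 s, +0.180 s, +0.192 s) can be pulled straight out of the log. A sketch over those event lines; note the m=+ offsets share one clock only within a single podman invocation:

import re
import sys
from collections import defaultdict

# e.g. "... m=+0.088004278 container create 3f5cf34f4b2b... (image=...)"
EVENT = re.compile(r"m=\+(\d+\.\d+) container (create|init|start) ([0-9a-f]{12,})")

lifecycle = defaultdict(dict)
for line in sys.stdin:
    # findall copes with several fused records on one physical line
    for offset, event, cid in EVENT.findall(line):
        lifecycle[cid[:12]][event] = float(offset)

for cid, ev in sorted(lifecycle.items()):
    if "create" in ev and "start" in ev:
        print(f"{cid}: create->start {ev['start'] - ev['create']:.3f}s")
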
Dec 5 05:05:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16b8bc2891c944a106b96b112a0ace6d0589c94208d18981c1cdf482a771bdd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:05:12 localhost podman[309131]: 2025-12-05 10:05:12.573706498 +0000 UTC m=+0.180358518 container init 3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-161caaa5-5324-49ef-bee7-c70abf729b34, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:05:12 localhost podman[309131]: 2025-12-05 10:05:12.585658714 +0000 UTC m=+0.192310734 container start 3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-161caaa5-5324-49ef-bee7-c70abf729b34, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:05:12 localhost dnsmasq[309149]: started, version 2.85 cachesize 150 Dec 5 05:05:12 localhost dnsmasq[309149]: DNS service limited to local subnets Dec 5 05:05:12 localhost dnsmasq[309149]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:05:12 localhost dnsmasq[309149]: warning: no upstream servers configured Dec 5 05:05:12 localhost dnsmasq-dhcp[309149]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 5 05:05:12 localhost dnsmasq[309149]: read /var/lib/neutron/dhcp/161caaa5-5324-49ef-bee7-c70abf729b34/addn_hosts - 0 addresses Dec 5 05:05:12 localhost dnsmasq-dhcp[309149]: read /var/lib/neutron/dhcp/161caaa5-5324-49ef-bee7-c70abf729b34/host Dec 5 05:05:12 localhost dnsmasq-dhcp[309149]: read /var/lib/neutron/dhcp/161caaa5-5324-49ef-bee7-c70abf729b34/opts Dec 5 05:05:12 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:12.869 261902 INFO neutron.agent.dhcp.agent [None req-5e3b8da9-b74f-46b2-8f2a-fac734cab794 - - - - - -] DHCP configuration for ports {'0ad314da-2d37-4cca-8876-32133933a08d'} is completed#033[00m Dec 5 05:05:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v116: 177 pgs: 177 active+clean; 430 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 8.8 MiB/s rd, 15 MiB/s wr, 514 op/s Dec 5 05:05:13 localhost nova_compute[280228]: 2025-12-05 10:05:13.287 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:05:13 localhost nova_compute[280228]: 2025-12-05 10:05:13.287 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:05:13 localhost nova_compute[280228]: 2025-12-05 10:05:13.288 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:05:13 localhost nova_compute[280228]: 2025-12-05 10:05:13.288 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:05:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:05:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:05:13 localhost podman[309151]: 2025-12-05 10:05:13.466636486 +0000 UTC m=+0.089479824 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:05:13 localhost podman[309151]: 2025-12-05 10:05:13.473541367 +0000 UTC m=+0.096384685 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', 
'--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:05:13 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:05:13 localhost nova_compute[280228]: 2025-12-05 10:05:13.598 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:13 localhost podman[309150]: 2025-12-05 10:05:13.610766043 +0000 UTC m=+0.236643924 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 5 05:05:13 localhost nova_compute[280228]: 2025-12-05 10:05:13.613 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:13 localhost podman[309150]: 2025-12-05 10:05:13.636581794 +0000 UTC m=+0.262459695 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible) Dec 5 05:05:13 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.063 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.082 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.082 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.083 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.118 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.118 280232 DEBUG 
oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.119 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.120 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.120 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:05:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:05:14 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/414429719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.578 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.670 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.671 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.676 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.677 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
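
[editor's note] The resource-tracker audit above shells out to "ceph df --format=json" (a 0.458 s round-trip here) and feeds the result into the hypervisor resource view. A minimal stand-alone version of that probe; the command line is copied from the log, but the JSON field names (stats, total_bytes, total_avail_bytes) are the usual Ceph layout and are an assumption here, not something the log shows:

import json
import subprocess

# Same invocation the compute service logs above.
CMD = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

out = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
df = json.loads(out)
stats = df["stats"]  # cluster-wide totals; per-pool data sits in df["pools"]
free_gib = stats["total_avail_bytes"] / 1024**3
total_gib = stats["total_bytes"] / 1024**3
print(f"cluster: {free_gib:.1f} GiB free of {total_gib:.1f} GiB")
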
Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.924 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.926 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11093MB free_disk=41.36555099487305GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.926 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:14 localhost nova_compute[280228]: 2025-12-05 10:05:14.927 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.047 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in
placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.048 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.048 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.049 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1152MB phys_disk=41GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:05:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:05:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e100 do_prune osdmap full prune enabled Dec 5 05:05:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e101 e101: 6 total, 6 up, 6 in Dec 5 05:05:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 177 active+clean; 430 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 3.8 MiB/s rd, 5.6 MiB/s wr, 412 op/s Dec 5 05:05:15 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e101: 6 total, 6 up, 6 in Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.129 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:05:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:05:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:05:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:05:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:05:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:05:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:05:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:05:15 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/259079381' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.532 280232 DEBUG nova.virt.libvirt.driver [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Creating tmpfile /var/lib/nova/instances/tmp6k6e96wr to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.537 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.545 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.550 280232 DEBUG nova.compute.manager [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp6k6e96wr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.578 280232 DEBUG oslo_concurrency.lockutils [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.578 280232 DEBUG oslo_concurrency.lockutils [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.874 280232 INFO nova.compute.rpcapi [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.875 280232 DEBUG oslo_concurrency.lockutils [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 
3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.882 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.917 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:05:15 localhost nova_compute[280228]: 2025-12-05 10:05:15.918 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:16 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:16.060 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:05:15Z, description=, device_id=3f77c451-9a02-424e-8c8b-32735ce06005, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=525bec01-24e3-4487-b818-d56cd05e80cf, ip_allocation=immediate, mac_address=fa:16:3e:8d:d4:ec, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:05:08Z, description=, dns_domain=, id=161caaa5-5324-49ef-bee7-c70abf729b34, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsTestJSON-755204920-network, port_security_enabled=True, project_id=70b1a86dcef64e6ebbe96bd40de85c61, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50512, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=566, status=ACTIVE, subnets=['2d71e97e-fcd5-400f-84e2-7b3f8bafad67'], tags=[], tenant_id=70b1a86dcef64e6ebbe96bd40de85c61, updated_at=2025-12-05T10:05:09Z, vlan_transparent=None, network_id=161caaa5-5324-49ef-bee7-c70abf729b34, port_security_enabled=False, project_id=70b1a86dcef64e6ebbe96bd40de85c61, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=596, status=DOWN, tags=[], tenant_id=70b1a86dcef64e6ebbe96bd40de85c61, updated_at=2025-12-05T10:05:15Z on network 161caaa5-5324-49ef-bee7-c70abf729b34#033[00m Dec 5 05:05:16 localhost dnsmasq[309149]: read 
/var/lib/neutron/dhcp/161caaa5-5324-49ef-bee7-c70abf729b34/addn_hosts - 1 addresses Dec 5 05:05:16 localhost podman[309259]: 2025-12-05 10:05:16.280398393 +0000 UTC m=+0.051030935 container kill 3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-161caaa5-5324-49ef-bee7-c70abf729b34, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 5 05:05:16 localhost dnsmasq-dhcp[309149]: read /var/lib/neutron/dhcp/161caaa5-5324-49ef-bee7-c70abf729b34/host Dec 5 05:05:16 localhost dnsmasq-dhcp[309149]: read /var/lib/neutron/dhcp/161caaa5-5324-49ef-bee7-c70abf729b34/opts Dec 5 05:05:16 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:16.555 261902 INFO neutron.agent.dhcp.agent [None req-08838382-0aa4-457f-ae9a-0d9f9e6cf967 - - - - - -] DHCP configuration for ports {'525bec01-24e3-4487-b818-d56cd05e80cf'} is completed#033[00m Dec 5 05:05:16 localhost nova_compute[280228]: 2025-12-05 10:05:16.664 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:16 localhost nova_compute[280228]: 2025-12-05 10:05:16.762 280232 DEBUG nova.compute.manager [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp6k6e96wr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d26c2a55-022b-4361-bccb-625a3a43e6f8',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m Dec 5 05:05:16 localhost nova_compute[280228]: 2025-12-05 10:05:16.836 280232 DEBUG oslo_concurrency.lockutils [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Acquiring lock "refresh_cache-d26c2a55-022b-4361-bccb-625a3a43e6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:05:16 localhost nova_compute[280228]: 2025-12-05 10:05:16.836 280232 DEBUG oslo_concurrency.lockutils [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Acquired lock "refresh_cache-d26c2a55-022b-4361-bccb-625a3a43e6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:05:16 localhost nova_compute[280228]: 2025-12-05 10:05:16.837 280232 DEBUG nova.network.neutron [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: 
d26c2a55-022b-4361-bccb-625a3a43e6f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 5 05:05:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v119: 177 pgs: 177 active+clean; 351 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 6.9 MiB/s rd, 4.3 MiB/s wr, 489 op/s Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.507 280232 DEBUG nova.network.neutron [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Updating instance_info_cache with network_info: [{"id": "63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6", "address": "fa:16:3e:57:e9:2f", "network": {"id": "b18d5894-62f7-4f8f-a24c-429b8805e981", "bridge": "br-int", "label": "tempest-LiveMigrationTest-410917755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a9b8ae2ff8fc42959dc64d209d5490df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a3a8bc-c4", "ovs_interfaceid": "63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.536 280232 DEBUG oslo_concurrency.lockutils [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Releasing lock "refresh_cache-d26c2a55-022b-4361-bccb-625a3a43e6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.540 280232 DEBUG nova.virt.libvirt.driver [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp6k6e96wr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d26c2a55-022b-4361-bccb-625a3a43e6f8',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.542 280232 DEBUG nova.virt.libvirt.driver [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 
3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Creating instance directory: /var/lib/nova/instances/d26c2a55-022b-4361-bccb-625a3a43e6f8 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.542 280232 DEBUG nova.virt.libvirt.driver [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Ensure instance console log exists: /var/lib/nova/instances/d26c2a55-022b-4361-bccb-625a3a43e6f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.543 280232 DEBUG nova.virt.libvirt.driver [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.546 280232 DEBUG nova.virt.libvirt.vif [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T10:05:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-755398326',display_name='tempest-LiveMigrationTest-server-755398326',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005546421.localdomain',hostname='tempest-livemigrationtest-server-755398326',id=9,image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-05T10:05:10Z,launched_on='np0005546421.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005546421.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a9b8ae2ff8fc42959dc64d209d5490df',ramdisk_id='',reservation_id='r-k68tro11',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1653272297',owner_user_name='tempest-LiveMigrationTest-1653272297-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-12-05T10:05:10Z,user_data=None,user_id='7dbd84753cc34311a16b
a30887be4b38',uuid=d26c2a55-022b-4361-bccb-625a3a43e6f8,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6", "address": "fa:16:3e:57:e9:2f", "network": {"id": "b18d5894-62f7-4f8f-a24c-429b8805e981", "bridge": "br-int", "label": "tempest-LiveMigrationTest-410917755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a9b8ae2ff8fc42959dc64d209d5490df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap63a3a8bc-c4", "ovs_interfaceid": "63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.547 280232 DEBUG nova.network.os_vif_util [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Converting VIF {"id": "63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6", "address": "fa:16:3e:57:e9:2f", "network": {"id": "b18d5894-62f7-4f8f-a24c-429b8805e981", "bridge": "br-int", "label": "tempest-LiveMigrationTest-410917755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a9b8ae2ff8fc42959dc64d209d5490df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap63a3a8bc-c4", "ovs_interfaceid": "63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.548 280232 DEBUG nova.network.os_vif_util [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:e9:2f,bridge_name='br-int',has_traffic_filtering=True,id=63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6,network=Network(b18d5894-62f7-4f8f-a24c-429b8805e981),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap63a3a8bc-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.549 280232 DEBUG os_vif [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Plugging 
vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:e9:2f,bridge_name='br-int',has_traffic_filtering=True,id=63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6,network=Network(b18d5894-62f7-4f8f-a24c-429b8805e981),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap63a3a8bc-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.550 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.551 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.552 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.558 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.558 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63a3a8bc-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.559 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63a3a8bc-c4, col_values=(('external_ids', {'iface-id': '63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:e9:2f', 'vm-uuid': 'd26c2a55-022b-4361-bccb-625a3a43e6f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.596 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.601 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.605 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.606 280232 INFO os_vif [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:e9:2f,bridge_name='br-int',has_traffic_filtering=True,id=63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6,network=Network(b18d5894-62f7-4f8f-a24c-429b8805e981),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap63a3a8bc-c4')#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.608 280232 DEBUG nova.virt.libvirt.driver [None 
req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.608 280232 DEBUG nova.compute.manager [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp6k6e96wr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d26c2a55-022b-4361-bccb-625a3a43e6f8',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m Dec 5 05:05:17 localhost nova_compute[280228]: 2025-12-05 10:05:17.879 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:18 localhost nova_compute[280228]: 2025-12-05 10:05:18.343 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:05:18 localhost nova_compute[280228]: 2025-12-05 10:05:18.344 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:05:18 localhost nova_compute[280228]: 2025-12-05 10:05:18.369 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:05:18 localhost nova_compute[280228]: 2025-12-05 10:05:18.370 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:05:18 localhost nova_compute[280228]: 2025-12-05 10:05:18.371 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:05:18 localhost nova_compute[280228]: 2025-12-05 10:05:18.371 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:05:18 
localhost nova_compute[280228]: 2025-12-05 10:05:18.372 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:05:18 localhost nova_compute[280228]: 2025-12-05 10:05:18.509 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:05:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 351 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 6.1 MiB/s rd, 3.8 MiB/s wr, 429 op/s Dec 5 05:05:19 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:19.569 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:05:15Z, description=, device_id=3f77c451-9a02-424e-8c8b-32735ce06005, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=525bec01-24e3-4487-b818-d56cd05e80cf, ip_allocation=immediate, mac_address=fa:16:3e:8d:d4:ec, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:05:08Z, description=, dns_domain=, id=161caaa5-5324-49ef-bee7-c70abf729b34, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsTestJSON-755204920-network, port_security_enabled=True, project_id=70b1a86dcef64e6ebbe96bd40de85c61, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50512, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=566, status=ACTIVE, subnets=['2d71e97e-fcd5-400f-84e2-7b3f8bafad67'], tags=[], tenant_id=70b1a86dcef64e6ebbe96bd40de85c61, updated_at=2025-12-05T10:05:09Z, vlan_transparent=None, network_id=161caaa5-5324-49ef-bee7-c70abf729b34, port_security_enabled=False, project_id=70b1a86dcef64e6ebbe96bd40de85c61, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=596, status=DOWN, tags=[], tenant_id=70b1a86dcef64e6ebbe96bd40de85c61, updated_at=2025-12-05T10:05:15Z on network 161caaa5-5324-49ef-bee7-c70abf729b34#033[00m Dec 5 05:05:19 localhost nova_compute[280228]: 2025-12-05 10:05:19.775 280232 DEBUG nova.network.neutron [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Port 63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 updated with migration profile {'migrating_to': 'np0005546419.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m Dec 5 05:05:19 localhost nova_compute[280228]: 2025-12-05 10:05:19.778 280232 DEBUG nova.compute.manager [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] pre_live_migration result data is 
LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp6k6e96wr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d26c2a55-022b-4361-bccb-625a3a43e6f8',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m Dec 5 05:05:19 localhost dnsmasq[309149]: read /var/lib/neutron/dhcp/161caaa5-5324-49ef-bee7-c70abf729b34/addn_hosts - 1 addresses Dec 5 05:05:19 localhost dnsmasq-dhcp[309149]: read /var/lib/neutron/dhcp/161caaa5-5324-49ef-bee7-c70abf729b34/host Dec 5 05:05:19 localhost dnsmasq-dhcp[309149]: read /var/lib/neutron/dhcp/161caaa5-5324-49ef-bee7-c70abf729b34/opts Dec 5 05:05:19 localhost podman[309297]: 2025-12-05 10:05:19.794117312 +0000 UTC m=+0.065206679 container kill 3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-161caaa5-5324-49ef-bee7-c70abf729b34, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 05:05:19 localhost podman[239519]: time="2025-12-05T10:05:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:05:19 localhost podman[239519]: @ - - [05/Dec/2025:10:05:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 162767 "" "Go-http-client/1.1" Dec 5 05:05:19 localhost podman[239519]: @ - - [05/Dec/2025:10:05:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 21172 "" "Go-http-client/1.1" Dec 5 05:05:20 localhost sshd[309318]: main: sshd: ssh-rsa algorithm is disabled Dec 5 05:05:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:05:20 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:20.136 261902 INFO neutron.agent.dhcp.agent [None req-30e9199f-bda0-48c2-b6ac-91f0f5d70004 - - - - - -] DHCP configuration for ports {'525bec01-24e3-4487-b818-d56cd05e80cf'} is completed#033[00m Dec 5 05:05:20 localhost systemd-logind[760]: New session 73 of user nova. Dec 5 05:05:20 localhost systemd[1]: Created slice User Slice of UID 42436. Dec 5 05:05:20 localhost systemd[1]: Starting User Runtime Directory /run/user/42436... Dec 5 05:05:20 localhost systemd[1]: Finished User Runtime Directory /run/user/42436. Dec 5 05:05:20 localhost systemd[1]: Starting User Manager for UID 42436... Dec 5 05:05:20 localhost systemd[309322]: Queued start job for default target Main User Target. Dec 5 05:05:20 localhost systemd[309322]: Created slice User Application Slice. 
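The VIF plug at 10:05:17 above runs as ovsdbapp IDL commands inside single transactions: AddPortCommand attaches the tap device to br-int, and DbSetCommand writes the Neutron identifiers into the Interface row's external_ids (ovn-controller later matches the iface-id when claiming the port). A minimal standalone sketch of the same flow, assuming a local ovsdb-server socket at unix:/run/openvswitch/db.sock:

    # Reproduce the AddPortCommand + DbSetCommand transaction from the log
    # with ovsdbapp directly. Socket path and timeout are assumptions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        # Attach the tap device to the integration bridge.
        txn.add(ovs.add_port('br-int', 'tap63a3a8bc-c4', may_exist=True))
        # Record the Neutron port identity on the Interface row.
        txn.add(ovs.db_set(
            'Interface', 'tap63a3a8bc-c4',
            ('external_ids', {
                'iface-id': '63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:57:e9:2f',
                'vm-uuid': 'd26c2a55-022b-4361-bccb-625a3a43e6f8'})))

With may_exist=True a pre-existing row makes the command a no-op, which is exactly the "Transaction caused no change" message logged for the AddBridgeCommand on br-int above.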
Dec 5 05:05:20 localhost systemd[309322]: Started Mark boot as successful after the user session has run 2 minutes. Dec 5 05:05:20 localhost systemd[309322]: Started Daily Cleanup of User's Temporary Directories. Dec 5 05:05:20 localhost systemd[309322]: Reached target Paths. Dec 5 05:05:20 localhost systemd[309322]: Reached target Timers. Dec 5 05:05:20 localhost systemd[309322]: Starting D-Bus User Message Bus Socket... Dec 5 05:05:20 localhost systemd[309322]: Starting Create User's Volatile Files and Directories... Dec 5 05:05:20 localhost systemd[309322]: Finished Create User's Volatile Files and Directories. Dec 5 05:05:20 localhost systemd[309322]: Listening on D-Bus User Message Bus Socket. Dec 5 05:05:20 localhost systemd[309322]: Reached target Sockets. Dec 5 05:05:20 localhost systemd[309322]: Reached target Basic System. Dec 5 05:05:20 localhost systemd[309322]: Reached target Main User Target. Dec 5 05:05:20 localhost systemd[309322]: Startup finished in 157ms. Dec 5 05:05:20 localhost systemd[1]: Started User Manager for UID 42436. Dec 5 05:05:20 localhost systemd[1]: Started Session 73 of User nova. Dec 5 05:05:20 localhost kernel: device tap63a3a8bc-c4 entered promiscuous mode Dec 5 05:05:20 localhost NetworkManager[5960]: [1764929120.6653] manager: (tap63a3a8bc-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/26) Dec 5 05:05:20 localhost ovn_controller[153000]: 2025-12-05T10:05:20Z|00109|binding|INFO|Claiming lport 63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 for this additional chassis. Dec 5 05:05:20 localhost ovn_controller[153000]: 2025-12-05T10:05:20Z|00110|binding|INFO|63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6: Claiming fa:16:3e:57:e9:2f 10.100.0.11 Dec 5 05:05:20 localhost ovn_controller[153000]: 2025-12-05T10:05:20Z|00111|binding|INFO|Claiming lport 63ba056b-fce6-4938-bf24-a421b305533d for this additional chassis. Dec 5 05:05:20 localhost ovn_controller[153000]: 2025-12-05T10:05:20Z|00112|binding|INFO|63ba056b-fce6-4938-bf24-a421b305533d: Claiming fa:16:3e:c5:be:46 19.80.0.128 Dec 5 05:05:20 localhost nova_compute[280228]: 2025-12-05 10:05:20.671 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:20 localhost systemd-udevd[309352]: Network interface NamePolicy= disabled on kernel command line. Dec 5 05:05:20 localhost NetworkManager[5960]: [1764929120.7015] device (tap63a3a8bc-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 5 05:05:20 localhost NetworkManager[5960]: [1764929120.7021] device (tap63a3a8bc-c4): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 5 05:05:20 localhost systemd-machined[83348]: New machine qemu-4-instance-00000009. 
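At this point ovn-controller claims the logical ports only as an "additional chassis": during live migration both source and destination hold the binding until QEMU resumes on the destination, after which the "Claiming ... for this chassis" and "Setting lport ... up in Southbound" entries at 10:05:23 below complete the handover. A hedged sketch of watching that handover from Python with ovsdbapp's OVN_Southbound schema (the SB socket path is an assumption for a local deployment):

    # Poll the Port_Binding row for the migrating port; 'chassis' and 'up'
    # flip once the destination chassis finishes claiming it.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/ovn/ovnsb_db.sock', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    rows = sb.db_find(
        'Port_Binding',
        ('logical_port', '=', '63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6'),
    ).execute(check_error=True)
    for row in rows:
        # up=[False] with a populated additional_chassis is the in-flight
        # state; up=[True] with chassis set marks the completed claim.
        print(row['up'], row['chassis'], row['additional_chassis'])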
Dec 5 05:05:20 localhost nova_compute[280228]: 2025-12-05 10:05:20.705 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:20 localhost ovn_controller[153000]: 2025-12-05T10:05:20Z|00113|binding|INFO|Setting lport 63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 ovn-installed in OVS Dec 5 05:05:20 localhost nova_compute[280228]: 2025-12-05 10:05:20.711 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:20 localhost nova_compute[280228]: 2025-12-05 10:05:20.712 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:20 localhost systemd[1]: Started Virtual Machine qemu-4-instance-00000009. Dec 5 05:05:20 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:20.723 2 INFO neutron.agent.securitygroups_rpc [None req-3b08e34e-be59-42c6-8b30-fbeb7d168f39 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Security group member updated ['d4162554-7d79-4103-bc2a-c014e86c3743']#033[00m Dec 5 05:05:20 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:20.992 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:05:20 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:20.993 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:05:20 localhost nova_compute[280228]: 2025-12-05 10:05:20.998 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:21 localhost nova_compute[280228]: 2025-12-05 10:05:21.103 280232 DEBUG nova.virt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 5 05:05:21 localhost nova_compute[280228]: 2025-12-05 10:05:21.104 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] VM Started (Lifecycle Event)#033[00m Dec 5 05:05:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v121: 177 pgs: 177 active+clean; 351 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 194 op/s Dec 5 05:05:21 localhost nova_compute[280228]: 2025-12-05 10:05:21.125 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 05:05:21 localhost nova_compute[280228]: 2025-12-05 10:05:21.649 280232 DEBUG nova.virt.driver [None 
req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 5 05:05:21 localhost nova_compute[280228]: 2025-12-05 10:05:21.652 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] VM Resumed (Lifecycle Event)#033[00m Dec 5 05:05:21 localhost nova_compute[280228]: 2025-12-05 10:05:21.669 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:21 localhost nova_compute[280228]: 2025-12-05 10:05:21.686 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 05:05:21 localhost nova_compute[280228]: 2025-12-05 10:05:21.689 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 5 05:05:21 localhost nova_compute[280228]: 2025-12-05 10:05:21.737 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] During the sync_power process the instance has moved from host np0005546421.localdomain to host np0005546419.localdomain#033[00m Dec 5 05:05:21 localhost systemd[1]: session-73.scope: Deactivated successfully. Dec 5 05:05:21 localhost systemd-logind[760]: Session 73 logged out. Waiting for processes to exit. Dec 5 05:05:21 localhost systemd-logind[760]: Removed session 73. Dec 5 05:05:22 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:22.066 2 INFO neutron.agent.securitygroups_rpc [None req-ca74205a-80b5-4c1e-a1c5-b7623a704ab5 21c29f3a56e54486b61ecc72cb35cc3e 1b63f7777dfa40c1bfc42162c9fd676f - - default default] Security group member updated ['d4162554-7d79-4103-bc2a-c014e86c3743']#033[00m Dec 5 05:05:22 localhost nova_compute[280228]: 2025-12-05 10:05:22.629 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v122: 177 pgs: 177 active+clean; 351 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.2 MiB/s rd, 31 KiB/s wr, 144 op/s Dec 5 05:05:23 localhost ovn_controller[153000]: 2025-12-05T10:05:23Z|00114|binding|INFO|Claiming lport 63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 for this chassis. Dec 5 05:05:23 localhost ovn_controller[153000]: 2025-12-05T10:05:23Z|00115|binding|INFO|63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6: Claiming fa:16:3e:57:e9:2f 10.100.0.11 Dec 5 05:05:23 localhost ovn_controller[153000]: 2025-12-05T10:05:23Z|00116|binding|INFO|Claiming lport 63ba056b-fce6-4938-bf24-a421b305533d for this chassis. 
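The "Synchronizing instance power state" entry above shows the inputs to Nova's sync decision: vm_state active, task_state migrating, and matching DB/VM power states (1 = RUNNING), so the handler merely notes that the instance moved hosts rather than forcing a state change. A toy sketch of that decision shape, with illustrative names rather than Nova's actual code:

    RUNNING = 1  # the power_state value 1 reported in the log

    def sync_power_state(vm_state, task_state, db_power, vm_power):
        # While a task such as a live migration is in flight, lifecycle
        # events (Started, Resumed) are expected on the destination, so
        # the sync defers to the migration flow instead of intervening.
        if task_state is not None:
            return 'task %s in progress; defer to migration flow' % task_state
        if db_power != vm_power:
            return 'update DB power state to %s' % vm_power
        return 'states agree; nothing to do'

    print(sync_power_state('active', 'migrating', RUNNING, RUNNING))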
Dec 5 05:05:23 localhost ovn_controller[153000]: 2025-12-05T10:05:23Z|00117|binding|INFO|63ba056b-fce6-4938-bf24-a421b305533d: Claiming fa:16:3e:c5:be:46 19.80.0.128 Dec 5 05:05:23 localhost ovn_controller[153000]: 2025-12-05T10:05:23Z|00118|binding|INFO|Setting lport 63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 up in Southbound Dec 5 05:05:23 localhost ovn_controller[153000]: 2025-12-05T10:05:23Z|00119|binding|INFO|Setting lport 63ba056b-fce6-4938-bf24-a421b305533d up in Southbound Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.209 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:e9:2f 10.100.0.11'], port_security=['fa:16:3e:57:e9:2f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1901059839', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd26c2a55-022b-4361-bccb-625a3a43e6f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b18d5894-62f7-4f8f-a24c-429b8805e981', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1901059839', 'neutron:project_id': 'a9b8ae2ff8fc42959dc64d209d5490df', 'neutron:revision_number': '10', 'neutron:security_group_ids': '8c9500c3-6ac9-452e-a652-72bddc07be6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546421.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2976951b-7527-4195-bee8-6ac5e692e095, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.217 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:be:46 19.80.0.128'], port_security=['fa:16:3e:c5:be:46 19.80.0.128'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-832547800', 'neutron:cidrs': '19.80.0.128/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e15d083f-7984-4879-a88e-c9228d36c3fe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-832547800', 'neutron:project_id': 'a9b8ae2ff8fc42959dc64d209d5490df', 'neutron:revision_number': '3', 'neutron:security_group_ids': '8c9500c3-6ac9-452e-a652-72bddc07be6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=4521a7c1-80de-477c-9559-e5b4faa7b6fd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=63ba056b-fce6-4938-bf24-a421b305533d) old=Port_Binding(up=[False], 
additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.219 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 in datapath b18d5894-62f7-4f8f-a24c-429b8805e981 bound to our chassis#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.224 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2d4c9e21-89e4-41d8-b63a-56cc3b856e34 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.225 158820 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b18d5894-62f7-4f8f-a24c-429b8805e981#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.238 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2ffff3-6aed-42b7-9dda-e041b44db9b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.240 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb18d5894-61 in ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.242 158926 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb18d5894-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.242 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[055837e3-8cd3-4791-819c-dafdae53e5a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.244 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[e77c3244-07d7-4415-9fe3-5f9a6d0de9b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.257 158957 DEBUG oslo.privsep.daemon [-] privsep: reply[21fb5e09-ab3f-4d54-946d-2cbfb04dba94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.273 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[faf9fbad-dd20-41e1-aaac-207c47e59488]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.306 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2d74ac-d84a-4387-a406-36d660b18951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost NetworkManager[5960]: [1764929123.3154] manager: (tapb18d5894-60): new Veth device (/org/freedesktop/NetworkManager/Devices/27) Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.314 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[df14049e-63a1-401e-8d33-c9add4237ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:05:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.349 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2e2a30-7ca2-4714-a81c-0a94144cb19b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.353 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[d434b615-2534-4248-9d31-193205fc1d68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:23.357 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-59a5be77-4ce7-41c3-91ac-b1875e4407bd req-3ed3568d-7b4e-4a38-b5d4-9ae83ebe3681 1789cb75c8f44c539966d084bb392cb1 d98f9ffaeb7346169078ece01c85312d - - default default] This port is not SRIOV, skip binding for port 63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6.#033[00m Dec 5 05:05:23 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapb18d5894-61: link becomes ready Dec 5 05:05:23 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapb18d5894-60: link becomes ready Dec 5 05:05:23 localhost NetworkManager[5960]: [1764929123.3908] device (tapb18d5894-60): carrier: link connected Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.396 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[177bbb06-31f0-4fe9-929c-4aa4e00ef27e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.419 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee2c279-0f60-4902-961c-beb5374e0aff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb18d5894-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:c2:20:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 
'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1223849, 'reachable_time': 36440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309452, 'error': None, 'target': 'ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.432 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[429f2ccf-9ccc-4e55-a190-a7d89dc4404b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec2:2015'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1223849, 'tstamp': 1223849}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309453, 'error': None, 'target': 
'ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost systemd[1]: tmp-crun.FBUDd4.mount: Deactivated successfully. Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.455 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[553475ef-2e4d-473f-9a44-cde2f7b292b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb18d5894-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:c2:20:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1223849, 'reachable_time': 36440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 
'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309457, 'error': None, 'target': 'ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost podman[309429]: 2025-12-05 10:05:23.457529571 +0000 UTC m=+0.098385327 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Dec 5 05:05:23 localhost nova_compute[280228]: 2025-12-05 10:05:23.482 280232 INFO nova.compute.manager [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 
9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Post operation of migration started#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.527 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[df297922-d864-4e44-93c0-29b529714a13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost podman[309429]: 2025-12-05 10:05:23.530648191 +0000 UTC m=+0.171503977 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 5 05:05:23 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:05:23 localhost podman[309430]: 2025-12-05 10:05:23.545827147 +0000 UTC m=+0.182433073 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, distribution-scope=public) Dec 5 05:05:23 localhost podman[309430]: 2025-12-05 10:05:23.560562489 +0000 UTC m=+0.197168435 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal) Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.570 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[46bc1c73-2300-4646-9b21-82b6e6e9a9b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
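The metadata agent's provision_datapath steps above ("Creating VETH tapb18d5894-61 in ovnmeta-... namespace", followed by the RTM_NEWLINK/RTM_NEWADDR privsep replies) amount to creating a veth pair and moving one end into the network namespace. A rough equivalent with pyroute2, bypassing Neutron's privsep wrappers; idempotency checks and error handling are mostly omitted:

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981'
    try:
        netns.create(NS)
    except OSError:
        pass  # Neutron would have created the namespace earlier

    with IPRoute() as ip:
        # Create the pair: the -60 end stays in the root namespace, the
        # -61 end serves the metadata proxy inside the namespace.
        ip.link('add', ifname='tapb18d5894-60', kind='veth',
                peer='tapb18d5894-61')
        peer = ip.link_lookup(ifname='tapb18d5894-61')[0]
        ip.link('set', index=peer, net_ns_fd=NS)
        host = ip.link_lookup(ifname='tapb18d5894-60')[0]
        ip.link('set', index=host, state='up')

The host-side end is then wired into br-int with the same DelPortCommand/AddPortCommand plus DbSetCommand pattern, as the ovsdbapp transactions immediately below show.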
Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.572 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb18d5894-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.572 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.573 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb18d5894-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:23 localhost nova_compute[280228]: 2025-12-05 10:05:23.574 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:23 localhost kernel: device tapb18d5894-60 entered promiscuous mode Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.577 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb18d5894-60, col_values=(('external_ids', {'iface-id': 'e51b6bce-ae77-4f5a-a75d-40a16f97a07f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:23 localhost ovn_controller[153000]: 2025-12-05T10:05:23Z|00120|binding|INFO|Releasing lport e51b6bce-ae77-4f5a-a75d-40a16f97a07f from this chassis (sb_readonly=0) Dec 5 05:05:23 localhost nova_compute[280228]: 2025-12-05 10:05:23.578 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:23 localhost nova_compute[280228]: 2025-12-05 10:05:23.584 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:23 localhost nova_compute[280228]: 2025-12-05 10:05:23.585 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.586 158820 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b18d5894-62f7-4f8f-a24c-429b8805e981.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b18d5894-62f7-4f8f-a24c-429b8805e981.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.587 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4e5b2b-89f4-430e-a187-5a6d02d83393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.588 158820 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: global Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: log /dev/log local0 debug Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: log-tag haproxy-metadata-proxy-b18d5894-62f7-4f8f-a24c-429b8805e981 Dec 5 05:05:23 
localhost ovn_metadata_agent[158815]: user root Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: group root Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: maxconn 1024 Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: pidfile /var/lib/neutron/external/pids/b18d5894-62f7-4f8f-a24c-429b8805e981.pid.haproxy Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: daemon Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: defaults Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: log global Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: mode http Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: option httplog Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: option dontlognull Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: option http-server-close Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: option forwardfor Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: retries 3 Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: timeout http-request 30s Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: timeout connect 30s Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: timeout client 32s Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: timeout server 32s Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: timeout http-keep-alive 30s Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: listen listener Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: bind 169.254.169.254:80 Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: server metadata /var/lib/neutron/metadata_proxy Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: http-request add-header X-OVN-Network-ID b18d5894-62f7-4f8f-a24c-429b8805e981 Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 5 05:05:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:23.589 158820 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981', 'env', 'PROCESS_TAG=haproxy-b18d5894-62f7-4f8f-a24c-429b8805e981', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b18d5894-62f7-4f8f-a24c-429b8805e981.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 5 05:05:23 localhost nova_compute[280228]: 2025-12-05 10:05:23.677 280232 DEBUG oslo_concurrency.lockutils [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Acquiring lock "refresh_cache-d26c2a55-022b-4361-bccb-625a3a43e6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:05:23 localhost nova_compute[280228]: 2025-12-05 10:05:23.677 280232 DEBUG oslo_concurrency.lockutils [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Acquired lock "refresh_cache-d26c2a55-022b-4361-bccb-625a3a43e6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:05:23 localhost nova_compute[280228]: 2025-12-05 10:05:23.678 280232 DEBUG nova.network.neutron [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: 
d26c2a55-022b-4361-bccb-625a3a43e6f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 5 05:05:24 localhost podman[309502]: Dec 5 05:05:24 localhost podman[309502]: 2025-12-05 10:05:24.016839383 +0000 UTC m=+0.099552843 container create 8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:05:24 localhost systemd[1]: Started libpod-conmon-8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa.scope. Dec 5 05:05:24 localhost podman[309502]: 2025-12-05 10:05:23.963078774 +0000 UTC m=+0.045792244 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 5 05:05:24 localhost systemd[1]: Started libcrun container. Dec 5 05:05:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6e3c7f90713f3ba445257e8e40e4328f2d7493768216eb3a393829e82592983/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:05:24 localhost podman[309502]: 2025-12-05 10:05:24.08559602 +0000 UTC m=+0.168309490 container init 8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 05:05:24 localhost podman[309502]: 2025-12-05 10:05:24.096069761 +0000 UTC m=+0.178783241 container start 8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:05:24 localhost neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981[309517]: [NOTICE] (309521) : New worker (309523) forked Dec 5 05:05:24 localhost neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981[309517]: [NOTICE] (309521) : Loading success. 
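
The DelPortCommand/AddPortCommand/DbSetCommand transactions above are the metadata agent replugging its tap interface from br-ex into br-int and stamping the interface with the OVN iface-id. A minimal sketch of the same three calls through the ovsdbapp IDL API (the socket path and helper name are assumptions, not taken from the log):

    # Sketch, not the agent's code: replay the logged DelPort/AddPort/DbSet
    # sequence with ovsdbapp. Socket path and function name are illustrative.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVS_DB = 'unix:/run/openvswitch/db.sock'  # assumed local ovsdb-server socket
    idl = connection.OvsdbIdl.from_server(OVS_DB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    def plug_metadata_port(port, iface_id):
        # "Transaction caused no change" above is del_port hitting an
        # interface that is not on br-ex (if_exists=True makes it a no-op).
        api.del_port(port, bridge='br-ex', if_exists=True).execute(check_error=True)
        api.add_port('br-int', port, may_exist=True).execute(check_error=True)
        # ovn-controller notices the iface-id and (re)binds the logical
        # port, which is the binding|INFO line after the DbSetCommand.
        api.db_set('Interface', port,
                   ('external_ids', {'iface-id': iface_id})).execute(check_error=True)

    plug_metadata_port('tapb18d5894-60', 'e51b6bce-ae77-4f5a-a75d-40a16f97a07f')
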
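The haproxy_cfg dump and the "Running command" entry above show the other half of provisioning: the agent renders that config to /var/lib/neutron/ovn-metadata-proxy/ and execs haproxy inside the ovnmeta- namespace through neutron-rootwrap. A reduced sketch, using plain subprocess instead of rootwrap and a template abridged from the logged config:

    # Sketch of the render-and-spawn step for the metadata proxy above.
    import subprocess

    TEMPLATE = """global
        log /dev/log local0 debug
        log-tag haproxy-metadata-proxy-{network_id}
        user root
        group root
        maxconn 1024
        pidfile {pidfile}
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID {network_id}
    """

    def spawn_metadata_proxy(network_id):
        cfg = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id
        pidfile = '/var/lib/neutron/external/pids/%s.pid.haproxy' % network_id
        with open(cfg, 'w') as f:
            f.write(TEMPLATE.format(network_id=network_id, pidfile=pidfile))
        # Mirrors the logged command line, minus sudo/neutron-rootwrap.
        subprocess.check_call([
            'ip', 'netns', 'exec', 'ovnmeta-%s' % network_id,
            'env', 'PROCESS_TAG=haproxy-%s' % network_id,
            'haproxy', '-f', cfg])

    spawn_metadata_proxy('b18d5894-62f7-4f8f-a24c-429b8805e981')

On this host the haproxy binary actually starts inside a podman container (the libpod-conmon/libcrun entries above), so its [NOTICE] worker-forked lines arrive under the neutron-haproxy-ovnmeta-* name.
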
Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.154 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 63ba056b-fce6-4938-bf24-a421b305533d in datapath e15d083f-7984-4879-a88e-c9228d36c3fe unbound from our chassis#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.159 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port 008bc10f-c5db-46c3-b293-b2868fee9576 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.160 158820 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e15d083f-7984-4879-a88e-c9228d36c3fe#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.172 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[a452fe83-c9d0-4e1b-849e-0ad0c418b17a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.173 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape15d083f-71 in ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.175 158926 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape15d083f-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.175 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[566b2b64-be82-41c5-8d21-9044706e9b55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.176 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[41b4bac2-4e0d-49c7-bd94-973e28f383c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.184 158957 DEBUG oslo.privsep.daemon [-] privsep: reply[37bece7b-4a77-40d4-8e35-94fcd035de64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.198 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd61955-4bc8-443c-8b29-24c4a87c8780]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.223 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3fb285-3373-4f43-8bf8-38bb03d3b0ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost NetworkManager[5960]: <info>  [1764929124.2351] manager: (tape15d083f-70): new Veth device (/org/freedesktop/NetworkManager/Devices/28) Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.231 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[f9408d80-c781-4a78-bcda-bef11e45b935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.295 158937 DEBUG oslo.privsep.daemon [-] privsep:
reply[64a3ed7d-f0fc-4efc-a939-2dc47a63e857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.298 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[3991956c-30e0-46cc-9f99-b297c6fa86ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tape15d083f-70: link becomes ready Dec 5 05:05:24 localhost NetworkManager[5960]: <info>  [1764929124.3174] device (tape15d083f-70): carrier: link connected Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.321 158937 DEBUG oslo.privsep.daemon [-] privsep: reply[1edfaebc-b0d1-480d-901b-ba3bdc19cf3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.339 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[44313837-8133-4cfc-a2a3-e8ca63fcc9a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape15d083f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:37:e1:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0,
'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1223941, 'reachable_time': 18691, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309542, 'error': None, 'target': 'ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.357 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[b894f443-d7d6-4a7d-b470-34e53bc89d56]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe37:e146'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1223941, 'tstamp': 1223941}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309543, 'error': None, 'target': 'ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.373 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[4b32f63c-8db0-4671-9f34-90ae32df84ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape15d083f-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 
1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:37:e1:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1223941, 'reachable_time': 18691, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 
'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309544, 'error': None, 'target': 'ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.404 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[6c42d7d7-6e45-406b-be28-aa77badb9b38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.455 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[40016065-61a9-4798-a36d-b310f22271d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.457 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape15d083f-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.458 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.459 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape15d083f-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:24 localhost nova_compute[280228]: 2025-12-05 10:05:24.461 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:24 localhost kernel: device tape15d083f-70 entered promiscuous mode Dec 5 05:05:24 localhost nova_compute[280228]: 2025-12-05 10:05:24.464 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.469 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape15d083f-70, col_values=(('external_ids', {'iface-id': '3128431f-a8bb-4804-8e0c-9de8f8a5235e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:24 localhost nova_compute[280228]: 2025-12-05 10:05:24.471 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:24 localhost ovn_controller[153000]: 2025-12-05T10:05:24Z|00121|binding|INFO|Releasing lport 3128431f-a8bb-4804-8e0c-9de8f8a5235e from this chassis (sb_readonly=0) Dec 5 05:05:24 localhost nova_compute[280228]: 2025-12-05 10:05:24.472 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 
2025-12-05 10:05:24.475 158820 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e15d083f-7984-4879-a88e-c9228d36c3fe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e15d083f-7984-4879-a88e-c9228d36c3fe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.477 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1df8ff-3b1f-494c-a243-6da0640cb3b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:24 localhost nova_compute[280228]: 2025-12-05 10:05:24.478 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.478 158820 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: global Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: log /dev/log local0 debug Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: log-tag haproxy-metadata-proxy-e15d083f-7984-4879-a88e-c9228d36c3fe Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: user root Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: group root Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: maxconn 1024 Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: pidfile /var/lib/neutron/external/pids/e15d083f-7984-4879-a88e-c9228d36c3fe.pid.haproxy Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: daemon Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: defaults Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: log global Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: mode http Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: option httplog Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: option dontlognull Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: option http-server-close Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: option forwardfor Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: retries 3 Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: timeout http-request 30s Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: timeout connect 30s Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: timeout client 32s Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: timeout server 32s Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: timeout http-keep-alive 30s Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: listen listener Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: bind 169.254.169.254:80 Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: server metadata /var/lib/neutron/metadata_proxy Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: http-request add-header X-OVN-Network-ID e15d083f-7984-4879-a88e-c9228d36c3fe Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 5 05:05:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:24.483 158820 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe', 'env', 
'PROCESS_TAG=haproxy-e15d083f-7984-4879-a88e-c9228d36c3fe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e15d083f-7984-4879-a88e-c9228d36c3fe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 5 05:05:24 localhost nova_compute[280228]: 2025-12-05 10:05:24.516 280232 DEBUG nova.network.neutron [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Updating instance_info_cache with network_info: [{"id": "63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6", "address": "fa:16:3e:57:e9:2f", "network": {"id": "b18d5894-62f7-4f8f-a24c-429b8805e981", "bridge": "br-int", "label": "tempest-LiveMigrationTest-410917755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a9b8ae2ff8fc42959dc64d209d5490df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a3a8bc-c4", "ovs_interfaceid": "63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:05:24 localhost nova_compute[280228]: 2025-12-05 10:05:24.536 280232 DEBUG oslo_concurrency.lockutils [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Releasing lock "refresh_cache-d26c2a55-022b-4361-bccb-625a3a43e6f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:05:24 localhost nova_compute[280228]: 2025-12-05 10:05:24.551 280232 DEBUG oslo_concurrency.lockutils [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:24 localhost nova_compute[280228]: 2025-12-05 10:05:24.551 280232 DEBUG oslo_concurrency.lockutils [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:24 localhost nova_compute[280228]: 2025-12-05 10:05:24.552 280232 DEBUG oslo_concurrency.lockutils [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 
05:05:24 localhost nova_compute[280228]: 2025-12-05 10:05:24.557 280232 INFO nova.virt.libvirt.driver [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m Dec 5 05:05:24 localhost journal[202456]: Domain id=4 name='instance-00000009' uuid=d26c2a55-022b-4361-bccb-625a3a43e6f8 is tainted: custom-monitor Dec 5 05:05:24 localhost podman[309576]: Dec 5 05:05:24 localhost podman[309576]: 2025-12-05 10:05:24.92739394 +0000 UTC m=+0.102095071 container create 46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:05:24 localhost podman[309576]: 2025-12-05 10:05:24.877661345 +0000 UTC m=+0.052362506 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 5 05:05:24 localhost systemd[1]: Started libpod-conmon-46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99.scope. Dec 5 05:05:25 localhost systemd[1]: Started libcrun container. Dec 5 05:05:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e6e5e177687d9cb4a2aaf30c6fd134583336a53612c4aa87d606c0e82346fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:05:25 localhost podman[309576]: 2025-12-05 10:05:25.041171276 +0000 UTC m=+0.215872377 container init 46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:05:25 localhost podman[309576]: 2025-12-05 10:05:25.051469762 +0000 UTC m=+0.226170863 container start 46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 5 05:05:25 localhost neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe[309589]: [NOTICE] (309593) : New worker (309595) forked Dec 5 05:05:25 localhost neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe[309589]: [NOTICE] (309593) : Loading success. 
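
The "Creating VETH tape15d083f-71 in ovnmeta-..." entry and the RTM_NEWLINK/RTM_NEWADDR payloads above are netlink operations performed through the privsep daemon (the privsep: reply[...] lines). Roughly the same steps with pyroute2, the library behind neutron.privileged.agent.linux.ip_lib; interface and namespace names follow the log, everything else is an assumption:

    # Sketch: veth pair with one end in the ovnmeta namespace, then read
    # back the attributes that appear in the RTM_NEWLINK dump above.
    from pyroute2 import IPRoute, NetNS, netns

    NS = 'ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe'
    if NS not in netns.listnetns():
        netns.create(NS)

    with IPRoute() as ipr:
        ipr.link('add', ifname='tape15d083f-70', kind='veth',
                 peer={'ifname': 'tape15d083f-71', 'net_ns_fd': NS})
        idx = ipr.link_lookup(ifname='tape15d083f-70')[0]
        ipr.link('set', index=idx, state='up')

    with NetNS(NS) as ns:
        for msg in ns.get_links():
            # e.g. tape15d083f-71 fa:16:3e:37:e1:46 UP 1500
            print(msg.get_attr('IFLA_IFNAME'), msg.get_attr('IFLA_ADDRESS'),
                  msg.get_attr('IFLA_OPERSTATE'), msg.get_attr('IFLA_MTU'))
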
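Nova's "Sending announce-self command to QEMU monitor. Attempt 1 of 3" entry above (attempts 2 and 3 follow below), together with libvirt's "is tainted: custom-monitor" mark, corresponds to a raw QMP command that makes the freshly migrated guest re-announce its MAC addresses on the destination. A sketch with the libvirt Python bindings; the connection URI is an assumption:

    # Sketch of the announce-self retry loop. Sending a raw monitor command
    # is what taints the domain as custom-monitor in the libvirt log above.
    import libvirt
    import libvirt_qemu

    conn = libvirt.open('qemu:///system')  # assumed URI
    dom = conn.lookupByUUIDString('d26c2a55-022b-4361-bccb-625a3a43e6f8')
    for attempt in range(1, 4):  # "Attempt 1 of 3" .. "Attempt 3 of 3"
        libvirt_qemu.qemuMonitorCommand(
            dom, '{"execute": "announce-self"}',
            libvirt_qemu.VIR_DOMAIN_QEMU_MONITOR_COMMAND_DEFAULT)
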
Dec 5 05:05:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v123: 177 pgs: 177 active+clean; 351 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.2 MiB/s rd, 31 KiB/s wr, 144 op/s Dec 5 05:05:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:05:25 localhost nova_compute[280228]: 2025-12-05 10:05:25.565 280232 INFO nova.virt.libvirt.driver [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m Dec 5 05:05:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:25.997 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.020 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:26 localhost ovn_controller[153000]: 2025-12-05T10:05:26Z|00122|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:05:26 localhost ovn_controller[153000]: 2025-12-05T10:05:26Z|00123|binding|INFO|Releasing lport 8dc3951d-9e6b-4dd7-9953-6042801ec206 from this chassis (sb_readonly=0) Dec 5 05:05:26 localhost ovn_controller[153000]: 2025-12-05T10:05:26Z|00124|binding|INFO|Releasing lport 3128431f-a8bb-4804-8e0c-9de8f8a5235e from this chassis (sb_readonly=0) Dec 5 05:05:26 localhost ovn_controller[153000]: 2025-12-05T10:05:26Z|00125|binding|INFO|Releasing lport e51b6bce-ae77-4f5a-a75d-40a16f97a07f from this chassis (sb_readonly=0) Dec 5 05:05:26 localhost ovn_controller[153000]: 2025-12-05T10:05:26Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:e9:2f 10.100.0.11 Dec 5 05:05:26 localhost ovn_controller[153000]: 2025-12-05T10:05:26Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:e9:2f 10.100.0.11 Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.393 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.572 280232 INFO nova.virt.libvirt.driver [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Sending announce-self command to QEMU monitor. 
Attempt 3 of 3#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.578 280232 DEBUG nova.compute.manager [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.600 280232 DEBUG nova.objects.instance [None req-59a5be77-4ce7-41c3-91ac-b1875e4407bd 3ebfd48d17714e29bb9d74731f7e5b38 9cf693f68af84b1c914eb01a35a9f072 - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.673 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.795 280232 DEBUG nova.compute.manager [req-78225fb3-d0a7-47b9-b165-864ed3953c87 req-b88c560b-e307-44fc-8d5b-a3817c16791c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Received event network-vif-plugged-63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.796 280232 DEBUG oslo_concurrency.lockutils [req-78225fb3-d0a7-47b9-b165-864ed3953c87 req-b88c560b-e307-44fc-8d5b-a3817c16791c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.796 280232 DEBUG oslo_concurrency.lockutils [req-78225fb3-d0a7-47b9-b165-864ed3953c87 req-b88c560b-e307-44fc-8d5b-a3817c16791c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.797 280232 DEBUG oslo_concurrency.lockutils [req-78225fb3-d0a7-47b9-b165-864ed3953c87 req-b88c560b-e307-44fc-8d5b-a3817c16791c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.797 280232 DEBUG nova.compute.manager [req-78225fb3-d0a7-47b9-b165-864ed3953c87 req-b88c560b-e307-44fc-8d5b-a3817c16791c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] No waiting events found dispatching network-vif-plugged-63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 pop_instance_event
/usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.798 280232 WARNING nova.compute.manager [req-78225fb3-d0a7-47b9-b165-864ed3953c87 req-b88c560b-e307-44fc-8d5b-a3817c16791c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Received unexpected event network-vif-plugged-63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 for instance with vm_state active and task_state None.#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.798 280232 DEBUG nova.compute.manager [req-78225fb3-d0a7-47b9-b165-864ed3953c87 req-b88c560b-e307-44fc-8d5b-a3817c16791c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Received event network-vif-plugged-63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.799 280232 DEBUG oslo_concurrency.lockutils [req-78225fb3-d0a7-47b9-b165-864ed3953c87 req-b88c560b-e307-44fc-8d5b-a3817c16791c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.799 280232 DEBUG oslo_concurrency.lockutils [req-78225fb3-d0a7-47b9-b165-864ed3953c87 req-b88c560b-e307-44fc-8d5b-a3817c16791c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.800 280232 DEBUG oslo_concurrency.lockutils [req-78225fb3-d0a7-47b9-b165-864ed3953c87 req-b88c560b-e307-44fc-8d5b-a3817c16791c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.800 280232 DEBUG nova.compute.manager [req-78225fb3-d0a7-47b9-b165-864ed3953c87 req-b88c560b-e307-44fc-8d5b-a3817c16791c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] No waiting events found dispatching network-vif-plugged-63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 5 05:05:26 localhost nova_compute[280228]: 2025-12-05 10:05:26.801 280232 WARNING nova.compute.manager [req-78225fb3-d0a7-47b9-b165-864ed3953c87 req-b88c560b-e307-44fc-8d5b-a3817c16791c c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Received unexpected event network-vif-plugged-63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 for instance with vm_state active and task_state None.#033[00m Dec 5 05:05:27 localhost
ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 177 active+clean; 372 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 2.1 MiB/s wr, 205 op/s Dec 5 05:05:27 localhost openstack_network_exporter[241668]: ERROR 10:05:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:05:27 localhost openstack_network_exporter[241668]: ERROR 10:05:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:05:27 localhost openstack_network_exporter[241668]: ERROR 10:05:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:05:27 localhost openstack_network_exporter[241668]: ERROR 10:05:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:05:27 localhost openstack_network_exporter[241668]: Dec 5 05:05:27 localhost openstack_network_exporter[241668]: ERROR 10:05:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:05:27 localhost openstack_network_exporter[241668]: Dec 5 05:05:27 localhost nova_compute[280228]: 2025-12-05 10:05:27.679 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v125: 177 pgs: 177 active+clean; 372 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 758 KiB/s rd, 2.0 MiB/s wr, 89 op/s Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.158 280232 DEBUG oslo_concurrency.lockutils [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Acquiring lock "d26c2a55-022b-4361-bccb-625a3a43e6f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.158 280232 DEBUG oslo_concurrency.lockutils [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Lock "d26c2a55-022b-4361-bccb-625a3a43e6f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.159 280232 DEBUG oslo_concurrency.lockutils [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Acquiring lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.159 280232 DEBUG oslo_concurrency.lockutils [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.159 280232 DEBUG
oslo_concurrency.lockutils [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.161 280232 INFO nova.compute.manager [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Terminating instance#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.163 280232 DEBUG nova.compute.manager [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Dec 5 05:05:29 localhost kernel: device tap63a3a8bc-c4 left promiscuous mode Dec 5 05:05:29 localhost NetworkManager[5960]: <info>  [1764929129.2391] device (tap63a3a8bc-c4): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.243 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:29 localhost ovn_controller[153000]: 2025-12-05T10:05:29Z|00126|binding|INFO|Releasing lport 63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 from this chassis (sb_readonly=0) Dec 5 05:05:29 localhost ovn_controller[153000]: 2025-12-05T10:05:29Z|00127|binding|INFO|Setting lport 63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 down in Southbound Dec 5 05:05:29 localhost ovn_controller[153000]: 2025-12-05T10:05:29Z|00128|binding|INFO|Releasing lport 63ba056b-fce6-4938-bf24-a421b305533d from this chassis (sb_readonly=0) Dec 5 05:05:29 localhost ovn_controller[153000]: 2025-12-05T10:05:29Z|00129|binding|INFO|Setting lport 63ba056b-fce6-4938-bf24-a421b305533d down in Southbound Dec 5 05:05:29 localhost ovn_controller[153000]: 2025-12-05T10:05:29Z|00130|binding|INFO|Removing iface tap63a3a8bc-c4 ovn-installed in OVS Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.247 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.255 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:e9:2f 10.100.0.11'], port_security=['fa:16:3e:57:e9:2f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1901059839', 'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd26c2a55-022b-4361-bccb-625a3a43e6f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b18d5894-62f7-4f8f-a24c-429b8805e981', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1901059839',
'neutron:project_id': 'a9b8ae2ff8fc42959dc64d209d5490df', 'neutron:revision_number': '12', 'neutron:security_group_ids': '8c9500c3-6ac9-452e-a652-72bddc07be6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2976951b-7527-4195-bee8-6ac5e692e095, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.258 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:be:46 19.80.0.128'], port_security=['fa:16:3e:c5:be:46 19.80.0.128'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-832547800', 'neutron:cidrs': '19.80.0.128/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e15d083f-7984-4879-a88e-c9228d36c3fe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-832547800', 'neutron:project_id': 'a9b8ae2ff8fc42959dc64d209d5490df', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8c9500c3-6ac9-452e-a652-72bddc07be6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=4521a7c1-80de-477c-9559-e5b4faa7b6fd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=63ba056b-fce6-4938-bf24-a421b305533d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.260 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 in datapath b18d5894-62f7-4f8f-a24c-429b8805e981 unbound from our chassis#033[00m Dec 5 05:05:29 localhost ovn_controller[153000]: 2025-12-05T10:05:29Z|00131|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:05:29 localhost ovn_controller[153000]: 2025-12-05T10:05:29Z|00132|binding|INFO|Releasing lport 8dc3951d-9e6b-4dd7-9953-6042801ec206 from this chassis (sb_readonly=0) Dec 5 05:05:29 localhost ovn_controller[153000]: 2025-12-05T10:05:29Z|00133|binding|INFO|Releasing lport 3128431f-a8bb-4804-8e0c-9de8f8a5235e from this chassis (sb_readonly=0) Dec 5 05:05:29 localhost ovn_controller[153000]: 2025-12-05T10:05:29Z|00134|binding|INFO|Releasing lport e51b6bce-ae77-4f5a-a75d-40a16f97a07f from this chassis (sb_readonly=0) Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.265 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2d4c9e21-89e4-41d8-b63a-56cc3b856e34 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 5 05:05:29 localhost 
ovn_metadata_agent[158815]: 2025-12-05 10:05:29.265 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b18d5894-62f7-4f8f-a24c-429b8805e981, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.266 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[12ed5809-2d0b-4c6b-a6ab-4d57ddea4369]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.269 158820 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981 namespace which is not needed anymore#033[00m Dec 5 05:05:29 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Deactivated successfully. Dec 5 05:05:29 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Consumed 5.162s CPU time. Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.288 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:29 localhost systemd-machined[83348]: Machine qemu-4-instance-00000009 terminated. Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.295 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.403 280232 INFO nova.virt.libvirt.driver [-] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Instance destroyed successfully.#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.404 280232 DEBUG nova.objects.instance [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Lazy-loading 'resources' on Instance uuid d26c2a55-022b-4361-bccb-625a3a43e6f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.424 280232 DEBUG nova.virt.libvirt.vif [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-05T10:05:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-755398326',display_name='tempest-LiveMigrationTest-server-755398326',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005546419.localdomain',hostname='tempest-livemigrationtest-server-755398326',id=9,image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T10:05:10Z,launched_on='np0005546421.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='np0005546419.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9b8ae2ff8fc42959dc64d209d5490df',ramdisk_id='',reservation_id='r-k68tro11',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1653272297',owner_user_name='tempest-LiveMigrationTest-1653272297-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T10:05:26Z,user_data=None,user_id='7dbd84753cc34311a16ba30887be4b38',uuid=d26c2a55-022b-4361-bccb-625a3a43e6f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6", "address": "fa:16:3e:57:e9:2f", "network": {"id": "b18d5894-62f7-4f8f-a24c-429b8805e981", "bridge": "br-int", "label": "tempest-LiveMigrationTest-410917755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a9b8ae2ff8fc42959dc64d209d5490df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a3a8bc-c4", "ovs_interfaceid": "63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.426 280232 DEBUG nova.network.os_vif_util [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Converting VIF {"id":
"63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6", "address": "fa:16:3e:57:e9:2f", "network": {"id": "b18d5894-62f7-4f8f-a24c-429b8805e981", "bridge": "br-int", "label": "tempest-LiveMigrationTest-410917755-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a9b8ae2ff8fc42959dc64d209d5490df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a3a8bc-c4", "ovs_interfaceid": "63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.427 280232 DEBUG nova.network.os_vif_util [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:e9:2f,bridge_name='br-int',has_traffic_filtering=True,id=63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6,network=Network(b18d5894-62f7-4f8f-a24c-429b8805e981),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap63a3a8bc-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.428 280232 DEBUG os_vif [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:e9:2f,bridge_name='br-int',has_traffic_filtering=True,id=63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6,network=Network(b18d5894-62f7-4f8f-a24c-429b8805e981),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap63a3a8bc-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.433 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.434 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63a3a8bc-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.436 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.438 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.441 280232 INFO os_vif [None req-f0b0d867-692c-4688-b571-b7d088e9113a 
7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:e9:2f,bridge_name='br-int',has_traffic_filtering=True,id=63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6,network=Network(b18d5894-62f7-4f8f-a24c-429b8805e981),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap63a3a8bc-c4')#033[00m Dec 5 05:05:29 localhost neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981[309517]: [NOTICE] (309521) : haproxy version is 2.8.14-c23fe91 Dec 5 05:05:29 localhost neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981[309517]: [NOTICE] (309521) : path to executable is /usr/sbin/haproxy Dec 5 05:05:29 localhost neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981[309517]: [WARNING] (309521) : Exiting Master process... Dec 5 05:05:29 localhost neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981[309517]: [ALERT] (309521) : Current worker (309523) exited with code 143 (Terminated) Dec 5 05:05:29 localhost neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981[309517]: [WARNING] (309521) : All workers exited. Exiting... (0) Dec 5 05:05:29 localhost systemd[1]: libpod-8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa.scope: Deactivated successfully. Dec 5 05:05:29 localhost podman[309632]: 2025-12-05 10:05:29.474524731 +0000 UTC m=+0.082193009 container died 8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 5 05:05:29 localhost systemd[1]: tmp-crun.lWOyLF.mount: Deactivated successfully. 
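The haproxy worker exit code 143 above is the conventional 128+signal encoding: 143 - 128 = 15, i.e. SIGTERM, so the metadata proxy worker was deliberately stopped along with its ovnmeta- namespace rather than crashing. A minimal Python sketch of that decoding (the helper name describe_exit is illustrative and not part of any service in this log):

    import signal

    def describe_exit(code: int) -> str:
        # Shell convention: codes above 128 mean "killed by signal code-128",
        # which is how haproxy reports its worker exiting with 143 (SIGTERM).
        if code > 128:
            return f"terminated by {signal.Signals(code - 128).name}"
        return f"exited normally with status {code}"

    print(describe_exit(143))  # -> terminated by SIGTERM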
Dec 5 05:05:29 localhost podman[309632]: 2025-12-05 10:05:29.541123133 +0000 UTC m=+0.148791361 container cleanup 8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:05:29 localhost podman[309668]: 2025-12-05 10:05:29.565084967 +0000 UTC m=+0.087011588 container cleanup 8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2) Dec 5 05:05:29 localhost systemd[1]: libpod-conmon-8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa.scope: Deactivated successfully. Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.593 280232 DEBUG nova.compute.manager [req-44081a8b-9942-4745-b7ad-eeb234d32a92 req-1f0941cd-f1af-49bd-9d31-41293fa1bd5e c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Received event network-vif-unplugged-63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.593 280232 DEBUG oslo_concurrency.lockutils [req-44081a8b-9942-4745-b7ad-eeb234d32a92 req-1f0941cd-f1af-49bd-9d31-41293fa1bd5e c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.594 280232 DEBUG oslo_concurrency.lockutils [req-44081a8b-9942-4745-b7ad-eeb234d32a92 req-1f0941cd-f1af-49bd-9d31-41293fa1bd5e c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.594 280232 DEBUG oslo_concurrency.lockutils [req-44081a8b-9942-4745-b7ad-eeb234d32a92 req-1f0941cd-f1af-49bd-9d31-41293fa1bd5e c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 
2025-12-05 10:05:29.595 280232 DEBUG nova.compute.manager [req-44081a8b-9942-4745-b7ad-eeb234d32a92 req-1f0941cd-f1af-49bd-9d31-41293fa1bd5e c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] No waiting events found dispatching network-vif-unplugged-63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.595 280232 DEBUG nova.compute.manager [req-44081a8b-9942-4745-b7ad-eeb234d32a92 req-1f0941cd-f1af-49bd-9d31-41293fa1bd5e c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Received event network-vif-unplugged-63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Dec 5 05:05:29 localhost podman[309689]: 2025-12-05 10:05:29.625850419 +0000 UTC m=+0.064109976 container remove 8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.631 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[62c47c31-7509-4757-8ad8-e2c420ab683a]: (4, ('Fri Dec 5 10:05:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981 (8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa)\n8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa\nFri Dec 5 10:05:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981 (8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa)\n8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.633 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[a90d4018-3d0a-4cb9-894d-62518bf686ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.635 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb18d5894-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.638 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:29 localhost kernel: device tapb18d5894-60 left promiscuous mode Dec 5 05:05:29 localhost nova_compute[280228]: 2025-12-05 10:05:29.645 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:29 localhost 
ovn_metadata_agent[158815]: 2025-12-05 10:05:29.649 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[266eacd9-97ba-4339-8666-04b9bfd7c795]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.664 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[c7483ad4-210d-4d03-81e5-66ab19be95ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.667 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[06a44c71-a991-456e-99e0-cd0b75be6cc4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.680 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[b7301ad1-9832-4f96-9412-901e1274d2a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1223840, 'reachable_time': 35656, 
'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309707, 'error': None, 'target': 'ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.683 158957 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b18d5894-62f7-4f8f-a24c-429b8805e981 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.683 158957 DEBUG oslo.privsep.daemon [-] privsep: reply[9d7754b1-542b-4ddc-948c-529e0a85c22f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.684 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 63ba056b-fce6-4938-bf24-a421b305533d in datapath e15d083f-7984-4879-a88e-c9228d36c3fe unbound from our chassis#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.688 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port 008bc10f-c5db-46c3-b293-b2868fee9576 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.689 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e15d083f-7984-4879-a88e-c9228d36c3fe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.690 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[a12b6a3b-9f35-487c-b24a-df92a240889b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:29.691 158820 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe namespace which is not needed anymore#033[00m Dec 5 05:05:29 localhost neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe[309589]: [NOTICE] (309593) : haproxy version is 2.8.14-c23fe91 Dec 5 05:05:29 localhost neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe[309589]: [NOTICE] (309593) : path to executable is /usr/sbin/haproxy Dec 5 05:05:29 localhost neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe[309589]: [WARNING] (309593) : Exiting Master process... Dec 5 05:05:29 localhost neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe[309589]: [ALERT] (309593) : Current worker (309595) exited with code 143 (Terminated) Dec 5 05:05:29 localhost neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe[309589]: [WARNING] (309593) : All workers exited. Exiting... (0) Dec 5 05:05:29 localhost systemd[1]: libpod-46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99.scope: Deactivated successfully. 
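The remove_netns call above is neutron's privileged ip_lib deleting the ovnmeta-<network-uuid> namespace once its last VIF is gone, and the long RTM_NEWLINK dump a few entries earlier is pyroute2's netlink view of that namespace's loopback device, fetched just before deletion. A rough pyroute2 sketch of the same kind of inspection, assuming it runs as root on this compute node (illustrative only, not the agent's code):

    from pyroute2 import NetNS, netns

    # List surviving OVN metadata namespaces and their interfaces.
    for ns_name in netns.listnetns():
        if not ns_name.startswith('ovnmeta-'):
            continue
        with NetNS(ns_name) as ns:
            # Each link message carries IFLA_* attributes like the dump
            # logged above (IFLA_IFNAME, IFLA_MTU, IFLA_STATS64, ...).
            names = [link.get_attr('IFLA_IFNAME') for link in ns.get_links()]
        print(ns_name, names)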
Dec 5 05:05:29 localhost podman[309725]: 2025-12-05 10:05:29.87387322 +0000 UTC m=+0.064557279 container died 46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:05:29 localhost podman[309725]: 2025-12-05 10:05:29.917093115 +0000 UTC m=+0.107777114 container cleanup 46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 5 05:05:29 localhost podman[309739]: 2025-12-05 10:05:29.960351431 +0000 UTC m=+0.081793538 container cleanup 46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 5 05:05:29 localhost systemd[1]: libpod-conmon-46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99.scope: Deactivated successfully. 
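Every "oslo.privsep.daemon [-] privsep: reply[...]" entry in this log is the agent's privileged helper daemon answering one remote call: the unprivileged agent process sends the request over a socket, and the daemon, started with only the Linux capabilities it needs, runs the decorated function and ships back the reply tuple logged above. A simplified sketch of how such an entrypoint is declared with oslo.privsep (the context name and function here are illustrative, not neutron's actual definitions):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Declare a privsep context: functions decorated with its entrypoint
    # run in a separate daemon holding only these capabilities.
    example = priv_context.PrivContext(
        __name__,
        cfg_section='example_privsep',
        pypath=__name__ + '.example',
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @example.entrypoint
    def remove_netns(name):
        from pyroute2 import netns
        netns.remove(name)   # executes inside the privileged daemon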
Dec 5 05:05:30 localhost podman[309753]: 2025-12-05 10:05:30.02458614 +0000 UTC m=+0.085099649 container remove 46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 5 05:05:30 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:30.030 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bdf2da-d6b0-4638-88d6-8f8c1306d354]: (4, ('Fri Dec 5 10:05:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe (46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99)\n46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99\nFri Dec 5 10:05:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe (46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99)\n46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:30 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:30.032 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[2455a4e3-a5d8-4550-aedd-9967d0daa902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:30 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:30.033 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape15d083f-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:30 localhost nova_compute[280228]: 2025-12-05 10:05:30.065 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:30 localhost kernel: device tape15d083f-70 left promiscuous mode Dec 5 05:05:30 localhost nova_compute[280228]: 2025-12-05 10:05:30.076 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:30 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:30.081 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[168b3dad-796d-463a-87e0-51d9b9a7b529]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:30 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:30.096 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[f62392ae-1b7e-4d58-a866-e024d5ad7f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:30 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:30.097 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7713c8-ad03-476a-8664-666ef12a0abd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:30 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:30.115 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[9a020592-ba3d-472b-91c8-3c6b9f5567bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': 
[['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1223932, 'reachable_time': 19755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 
0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309771, 'error': None, 'target': 'ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:30 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:30.118 158957 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e15d083f-7984-4879-a88e-c9228d36c3fe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 5 05:05:30 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:30.118 158957 DEBUG oslo.privsep.daemon [-] privsep: reply[d160e6a4-f43d-4c89-8a42-3e7bea83cf84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:05:30 localhost nova_compute[280228]: 2025-12-05 10:05:30.172 280232 INFO nova.virt.libvirt.driver [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Deleting instance files /var/lib/nova/instances/d26c2a55-022b-4361-bccb-625a3a43e6f8_del#033[00m Dec 5 05:05:30 localhost nova_compute[280228]: 2025-12-05 10:05:30.173 280232 INFO nova.virt.libvirt.driver [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Deletion of /var/lib/nova/instances/d26c2a55-022b-4361-bccb-625a3a43e6f8_del complete#033[00m Dec 5 05:05:30 localhost nova_compute[280228]: 2025-12-05 10:05:30.254 280232 INFO nova.compute.manager [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Took 1.09 seconds to destroy the instance on the hypervisor.#033[00m Dec 5 05:05:30 localhost nova_compute[280228]: 2025-12-05 10:05:30.255 280232 DEBUG oslo.service.loopingcall [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Dec 5 05:05:30 localhost nova_compute[280228]: 2025-12-05 10:05:30.255 280232 DEBUG nova.compute.manager [-] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Dec 5 05:05:30 localhost nova_compute[280228]: 2025-12-05 10:05:30.256 280232 DEBUG nova.network.neutron [-] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Dec 5 05:05:30 localhost systemd[1]: var-lib-containers-storage-overlay-e9e6e5e177687d9cb4a2aaf30c6fd134583336a53612c4aa87d606c0e82346fc-merged.mount: Deactivated successfully. Dec 5 05:05:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46ca20da48ecba4ac309351bf93a88ec3c73bb5fd5b921f4933595b46f00ea99-userdata-shm.mount: Deactivated successfully. Dec 5 05:05:30 localhost systemd[1]: run-netns-ovnmeta\x2de15d083f\x2d7984\x2d4879\x2da88e\x2dc9228d36c3fe.mount: Deactivated successfully. Dec 5 05:05:30 localhost systemd[1]: var-lib-containers-storage-overlay-f6e3c7f90713f3ba445257e8e40e4328f2d7493768216eb3a393829e82592983-merged.mount: Deactivated successfully. Dec 5 05:05:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c2713c3c8f6307fa298f586cbfcc5293622fcdf3534905c1dabcf3bf3d79dfa-userdata-shm.mount: Deactivated successfully. Dec 5 05:05:30 localhost systemd[1]: run-netns-ovnmeta\x2db18d5894\x2d62f7\x2d4f8f\x2da24c\x2d429b8805e981.mount: Deactivated successfully. Dec 5 05:05:31 localhost systemd[1]: tmp-crun.GMacWE.mount: Deactivated successfully. Dec 5 05:05:31 localhost podman[309789]: 2025-12-05 10:05:31.053072911 +0000 UTC m=+0.075368711 container kill 3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-161caaa5-5324-49ef-bee7-c70abf729b34, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 05:05:31 localhost dnsmasq[309149]: read /var/lib/neutron/dhcp/161caaa5-5324-49ef-bee7-c70abf729b34/addn_hosts - 0 addresses Dec 5 05:05:31 localhost dnsmasq-dhcp[309149]: read /var/lib/neutron/dhcp/161caaa5-5324-49ef-bee7-c70abf729b34/host Dec 5 05:05:31 localhost dnsmasq-dhcp[309149]: read /var/lib/neutron/dhcp/161caaa5-5324-49ef-bee7-c70abf729b34/opts Dec 5 05:05:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v126: 177 pgs: 177 active+clean; 342 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 807 KiB/s rd, 2.1 MiB/s wr, 108 op/s Dec 5 05:05:31 localhost ovn_controller[153000]: 2025-12-05T10:05:31Z|00135|binding|INFO|Releasing lport 7bdde4bd-39aa-4d26-ab23-a58e26abd8bc from this chassis (sb_readonly=0) Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.262 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:31 localhost kernel: device tap7bdde4bd-39 left promiscuous mode Dec 5 05:05:31 localhost ovn_controller[153000]: 2025-12-05T10:05:31Z|00136|binding|INFO|Setting lport 
7bdde4bd-39aa-4d26-ab23-a58e26abd8bc down in Southbound Dec 5 05:05:31 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:31.263 261902 INFO neutron.agent.linux.ip_lib [None req-a9561b63-b230-43d5-b778-240fc62da51d - - - - - -] Device tapb62e996e-07 cannot be used as it has no MAC address#033[00m Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:31.270 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-161caaa5-5324-49ef-bee7-c70abf729b34', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-161caaa5-5324-49ef-bee7-c70abf729b34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '70b1a86dcef64e6ebbe96bd40de85c61', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c15b2407-0871-4635-90fa-4e4e349e8ccd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7bdde4bd-39aa-4d26-ab23-a58e26abd8bc) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:31.271 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 7bdde4bd-39aa-4d26-ab23-a58e26abd8bc in datapath 161caaa5-5324-49ef-bee7-c70abf729b34 unbound from our chassis#033[00m Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:31.273 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 161caaa5-5324-49ef-bee7-c70abf729b34, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:31.274 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[6f2114f5-8de4-437d-b812-ff0e45b7dab8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.283 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.292 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:31 localhost kernel: device tapb62e996e-07 entered promiscuous mode Dec 5 05:05:31 localhost NetworkManager[5960]: [1764929131.2988] manager: (tapb62e996e-07): new Generic device (/org/freedesktop/NetworkManager/Devices/29) Dec 5 05:05:31 localhost systemd-udevd[309608]: Network interface NamePolicy= disabled on kernel command line. 
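The "Matched UPDATE: PortBindingUpdatedEvent" entry above is ovsdbapp's event layer at work: the metadata agent registers row events against the OVN Southbound Port_Binding table, and each matched update (here, a chassis being cleared as the port goes down) drives namespace provisioning or teardown. A simplified handler sketch, modeled on the registration visible in the log but not neutron's actual implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire on updates to Port_Binding rows (sketch)."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the repr logged above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # row.chassis / row.up tell the agent whether the port was
            # just bound to or unbound from this chassis.
            print('Port_Binding changed:', row.logical_port)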
Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.300 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:31 localhost ovn_controller[153000]: 2025-12-05T10:05:31Z|00137|binding|INFO|Claiming lport b62e996e-07b4-4ca0-8bed-ea9631c1cb21 for this chassis. Dec 5 05:05:31 localhost ovn_controller[153000]: 2025-12-05T10:05:31Z|00138|binding|INFO|b62e996e-07b4-4ca0-8bed-ea9631c1cb21: Claiming unknown Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:31.311 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-6889f115-bc05-4b19-b48d-aa4fe2a1d137', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6889f115-bc05-4b19-b48d-aa4fe2a1d137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '784b8d7dafc84eb8ac5fe2c56cc5f693', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2361dd1e-7145-43ae-8794-cf25eabfc4ba, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b62e996e-07b4-4ca0-8bed-ea9631c1cb21) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:31.320 158820 INFO neutron.agent.ovn.metadata.agent [-] Port b62e996e-07b4-4ca0-8bed-ea9631c1cb21 in datapath 6889f115-bc05-4b19-b48d-aa4fe2a1d137 bound to our chassis#033[00m Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:31.323 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6889f115-bc05-4b19-b48d-aa4fe2a1d137 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:31.324 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[601ba4b5-6114-4d76-9cc5-ceab9790f32e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:31 localhost ovn_controller[153000]: 2025-12-05T10:05:31Z|00139|binding|INFO|Setting lport b62e996e-07b4-4ca0-8bed-ea9631c1cb21 ovn-installed in OVS Dec 5 05:05:31 localhost ovn_controller[153000]: 2025-12-05T10:05:31Z|00140|binding|INFO|Setting lport b62e996e-07b4-4ca0-8bed-ea9631c1cb21 up in Southbound Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.357 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.395 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:31.413 158921 DEBUG eventlet.wsgi.server [-] (158921) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:31.414 158921 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015 Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: Accept: */*#015 Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: Connection: close#015 Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: Content-Type: text/plain#015 Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: Host: 169.254.169.254#015 Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: User-Agent: curl/7.84.0#015 Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: X-Forwarded-For: 10.100.0.9#015 Dec 5 05:05:31 localhost ovn_metadata_agent[158815]: X-Ovn-Network-Id: 64267419-8c47-450f-9ba4-afc8c103bf71 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.428 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.637 280232 DEBUG nova.compute.manager [req-b5f86078-c7b3-437b-95ea-09398c5e046c req-68014024-57c1-49ec-8935-80f9f1abb2ef c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Received event network-vif-plugged-63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.637 280232 DEBUG oslo_concurrency.lockutils [req-b5f86078-c7b3-437b-95ea-09398c5e046c req-68014024-57c1-49ec-8935-80f9f1abb2ef c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.638 280232 DEBUG oslo_concurrency.lockutils [req-b5f86078-c7b3-437b-95ea-09398c5e046c req-68014024-57c1-49ec-8935-80f9f1abb2ef c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.638 280232 DEBUG oslo_concurrency.lockutils [req-b5f86078-c7b3-437b-95ea-09398c5e046c req-68014024-57c1-49ec-8935-80f9f1abb2ef c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "d26c2a55-022b-4361-bccb-625a3a43e6f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.639 280232 DEBUG nova.compute.manager [req-b5f86078-c7b3-437b-95ea-09398c5e046c req-68014024-57c1-49ec-8935-80f9f1abb2ef 
c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] No waiting events found dispatching network-vif-plugged-63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.639 280232 WARNING nova.compute.manager [req-b5f86078-c7b3-437b-95ea-09398c5e046c req-68014024-57c1-49ec-8935-80f9f1abb2ef c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Received unexpected event network-vif-plugged-63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6 for instance with vm_state active and task_state deleting.#033[00m Dec 5 05:05:31 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:31.666 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:04:51Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6, ip_allocation=immediate, mac_address=fa:16:3e:57:e9:2f, name=tempest-parent-1901059839, network_id=b18d5894-62f7-4f8f-a24c-429b8805e981, port_security_enabled=True, project_id=a9b8ae2ff8fc42959dc64d209d5490df, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=15, security_groups=['8c9500c3-6ac9-452e-a652-72bddc07be6d'], standard_attr_id=496, status=DOWN, tags=[], tenant_id=a9b8ae2ff8fc42959dc64d209d5490df, trunk_details=sub_ports=[], trunk_id=0c047c96-814b-4f70-a17a-8978d479cbd6, updated_at=2025-12-05T10:05:30Z on network b18d5894-62f7-4f8f-a24c-429b8805e981#033[00m Dec 5 05:05:31 localhost nova_compute[280228]: 2025-12-05 10:05:31.676 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:31 localhost systemd[1]: tmp-crun.s32IHL.mount: Deactivated successfully. 
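The "Trigger reload_allocations" entry above is the DHCP agent reacting to the port change; as the lines that follow show, it signals the network's dnsmasq through a podman kill on its container, after which the same dnsmasq PID re-reads its addn_hosts, host and opts files in place, which is consistent with SIGHUP being delivered. dnsmasq reloads exactly those files on SIGHUP without dropping leases, which is what makes a restart unnecessary. A trivial sketch of that signal (illustrative; the agent goes through the container runtime, and the PID is taken from the dnsmasq[307415] entries above):

    import os
    import signal

    def reload_allocations(dnsmasq_pid: int) -> None:
        # dnsmasq re-reads --addn-hosts / --dhcp-hostsfile / --dhcp-optsfile
        # on SIGHUP, keeping existing leases and the running process.
        os.kill(dnsmasq_pid, signal.SIGHUP)

    reload_allocations(307415)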
Dec 5 05:05:31 localhost podman[309865]: 2025-12-05 10:05:31.865506801 +0000 UTC m=+0.049351042 container kill 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 05:05:31 localhost dnsmasq[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/addn_hosts - 2 addresses Dec 5 05:05:31 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/host Dec 5 05:05:31 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/opts Dec 5 05:05:32 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:32.056 261902 INFO neutron.agent.dhcp.agent [None req-640325a3-3992-438b-a734-4dea7aced1f0 - - - - - -] DHCP configuration for ports {'63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6'} is completed#033[00m Dec 5 05:05:32 localhost systemd[1]: Stopping User Manager for UID 42436... Dec 5 05:05:32 localhost systemd[309322]: Activating special unit Exit the Session... Dec 5 05:05:32 localhost systemd[309322]: Stopped target Main User Target. Dec 5 05:05:32 localhost systemd[309322]: Stopped target Basic System. Dec 5 05:05:32 localhost systemd[309322]: Stopped target Paths. Dec 5 05:05:32 localhost systemd[309322]: Stopped target Sockets. Dec 5 05:05:32 localhost systemd[309322]: Stopped target Timers. Dec 5 05:05:32 localhost systemd[309322]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 5 05:05:32 localhost systemd[309322]: Stopped Daily Cleanup of User's Temporary Directories. Dec 5 05:05:32 localhost systemd[309322]: Closed D-Bus User Message Bus Socket. Dec 5 05:05:32 localhost systemd[309322]: Stopped Create User's Volatile Files and Directories. Dec 5 05:05:32 localhost systemd[309322]: Removed slice User Application Slice. Dec 5 05:05:32 localhost systemd[309322]: Reached target Shutdown. Dec 5 05:05:32 localhost systemd[309322]: Finished Exit the Session. Dec 5 05:05:32 localhost systemd[309322]: Reached target Exit the Session. Dec 5 05:05:32 localhost systemd[1]: user@42436.service: Deactivated successfully. Dec 5 05:05:32 localhost systemd[1]: Stopped User Manager for UID 42436. Dec 5 05:05:32 localhost nova_compute[280228]: 2025-12-05 10:05:32.113 280232 DEBUG nova.network.neutron [-] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:05:32 localhost nova_compute[280228]: 2025-12-05 10:05:32.132 280232 INFO nova.compute.manager [-] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Took 1.88 seconds to deallocate network for instance.#033[00m Dec 5 05:05:32 localhost systemd[1]: Stopping User Runtime Directory /run/user/42436... Dec 5 05:05:32 localhost systemd[1]: run-user-42436.mount: Deactivated successfully. Dec 5 05:05:32 localhost systemd[1]: user-runtime-dir@42436.service: Deactivated successfully. Dec 5 05:05:32 localhost systemd[1]: Stopped User Runtime Directory /run/user/42436. 
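The network-vif-plugged / network-vif-unplugged events that nova_compute logs receiving above arrive from neutron through nova's os-server-external-events API; nova matches each against any waiter registered for the instance, which is why an event with no waiter for a deleting instance is merely logged as unexpected. A hedged sketch of such a notification: the endpoint and token below are placeholders (real callers resolve both via Keystone), and only the UUIDs are taken from the log:

    import requests

    NOVA_ENDPOINT = "http://nova-api.internal:8774/v2.1"  # assumed placeholder
    TOKEN = "<keystone-token>"                            # assumed placeholder

    payload = {"events": [{
        "server_uuid": "d26c2a55-022b-4361-bccb-625a3a43e6f8",  # instance
        "name": "network-vif-unplugged",
        "tag": "63a3a8bc-c4ec-40c3-9a18-86f4d447a3c6",          # VIF/port id
        "status": "completed",
    }]}
    resp = requests.post(NOVA_ENDPOINT + "/os-server-external-events",
                         json=payload,
                         headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()  # nova answers with per-event results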
Dec 5 05:05:32 localhost systemd[1]: Removed slice User Slice of UID 42436. Dec 5 05:05:32 localhost nova_compute[280228]: 2025-12-05 10:05:32.191 280232 DEBUG oslo_concurrency.lockutils [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:32 localhost nova_compute[280228]: 2025-12-05 10:05:32.191 280232 DEBUG oslo_concurrency.lockutils [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:32 localhost nova_compute[280228]: 2025-12-05 10:05:32.194 280232 DEBUG oslo_concurrency.lockutils [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:32 localhost nova_compute[280228]: 2025-12-05 10:05:32.234 280232 INFO nova.scheduler.client.report [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Deleted allocations for instance d26c2a55-022b-4361-bccb-625a3a43e6f8#033[00m Dec 5 05:05:32 localhost podman[309913]: Dec 5 05:05:32 localhost podman[309913]: 2025-12-05 10:05:32.37117953 +0000 UTC m=+0.107555547 container create f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889f115-bc05-4b19-b48d-aa4fe2a1d137, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 5 05:05:32 localhost podman[309913]: 2025-12-05 10:05:32.310660315 +0000 UTC m=+0.047036372 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:05:32 localhost systemd[1]: Started libpod-conmon-f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428.scope. Dec 5 05:05:32 localhost systemd[1]: Started libcrun container. 
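The pattern in the resource-tracker entries above (Acquiring lock "compute_resources", acquired after waited 0.001s, released after held 0.002s) comes from oslo.concurrency's lockutils, which wraps named internal locks and logs wait and hold durations around the guarded call. A minimal sketch of both spellings; the function name update_usage mirrors the log's target for illustration only:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        ...  # bookkeeping runs with the named lock held

    # Equivalent inline form:
    with lockutils.lock('compute_resources'):
        ...  # same lock, context-manager style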
Dec 5 05:05:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe3337e7002dc0679805eb9951900ec135fdd2582d7032fc02be99e09f7b8082/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:05:32 localhost podman[309913]: 2025-12-05 10:05:32.453591185 +0000 UTC m=+0.189967222 container init f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889f115-bc05-4b19-b48d-aa4fe2a1d137, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:05:32 localhost podman[309913]: 2025-12-05 10:05:32.462994774 +0000 UTC m=+0.199370821 container start f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889f115-bc05-4b19-b48d-aa4fe2a1d137, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:05:32 localhost dnsmasq[309931]: started, version 2.85 cachesize 150 Dec 5 05:05:32 localhost dnsmasq[309931]: DNS service limited to local subnets Dec 5 05:05:32 localhost dnsmasq[309931]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:05:32 localhost dnsmasq[309931]: warning: no upstream servers configured Dec 5 05:05:32 localhost dnsmasq-dhcp[309931]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 5 05:05:32 localhost dnsmasq[309931]: read /var/lib/neutron/dhcp/6889f115-bc05-4b19-b48d-aa4fe2a1d137/addn_hosts - 0 addresses Dec 5 05:05:32 localhost dnsmasq-dhcp[309931]: read /var/lib/neutron/dhcp/6889f115-bc05-4b19-b48d-aa4fe2a1d137/host Dec 5 05:05:32 localhost dnsmasq-dhcp[309931]: read /var/lib/neutron/dhcp/6889f115-bc05-4b19-b48d-aa4fe2a1d137/opts Dec 5 05:05:32 localhost nova_compute[280228]: 2025-12-05 10:05:32.947 280232 DEBUG oslo_concurrency.lockutils [None req-f0b0d867-692c-4688-b571-b7d088e9113a 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Lock "d26c2a55-022b-4361-bccb-625a3a43e6f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 3.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:33 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:33.000 261902 INFO neutron.agent.dhcp.agent [None req-7d73b9eb-5ecb-4ab6-961e-61170ef7f24c - - - - - -] DHCP configuration for ports {'d8d4bfc9-05dc-49f2-97de-9852741ef666'} is completed#033[00m Dec 5 05:05:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 306 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 882 KiB/s rd, 2.1 MiB/s wr, 138 op/s Dec 5 05:05:33 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:33.276 158921 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 5 05:05:33 localhost haproxy-metadata-proxy-64267419-8c47-450f-9ba4-afc8c103bf71[308732]: 10.100.0.9:46832 [05/Dec/2025:10:05:31.411] listener listener/metadata 0/0/0/1865/1865 200 1657 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1" Dec 5 05:05:33 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:33.277 158921 INFO eventlet.wsgi.server [-] 10.100.0.9, "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200 len: 1673 time: 1.8629303#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.503 280232 DEBUG oslo_concurrency.lockutils [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Acquiring lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.504 280232 DEBUG oslo_concurrency.lockutils [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.505 280232 DEBUG oslo_concurrency.lockutils [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Acquiring lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.506 280232 DEBUG oslo_concurrency.lockutils [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.506 280232 DEBUG oslo_concurrency.lockutils [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.508 280232 INFO nova.compute.manager [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Terminating instance#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.511 280232 DEBUG nova.compute.manager [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Start 
destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Dec 5 05:05:33 localhost kernel: device tap24f19dd4-10 left promiscuous mode Dec 5 05:05:33 localhost NetworkManager[5960]: [1764929133.5807] device (tap24f19dd4-10): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Dec 5 05:05:33 localhost ovn_controller[153000]: 2025-12-05T10:05:33Z|00141|binding|INFO|Releasing lport 24f19dd4-108e-4a77-b44d-59a215801baa from this chassis (sb_readonly=0) Dec 5 05:05:33 localhost ovn_controller[153000]: 2025-12-05T10:05:33Z|00142|binding|INFO|Setting lport 24f19dd4-108e-4a77-b44d-59a215801baa down in Southbound Dec 5 05:05:33 localhost ovn_controller[153000]: 2025-12-05T10:05:33Z|00143|binding|INFO|Removing iface tap24f19dd4-10 ovn-installed in OVS Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.595 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.609 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:33 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:33.604 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:5d:78 10.100.0.9'], port_security=['fa:16:3e:5a:5d:78 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64267419-8c47-450f-9ba4-afc8c103bf71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41095831ac6247b0a5ea030490af998f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '13f09786-c3de-4f80-a431-bd4239c2ee01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9200024c-1bb2-4d9b-96df-67796d72a9e4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=24f19dd4-108e-4a77-b44d-59a215801baa) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:05:33 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:33.606 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 24f19dd4-108e-4a77-b44d-59a215801baa in datapath 64267419-8c47-450f-9ba4-afc8c103bf71 unbound from our chassis#033[00m Dec 5 05:05:33 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:33.611 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64267419-8c47-450f-9ba4-afc8c103bf71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:05:33 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:33.613 158926 DEBUG 
oslo.privsep.daemon [-] privsep: reply[c365901c-609d-4be6-a879-522b8fe35bf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:33 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:33.614 158820 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71 namespace which is not needed anymore#033[00m Dec 5 05:05:33 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Deactivated successfully. Dec 5 05:05:33 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Consumed 15.767s CPU time. Dec 5 05:05:33 localhost systemd-machined[83348]: Machine qemu-3-instance-00000008 terminated. Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.756 280232 INFO nova.virt.libvirt.driver [-] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Instance destroyed successfully.#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.757 280232 DEBUG nova.objects.instance [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lazy-loading 'resources' on Instance uuid fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.773 280232 DEBUG nova.virt.libvirt.vif [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T10:04:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005546419.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=8,image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDzw2WthzVtIu/DY1cGzri7BzSxU7HmCS+BoNdsA+GCtf3cs/xodQfr3DgzYnuyWAEHd/1zmcc7jWm6fN8dTApTgVQsczPN9oKYNm0Ya0z7VFTAoq2drLjEEcX3knjQC2Q==',key_name='tempest-keypair-1076229536',keypairs=,launch_index=0,launched_at=2025-12-05T10:04:55Z,launched_on='np0005546419.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005546419.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='41095831ac6247b0a5ea030490af998f',ramdisk_id='',reservation_id='r-ofrvdd2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='3647d20f-5e09-41b2-a6f3-f320b9e4e343',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-1788105697',owner_user_name='tempest-ServersV294TestFqdnHostnames-1788105697-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2025-12-05T10:04:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7ee4999d08044f63bf075e92f0ca5d11',uuid=fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24f19dd4-108e-4a77-b44d-59a215801baa", "address": "fa:16:3e:5a:5d:78", "network": {"id": "64267419-8c47-450f-9ba4-afc8c103bf71", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-159398337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "41095831ac6247b0a5ea030490af998f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f19dd4-10", "ovs_interfaceid": "24f19dd4-108e-4a77-b44d-59a215801baa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.774 280232 DEBUG nova.network.os_vif_util [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Converting VIF {"id": "24f19dd4-108e-4a77-b44d-59a215801baa", "address": "fa:16:3e:5a:5d:78", "network": {"id": "64267419-8c47-450f-9ba4-afc8c103bf71", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-159398337-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "41095831ac6247b0a5ea030490af998f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24f19dd4-10", "ovs_interfaceid": "24f19dd4-108e-4a77-b44d-59a215801baa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.775 280232 DEBUG nova.network.os_vif_util [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5a:5d:78,bridge_name='br-int',has_traffic_filtering=True,id=24f19dd4-108e-4a77-b44d-59a215801baa,network=Network(64267419-8c47-450f-9ba4-afc8c103bf71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f19dd4-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.775 280232 DEBUG os_vif [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:5d:78,bridge_name='br-int',has_traffic_filtering=True,id=24f19dd4-108e-4a77-b44d-59a215801baa,network=Network(64267419-8c47-450f-9ba4-afc8c103bf71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f19dd4-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.777 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.778 280232 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24f19dd4-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.780 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.781 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:33 localhost nova_compute[280228]: 2025-12-05 10:05:33.784 280232 INFO os_vif [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Successfully unplugged vif 
VIFOpenVSwitch(active=True,address=fa:16:3e:5a:5d:78,bridge_name='br-int',has_traffic_filtering=True,id=24f19dd4-108e-4a77-b44d-59a215801baa,network=Network(64267419-8c47-450f-9ba4-afc8c103bf71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24f19dd4-10')#033[00m Dec 5 05:05:33 localhost systemd[1]: tmp-crun.ZAHiv0.mount: Deactivated successfully. Dec 5 05:05:33 localhost neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71[308724]: [NOTICE] (308729) : haproxy version is 2.8.14-c23fe91 Dec 5 05:05:33 localhost neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71[308724]: [NOTICE] (308729) : path to executable is /usr/sbin/haproxy Dec 5 05:05:33 localhost neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71[308724]: [WARNING] (308729) : Exiting Master process... Dec 5 05:05:33 localhost neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71[308724]: [WARNING] (308729) : Exiting Master process... Dec 5 05:05:33 localhost neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71[308724]: [ALERT] (308729) : Current worker (308732) exited with code 143 (Terminated) Dec 5 05:05:33 localhost neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71[308724]: [WARNING] (308729) : All workers exited. Exiting... (0) Dec 5 05:05:33 localhost systemd[1]: libpod-1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12.scope: Deactivated successfully. Dec 5 05:05:33 localhost podman[309954]: 2025-12-05 10:05:33.837825521 +0000 UTC m=+0.095098996 container died 1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 5 05:05:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12-userdata-shm.mount: Deactivated successfully. Dec 5 05:05:33 localhost systemd[1]: var-lib-containers-storage-overlay-6e6cf32151b7d14e9adf4ec899f780f48f1f0c971aec6cef601c989e0d1c1200-merged.mount: Deactivated successfully. Dec 5 05:05:33 localhost podman[309954]: 2025-12-05 10:05:33.887892826 +0000 UTC m=+0.145166311 container cleanup 1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:05:33 localhost systemd[1]: tmp-crun.zO8PZm.mount: Deactivated successfully. 
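
"Successfully unplugged vif VIFOpenVSwitch(...)" above is the tail of the os-vif unplug path: nova converts its own VIF dict to an os-vif object (the "Converting VIF" / "Converted object" entries) and hands it to the ovs plugin, which removes the tap port from br-int. A trimmed sketch under the assumption that the os-vif ovs plugin is installed and OVS is reachable; the field set is cut down from the full object shown in the log:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the 'ovs' plugin used below via stevedore

    my_vif = vif.VIFOpenVSwitch(
        id='24f19dd4-108e-4a77-b44d-59a215801baa',
        address='fa:16:3e:5a:5d:78',
        bridge_name='br-int',
        vif_name='tap24f19dd4-10')
    info = instance_info.InstanceInfo(
        uuid='fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71',
        name='guest-instance-1')

    # Ends up issuing the same DelPortCommand(port=tap24f19dd4-10,
    # bridge=br-int, if_exists=True) transaction logged by ovsdbapp above.
    os_vif.unplug(my_vif, info)
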
Dec 5 05:05:33 localhost podman[309993]: 2025-12-05 10:05:33.965945428 +0000 UTC m=+0.121935027 container cleanup 1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:05:33 localhost systemd[1]: libpod-conmon-1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12.scope: Deactivated successfully. Dec 5 05:05:33 localhost podman[310011]: 2025-12-05 10:05:33.996455703 +0000 UTC m=+0.090408781 container remove 1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 5 05:05:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:34.001 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[05e11525-88b8-4050-9c1e-d6690bf71e72]: (4, ('Fri Dec 5 10:05:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71 (1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12)\n1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12\nFri Dec 5 10:05:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71 (1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12)\n1ff407f43189ae8124125dabe038a4b158689cd94d9ae4bae072bd7b2b180e12\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:34.003 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[017f63fd-6dec-4319-8f39-884062e10798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:34.004 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64267419-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:05:34 localhost nova_compute[280228]: 2025-12-05 10:05:34.006 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:34 localhost kernel: device tap64267419-80 left promiscuous mode Dec 5 05:05:34 localhost nova_compute[280228]: 2025-12-05 10:05:34.013 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:34.018 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[f85d2447-d725-4051-9c0b-394434b056c5]: (4, True) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:34.034 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[27a2dcf5-0f01-4256-b013-a15da3d3758a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:34.035 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[7a722875-eb2c-4db6-9499-77ab3a7ad60a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:34.046 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[16c50ca4-d698-4da5-8c13-39d72a9cca9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1220990, 'reachable_time': 31346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 
'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310029, 'error': None, 'target': 'ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:34.049 158957 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 5 05:05:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:34.049 158957 DEBUG oslo.privsep.daemon [-] privsep: reply[269d9f43-aec4-42fb-9ea1-6f35c39e6716]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:34 localhost nova_compute[280228]: 2025-12-05 10:05:34.510 280232 INFO nova.virt.libvirt.driver [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Deleting instance files /var/lib/nova/instances/fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_del#033[00m Dec 5 05:05:34 localhost nova_compute[280228]: 2025-12-05 10:05:34.511 280232 INFO nova.virt.libvirt.driver [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Deletion of /var/lib/nova/instances/fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71_del complete#033[00m Dec 5 05:05:34 localhost ovn_controller[153000]: 2025-12-05T10:05:34Z|00144|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:05:34 localhost nova_compute[280228]: 2025-12-05 10:05:34.614 280232 INFO nova.compute.manager [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m Dec 5 05:05:34 localhost nova_compute[280228]: 2025-12-05 10:05:34.615 280232 DEBUG oslo.service.loopingcall [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Dec 5 05:05:34 localhost nova_compute[280228]: 2025-12-05 10:05:34.616 280232 DEBUG nova.compute.manager [-] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Dec 5 05:05:34 localhost nova_compute[280228]: 2025-12-05 10:05:34.616 280232 DEBUG nova.network.neutron [-] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Dec 5 05:05:34 localhost nova_compute[280228]: 2025-12-05 10:05:34.680 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:34 localhost systemd[1]: run-netns-ovnmeta\x2d64267419\x2d8c47\x2d450f\x2d9ba4\x2dafc8c103bf71.mount: Deactivated successfully. 
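
The "privsep: reply[...]" entries are the unprivileged agent reading results back from its privsep daemon; "Namespace ovnmeta-... deleted" is one such privileged call. A minimal sketch of the mechanism, assuming a hypothetical context (neutron ships its real one in neutron.privileged) and a daemon started with root or matching capabilities; the call itself is left commented since it needs an existing namespace:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context
    from pyroute2 import netns

    demo = priv_context.PrivContext(
        __name__, cfg_section='privsep',
        capabilities=[caps.CAP_SYS_ADMIN, caps.CAP_NET_ADMIN])

    @demo.entrypoint
    def remove_netns(name):
        # Runs inside the forked privileged daemon; the return value is
        # marshalled back to the caller as a "privsep: reply[...]" message.
        netns.remove(name)

    # First call transparently spawns the daemon, then proxies the request:
    # remove_netns('ovnmeta-64267419-8c47-450f-9ba4-afc8c103bf71')
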
Dec 5 05:05:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v128: 177 pgs: 177 active+clean; 306 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 878 KiB/s rd, 2.1 MiB/s wr, 134 op/s Dec 5 05:05:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:05:35 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:35.969 2 INFO neutron.agent.securitygroups_rpc [None req-01829f80-3210-49b7-8dc5-90fc991ad5c8 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Security group member updated ['8c9500c3-6ac9-452e-a652-72bddc07be6d']#033[00m Dec 5 05:05:36 localhost systemd[1]: tmp-crun.3Z1ftb.mount: Deactivated successfully. Dec 5 05:05:36 localhost dnsmasq[308819]: read /var/lib/neutron/dhcp/e15d083f-7984-4879-a88e-c9228d36c3fe/addn_hosts - 0 addresses Dec 5 05:05:36 localhost dnsmasq-dhcp[308819]: read /var/lib/neutron/dhcp/e15d083f-7984-4879-a88e-c9228d36c3fe/host Dec 5 05:05:36 localhost dnsmasq-dhcp[308819]: read /var/lib/neutron/dhcp/e15d083f-7984-4879-a88e-c9228d36c3fe/opts Dec 5 05:05:36 localhost podman[310046]: 2025-12-05 10:05:36.263405172 +0000 UTC m=+0.069330485 container kill 1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 5 05:05:36 localhost nova_compute[280228]: 2025-12-05 10:05:36.266 280232 DEBUG nova.compute.manager [req-aca66804-97ee-4295-9fe5-bada4297d9eb req-62b2139b-db19-44ae-8cd4-1714155edc6f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Received event network-vif-unplugged-24f19dd4-108e-4a77-b44d-59a215801baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 5 05:05:36 localhost nova_compute[280228]: 2025-12-05 10:05:36.267 280232 DEBUG oslo_concurrency.lockutils [req-aca66804-97ee-4295-9fe5-bada4297d9eb req-62b2139b-db19-44ae-8cd4-1714155edc6f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:36 localhost nova_compute[280228]: 2025-12-05 10:05:36.267 280232 DEBUG oslo_concurrency.lockutils [req-aca66804-97ee-4295-9fe5-bada4297d9eb req-62b2139b-db19-44ae-8cd4-1714155edc6f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:36 localhost nova_compute[280228]: 2025-12-05 10:05:36.268 280232 DEBUG oslo_concurrency.lockutils [req-aca66804-97ee-4295-9fe5-bada4297d9eb req-62b2139b-db19-44ae-8cd4-1714155edc6f c09787ff15134c9896057a29fefbe933 
d98f9ffaeb7346169078ece01c85312d - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:36 localhost nova_compute[280228]: 2025-12-05 10:05:36.269 280232 DEBUG nova.compute.manager [req-aca66804-97ee-4295-9fe5-bada4297d9eb req-62b2139b-db19-44ae-8cd4-1714155edc6f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] No waiting events found dispatching network-vif-unplugged-24f19dd4-108e-4a77-b44d-59a215801baa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 5 05:05:36 localhost nova_compute[280228]: 2025-12-05 10:05:36.269 280232 DEBUG nova.compute.manager [req-aca66804-97ee-4295-9fe5-bada4297d9eb req-62b2139b-db19-44ae-8cd4-1714155edc6f c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Received event network-vif-unplugged-24f19dd4-108e-4a77-b44d-59a215801baa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Dec 5 05:05:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e101 do_prune osdmap full prune enabled Dec 5 05:05:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e102 e102: 6 total, 6 up, 6 in Dec 5 05:05:36 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e102: 6 total, 6 up, 6 in Dec 5 05:05:36 localhost nova_compute[280228]: 2025-12-05 10:05:36.682 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:05:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:05:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
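
Each "Started /usr/bin/podman healthcheck run <id>" line is a systemd transient unit firing that container's configured healthcheck command; the exit status becomes the health_status reported in the podman events that follow. An equivalent manual probe, sketched with subprocess on the assumption that exit code 0 maps to healthy:

    import subprocess

    cid = '192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6'
    # Runs the container's configured test command, e.g.
    # '/openstack/healthcheck podman_exporter' from the config_data below.
    res = subprocess.run(['podman', 'healthcheck', 'run', cid],
                         capture_output=True, text=True)
    print('healthy' if res.returncode == 0 else 'unhealthy')
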
Dec 5 05:05:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v130: 177 pgs: 177 active+clean; 226 MiB data, 892 MiB used, 41 GiB / 42 GiB avail; 181 KiB/s rd, 138 KiB/s wr, 102 op/s Dec 5 05:05:37 localhost podman[310066]: 2025-12-05 10:05:37.237804896 +0000 UTC m=+0.115482420 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:05:37 localhost podman[310067]: 2025-12-05 10:05:37.272752548 +0000 UTC m=+0.148828604 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125) Dec 5 05:05:37 localhost podman[310067]: 2025-12-05 10:05:37.286681265 +0000 UTC m=+0.162757291 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:05:37 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:05:37 localhost podman[310066]: 2025-12-05 10:05:37.349559831 +0000 UTC m=+0.227237395 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:05:37 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 05:05:37 localhost systemd[1]: tmp-crun.bX4GAA.mount: Deactivated successfully. 
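
The network-vif-unplugged handling above, and the "Received unexpected event network-vif-plugged ..." warning a few entries below, go through nova's external-event bookkeeping: waiters register an event per (instance, event-name) pair under the "<uuid>-events" lock, and neutron-driven notifications pop and signal them; "No waiting events found dispatching ..." is the miss path. A toy analogue of that pattern, with threading.Event standing in for nova's eventlet primitives:

    import threading
    from collections import defaultdict

    class InstanceEvents:
        """Toy analogue of nova.compute.manager.InstanceEvents."""

        def __init__(self):
            self._events = defaultdict(dict)  # uuid -> {event name: Event}
            self._lock = threading.Lock()     # nova takes "<uuid>-events"

        def prepare(self, uuid, name):
            ev = threading.Event()
            with self._lock:
                self._events[uuid][name] = ev
            return ev

        def pop(self, uuid, name):
            with self._lock:
                ev = self._events.get(uuid, {}).pop(name, None)
            if ev is None:
                return None  # "No waiting events found dispatching ..."
            ev.set()
            return ev

    events = InstanceEvents()
    waiter = events.prepare('fafd8aa9', 'network-vif-unplugged')
    events.pop('fafd8aa9', 'network-vif-unplugged')  # delivery side
    assert waiter.wait(timeout=1)
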
Dec 5 05:05:37 localhost podman[310068]: 2025-12-05 10:05:37.454343042 +0000 UTC m=+0.327652103 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 5 05:05:37 localhost podman[310068]: 2025-12-05 10:05:37.493341098 +0000 UTC m=+0.366650199 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:05:37 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:05:37 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:37.739 2 INFO neutron.agent.securitygroups_rpc [req-735a2183-c960-481e-9eb4-785a9e1cdde3 req-9ac83efd-8db5-413d-ab08-ac7a6bf5cdc1 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Security group member updated ['13f09786-c3de-4f80-a431-bd4239c2ee01']#033[00m Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.323 280232 DEBUG nova.compute.manager [req-1e3beba7-f747-4157-bcd8-9a1ee5bc587d req-342dfae5-7d86-4129-9b59-484c328d4e81 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Received event network-vif-plugged-24f19dd4-108e-4a77-b44d-59a215801baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.324 280232 DEBUG oslo_concurrency.lockutils [req-1e3beba7-f747-4157-bcd8-9a1ee5bc587d req-342dfae5-7d86-4129-9b59-484c328d4e81 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Acquiring lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.324 280232 DEBUG oslo_concurrency.lockutils [req-1e3beba7-f747-4157-bcd8-9a1ee5bc587d req-342dfae5-7d86-4129-9b59-484c328d4e81 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.324 280232 DEBUG oslo_concurrency.lockutils [req-1e3beba7-f747-4157-bcd8-9a1ee5bc587d req-342dfae5-7d86-4129-9b59-484c328d4e81 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.324 280232 DEBUG nova.compute.manager [req-1e3beba7-f747-4157-bcd8-9a1ee5bc587d req-342dfae5-7d86-4129-9b59-484c328d4e81 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] No waiting events found dispatching network-vif-plugged-24f19dd4-108e-4a77-b44d-59a215801baa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.324 280232 WARNING nova.compute.manager [req-1e3beba7-f747-4157-bcd8-9a1ee5bc587d req-342dfae5-7d86-4129-9b59-484c328d4e81 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Received unexpected event 
network-vif-plugged-24f19dd4-108e-4a77-b44d-59a215801baa for instance with vm_state active and task_state deleting.#033[00m Dec 5 05:05:38 localhost ovn_controller[153000]: 2025-12-05T10:05:38Z|00145|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.481 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:38 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e102 do_prune osdmap full prune enabled Dec 5 05:05:38 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e103 e103: 6 total, 6 up, 6 in Dec 5 05:05:38 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e103: 6 total, 6 up, 6 in Dec 5 05:05:38 localhost dnsmasq[308819]: exiting on receipt of SIGTERM Dec 5 05:05:38 localhost podman[310144]: 2025-12-05 10:05:38.632374848 +0000 UTC m=+0.064795037 container kill 1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 05:05:38 localhost systemd[1]: libpod-1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140.scope: Deactivated successfully. Dec 5 05:05:38 localhost podman[310158]: 2025-12-05 10:05:38.693326676 +0000 UTC m=+0.043202006 container died 1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 5 05:05:38 localhost systemd[1]: tmp-crun.ZlADTV.mount: Deactivated successfully. 
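
A few entries below, the resource tracker shells out to "ceph df --format=json" through oslo.concurrency's processutils to size the RBD-backed DISK_GB inventory it reports to placement (the 41 GiB total in the provider inventory matches the pgmap figures above). A sketch of that probe, with the arguments copied from the log; it assumes the ceph CLI and the client.openstack keyring are available, and the JSON keys are the standard ceph df fields:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    # Raw cluster totals; the driver derives total/used GiB from these.
    print(stats['total_bytes'], stats['total_avail_bytes'])
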
Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.721 280232 DEBUG nova.network.neutron [-] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:05:38 localhost podman[310158]: 2025-12-05 10:05:38.730084242 +0000 UTC m=+0.079959542 container cleanup 1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 5 05:05:38 localhost systemd[1]: libpod-conmon-1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140.scope: Deactivated successfully. Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.741 280232 INFO nova.compute.manager [-] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Took 4.12 seconds to deallocate network for instance.#033[00m Dec 5 05:05:38 localhost podman[310160]: 2025-12-05 10:05:38.767124627 +0000 UTC m=+0.112778817 container remove 1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e15d083f-7984-4879-a88e-c9228d36c3fe, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.778 280232 DEBUG oslo_concurrency.lockutils [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.778 280232 DEBUG oslo_concurrency.lockutils [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:38 localhost ovn_controller[153000]: 2025-12-05T10:05:38Z|00146|binding|INFO|Releasing lport 0a18e7f0-8a64-414c-b78f-bf5d472c1d5f from this chassis (sb_readonly=0) Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.779 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:38 localhost kernel: device tap0a18e7f0-8a left promiscuous mode Dec 5 05:05:38 localhost ovn_controller[153000]: 2025-12-05T10:05:38Z|00147|binding|INFO|Setting lport 0a18e7f0-8a64-414c-b78f-bf5d472c1d5f down in Southbound Dec 5 05:05:38 localhost ovn_metadata_agent[158815]: 
2025-12-05 10:05:38.786 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-e15d083f-7984-4879-a88e-c9228d36c3fe', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e15d083f-7984-4879-a88e-c9228d36c3fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9b8ae2ff8fc42959dc64d209d5490df', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4521a7c1-80de-477c-9559-e5b4faa7b6fd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0a18e7f0-8a64-414c-b78f-bf5d472c1d5f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:05:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:38.788 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 0a18e7f0-8a64-414c-b78f-bf5d472c1d5f in datapath e15d083f-7984-4879-a88e-c9228d36c3fe unbound from our chassis#033[00m Dec 5 05:05:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:38.792 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e15d083f-7984-4879-a88e-c9228d36c3fe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:05:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:38.793 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[df72e653-310a-4c8c-af2b-48087c1262bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.807 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:38 localhost nova_compute[280228]: 2025-12-05 10:05:38.835 280232 DEBUG oslo_concurrency.processutils [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:05:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 177 active+clean; 226 MiB data, 892 MiB used, 41 GiB / 42 GiB avail; 154 KiB/s rd, 79 KiB/s wr, 100 op/s Dec 5 05:05:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:05:39 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1737133720' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:05:39 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:39.268 261902 INFO neutron.agent.dhcp.agent [None req-d8189515-8ac2-4600-87d6-c7481a26c0a5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:05:39 localhost nova_compute[280228]: 2025-12-05 10:05:39.284 280232 DEBUG oslo_concurrency.processutils [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:05:39 localhost nova_compute[280228]: 2025-12-05 10:05:39.291 280232 DEBUG nova.compute.provider_tree [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:05:39 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:39.383 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:05:39 localhost nova_compute[280228]: 2025-12-05 10:05:39.392 280232 DEBUG nova.scheduler.client.report [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:05:39 localhost nova_compute[280228]: 2025-12-05 10:05:39.418 280232 DEBUG oslo_concurrency.lockutils [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:39 localhost nova_compute[280228]: 2025-12-05 10:05:39.444 280232 INFO nova.scheduler.client.report [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Deleted allocations for instance fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71#033[00m Dec 5 05:05:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e103 do_prune osdmap full prune enabled Dec 5 05:05:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e104 e104: 6 total, 6 up, 6 in Dec 5 05:05:39 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e104: 6 total, 6 up, 6 in Dec 5 05:05:39 localhost dnsmasq[309149]: exiting on receipt of SIGTERM Dec 5 05:05:39 localhost podman[310225]: 2025-12-05 10:05:39.611292139 +0000 UTC m=+0.073271256 container kill 3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-161caaa5-5324-49ef-bee7-c70abf729b34, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 5 05:05:39 localhost systemd[1]: libpod-3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b.scope: Deactivated successfully. Dec 5 05:05:39 localhost systemd[1]: var-lib-containers-storage-overlay-97063e085a0521f15bb4ef8e4ef1288f2f9fa2a53da1890acbd82e4a673dda3d-merged.mount: Deactivated successfully. Dec 5 05:05:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c70798cb55b2ace30342e5ae7bf3610156cda3a87da043441b6d97ca726b140-userdata-shm.mount: Deactivated successfully. Dec 5 05:05:39 localhost systemd[1]: run-netns-qdhcp\x2de15d083f\x2d7984\x2d4879\x2da88e\x2dc9228d36c3fe.mount: Deactivated successfully. Dec 5 05:05:39 localhost nova_compute[280228]: 2025-12-05 10:05:39.642 280232 DEBUG oslo_concurrency.lockutils [None req-735a2183-c960-481e-9eb4-785a9e1cdde3 7ee4999d08044f63bf075e92f0ca5d11 41095831ac6247b0a5ea030490af998f - - default default] Lock "fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:39 localhost podman[310237]: 2025-12-05 10:05:39.697859603 +0000 UTC m=+0.073166894 container died 3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-161caaa5-5324-49ef-bee7-c70abf729b34, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 5 05:05:39 localhost podman[310237]: 2025-12-05 10:05:39.729290266 +0000 UTC m=+0.104597527 container cleanup 3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-161caaa5-5324-49ef-bee7-c70abf729b34, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:05:39 localhost systemd[1]: libpod-conmon-3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b.scope: Deactivated successfully.
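The lockutils records in this span ("acquired ... waited 0.000s", "released ... held 6.138s") come from oslo.concurrency's named-lock helpers. A toy sketch of that acquire/held accounting, assuming nothing about oslo's real internals (which also provide fair locks, semaphores, and file-based external locks):

import threading
import time
from contextlib import contextmanager

_locks = {}
_registry_guard = threading.Lock()

@contextmanager
def named_lock(name, owner):
    # Toy re-creation of the "acquired :: waited / released :: held"
    # accounting seen in the records above; not oslo's implementation.
    with _registry_guard:
        lock = _locks.setdefault(name, threading.Lock())
    start = time.monotonic()
    lock.acquire()
    acquired = time.monotonic()
    print('Lock "%s" acquired by "%s" :: waited %.3fs'
          % (name, owner, acquired - start))
    try:
        yield
    finally:
        lock.release()
        print('Lock "%s" "released" by "%s" :: held %.3fs'
              % (name, owner, time.monotonic() - acquired))

with named_lock("compute_resources", "demo.update_usage"):
    time.sleep(0.01)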
Dec 5 05:05:39 localhost podman[310239]: 2025-12-05 10:05:39.778888836 +0000 UTC m=+0.139521847 container remove 3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-161caaa5-5324-49ef-bee7-c70abf729b34, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 5 05:05:39 localhost ovn_controller[153000]: 2025-12-05T10:05:39Z|00148|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:05:39 localhost nova_compute[280228]: 2025-12-05 10:05:39.863 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:39 localhost nova_compute[280228]: 2025-12-05 10:05:39.890 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:40 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:40.173 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:05:40 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:40.178 261902 INFO neutron.agent.dhcp.agent [None req-4d2ba598-e272-4f5b-b289-bfc66f99e527 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:05:40 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:40.332 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:05:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:05:40 localhost nova_compute[280228]: 2025-12-05 10:05:40.631 280232 DEBUG nova.compute.manager [req-2fd26ce9-bb30-4777-ada8-e9bc9c774433 req-d40e5e1c-2e29-478c-ab9f-67b91624abf2 c09787ff15134c9896057a29fefbe933 d98f9ffaeb7346169078ece01c85312d - - default default] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Received event network-vif-deleted-24f19dd4-108e-4a77-b44d-59a215801baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 5 05:05:40 localhost systemd[1]: var-lib-containers-storage-overlay-c16b8bc2891c944a106b96b112a0ace6d0589c94208d18981c1cdf482a771bdd-merged.mount: Deactivated successfully. Dec 5 05:05:40 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f5cf34f4b2b87c560ceab29a33d59fbbc09be30f90708ee209dbf7e6d0a826b-userdata-shm.mount: Deactivated successfully. Dec 5 05:05:40 localhost systemd[1]: run-netns-qdhcp\x2d161caaa5\x2d5324\x2d49ef\x2dbee7\x2dc70abf729b34.mount: Deactivated successfully. 
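Before touching provider inventory, nova_compute shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" (the Running cmd / CMD returned pair above). A sketch of issuing the same command and reading the cluster totals; the 'stats', 'total_bytes' and 'total_avail_bytes' key names are an assumption about ceph's JSON schema, not something visible in this log:

import json
import subprocess

def ceph_capacity_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
    """Run the same `ceph df --format=json` invocation shown above and
    return (total, avail) in GiB; key names are assumed, see lead-in."""
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
    stats = json.loads(out)["stats"]
    gib = float(1024 ** 3)
    return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

# Example (on a host with the openstack keyring, as in this log):
# total, avail = ceph_capacity_gib()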
Dec 5 05:05:41 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:41.106 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:05:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 265 MiB data, 952 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 125 op/s Dec 5 05:05:41 localhost nova_compute[280228]: 2025-12-05 10:05:41.683 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:41 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:41.723 2 INFO neutron.agent.securitygroups_rpc [None req-e7017f28-3240-4b44-85ae-7a3dc282f638 7dbd84753cc34311a16ba30887be4b38 a9b8ae2ff8fc42959dc64d209d5490df - - default default] Security group member updated ['8c9500c3-6ac9-452e-a652-72bddc07be6d']#033[00m Dec 5 05:05:41 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:41.906 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:05:41Z, description=, device_id=56536c31-1f80-4677-8f97-54e98488a129, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0b2e0757-d92a-41c4-b08d-1459e7e1e04b, ip_allocation=immediate, mac_address=fa:16:3e:6b:7d:1d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:05:28Z, description=, dns_domain=, id=6889f115-bc05-4b19-b48d-aa4fe2a1d137, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesTestJSON-1594311257-network, port_security_enabled=True, project_id=784b8d7dafc84eb8ac5fe2c56cc5f693, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5060, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=640, status=ACTIVE, subnets=['a3548381-b90a-4ce6-b42b-49b90b0ebd3c'], tags=[], tenant_id=784b8d7dafc84eb8ac5fe2c56cc5f693, updated_at=2025-12-05T10:05:30Z, vlan_transparent=None, network_id=6889f115-bc05-4b19-b48d-aa4fe2a1d137, port_security_enabled=False, project_id=784b8d7dafc84eb8ac5fe2c56cc5f693, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=660, status=DOWN, tags=[], tenant_id=784b8d7dafc84eb8ac5fe2c56cc5f693, updated_at=2025-12-05T10:05:41Z on network 6889f115-bc05-4b19-b48d-aa4fe2a1d137#033[00m Dec 5 05:05:42 localhost systemd[1]: tmp-crun.BJzOkB.mount: Deactivated successfully. 
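The "Trigger reload_allocations" record above inlines the entire port and its nested network object. A stripped-down sketch of the shape of that handling, using a hypothetical Port dataclass and needs_reload() helper that keep only the fields consulted here; the real agent does far more:

from dataclasses import dataclass, field

@dataclass
class Port:
    # Hypothetical stand-in keeping only the fields this sketch consults;
    # the real neutron port carries everything dumped in the record above.
    network_id: str
    device_owner: str = ""
    admin_state_up: bool = True
    fixed_ips: list = field(default_factory=list)

def needs_reload(port, served_networks):
    """Rough shape of the decision: only ports on a network this DHCP
    agent already serves should trigger reload_allocations."""
    return port.network_id in served_networks

port = Port(network_id="6889f115-bc05-4b19-b48d-aa4fe2a1d137",
            device_owner="network:router_interface")
print(needs_reload(port, {"6889f115-bc05-4b19-b48d-aa4fe2a1d137"}))  # True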
Dec 5 05:05:42 localhost dnsmasq[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/addn_hosts - 1 addresses Dec 5 05:05:42 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/host Dec 5 05:05:42 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/opts Dec 5 05:05:42 localhost podman[310286]: 2025-12-05 10:05:42.06294538 +0000 UTC m=+0.076627770 container kill 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 05:05:42 localhost podman[310317]: 2025-12-05 10:05:42.206566882 +0000 UTC m=+0.063451787 container kill f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889f115-bc05-4b19-b48d-aa4fe2a1d137, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:05:42 localhost dnsmasq[309931]: read /var/lib/neutron/dhcp/6889f115-bc05-4b19-b48d-aa4fe2a1d137/addn_hosts - 1 addresses Dec 5 05:05:42 localhost dnsmasq-dhcp[309931]: read /var/lib/neutron/dhcp/6889f115-bc05-4b19-b48d-aa4fe2a1d137/host Dec 5 05:05:42 localhost dnsmasq-dhcp[309931]: read /var/lib/neutron/dhcp/6889f115-bc05-4b19-b48d-aa4fe2a1d137/opts Dec 5 05:05:42 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:42.446 261902 INFO neutron.agent.dhcp.agent [None req-971d6f98-d12c-4835-b927-5e405544257a - - - - - -] DHCP configuration for ports {'0b2e0757-d92a-41c4-b08d-1459e7e1e04b'} is completed#033[00m Dec 5 05:05:42 localhost dnsmasq[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/addn_hosts - 0 addresses Dec 5 05:05:42 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/host Dec 5 05:05:42 localhost dnsmasq-dhcp[307415]: read /var/lib/neutron/dhcp/b18d5894-62f7-4f8f-a24c-429b8805e981/opts Dec 5 05:05:42 localhost podman[310361]: 2025-12-05 10:05:42.798872834 +0000 UTC m=+0.065517088 container kill 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:05:43 localhost ovn_controller[153000]: 2025-12-05T10:05:43Z|00149|binding|INFO|Releasing lport 21758fda-1427-448d-a792-4daeda35aed6 from this chassis (sb_readonly=0) Dec 5 05:05:43 localhost kernel: device 
tap21758fda-14 left promiscuous mode Dec 5 05:05:43 localhost nova_compute[280228]: 2025-12-05 10:05:43.051 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:43 localhost ovn_controller[153000]: 2025-12-05T10:05:43Z|00150|binding|INFO|Setting lport 21758fda-1427-448d-a792-4daeda35aed6 down in Southbound Dec 5 05:05:43 localhost nova_compute[280228]: 2025-12-05 10:05:43.068 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:43.070 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-b18d5894-62f7-4f8f-a24c-429b8805e981', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b18d5894-62f7-4f8f-a24c-429b8805e981', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9b8ae2ff8fc42959dc64d209d5490df', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2976951b-7527-4195-bee8-6ac5e692e095, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=21758fda-1427-448d-a792-4daeda35aed6) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:05:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:43.072 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 21758fda-1427-448d-a792-4daeda35aed6 in datapath b18d5894-62f7-4f8f-a24c-429b8805e981 unbound from our chassis#033[00m Dec 5 05:05:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:43.075 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b18d5894-62f7-4f8f-a24c-429b8805e981, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:05:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:05:43.076 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[880885c0-ba83-4ab7-a840-240c4be956f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:05:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v135: 177 pgs: 177 active+clean; 307 MiB data, 1020 MiB used, 41 GiB / 42 GiB avail; 7.1 MiB/s rd, 7.0 MiB/s wr, 148 op/s Dec 5 05:05:43 localhost nova_compute[280228]: 2025-12-05 10:05:43.603 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Acquiring lock "46a0f7d8-3474-49a0-9e40-e9ee2de6864c" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:43 localhost nova_compute[280228]: 2025-12-05 10:05:43.604 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lock "46a0f7d8-3474-49a0-9e40-e9ee2de6864c" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:43 localhost nova_compute[280228]: 2025-12-05 10:05:43.604 280232 INFO nova.compute.manager [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Unshelving#033[00m Dec 5 05:05:43 localhost nova_compute[280228]: 2025-12-05 10:05:43.717 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:43 localhost nova_compute[280228]: 2025-12-05 10:05:43.718 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:43 localhost nova_compute[280228]: 2025-12-05 10:05:43.720 280232 DEBUG nova.objects.instance [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lazy-loading 'pci_requests' on Instance uuid 46a0f7d8-3474-49a0-9e40-e9ee2de6864c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:05:43 localhost nova_compute[280228]: 2025-12-05 10:05:43.750 280232 DEBUG nova.objects.instance [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lazy-loading 'numa_topology' on Instance uuid 46a0f7d8-3474-49a0-9e40-e9ee2de6864c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:05:43 localhost nova_compute[280228]: 2025-12-05 10:05:43.763 280232 DEBUG nova.virt.hardware [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Require both a host and instance NUMA topology to fit instance on host.
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Dec 5 05:05:43 localhost nova_compute[280228]: 2025-12-05 10:05:43.763 280232 INFO nova.compute.claims [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Claim successful on node np0005546419.localdomain#033[00m Dec 5 05:05:43 localhost nova_compute[280228]: 2025-12-05 10:05:43.780 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:05:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:05:44 localhost systemd[1]: tmp-crun.HCCoCt.mount: Deactivated successfully. Dec 5 05:05:44 localhost podman[310385]: 2025-12-05 10:05:44.213850651 +0000 UTC m=+0.097739046 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:05:44 localhost nova_compute[280228]: 2025-12-05 10:05:44.253 280232 DEBUG oslo_concurrency.processutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:05:44 localhost podman[310384]: 2025-12-05 10:05:44.270927981 +0000 UTC m=+0.155964041 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 05:05:44 localhost podman[310385]: 2025-12-05 10:05:44.276788621 +0000 UTC m=+0.160677016 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:05:44 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
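The config_data label that podman prints for node_exporter and ovn_controller above is a Python-style literal (single quotes, True/False), so it can be loaded with ast.literal_eval rather than json. A sketch, with the value abridged from the records above:

import ast

# Abridged from the config_data label printed above; podman renders it as
# a Python-style literal, so ast.literal_eval can load it directly.
config_data = ("{'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337"
               "e38c18e80266fb14383754178202f40103646337722a594d984c', "
               "'restart': 'always', 'privileged': True, "
               "'healthcheck': {'test': '/openstack/healthcheck node_exporter', "
               "'mount': '/var/lib/openstack/healthchecks/node_exporter'}}")
cfg = ast.literal_eval(config_data)
print(cfg["healthcheck"]["test"])  # /openstack/healthcheck node_exporter
print(cfg["restart"])              # always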
Dec 5 05:05:44 localhost podman[310384]: 2025-12-05 10:05:44.347643342 +0000 UTC m=+0.232679382 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:05:44 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:05:44 localhost nova_compute[280228]: 2025-12-05 10:05:44.401 280232 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 5 05:05:44 localhost nova_compute[280228]: 2025-12-05 10:05:44.402 280232 INFO nova.compute.manager [-] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] VM Stopped (Lifecycle Event)#033[00m Dec 5 05:05:44 localhost nova_compute[280228]: 2025-12-05 10:05:44.424 280232 DEBUG nova.compute.manager [None req-942a832a-14e3-4970-974e-278852e732d1 - - - - - -] [instance: d26c2a55-022b-4361-bccb-625a3a43e6f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 05:05:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:05:44 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1463241569' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:05:44 localhost nova_compute[280228]: 2025-12-05 10:05:44.709 280232 DEBUG oslo_concurrency.processutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:05:44 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:44.710 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:05:41Z, description=, device_id=56536c31-1f80-4677-8f97-54e98488a129, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0b2e0757-d92a-41c4-b08d-1459e7e1e04b, ip_allocation=immediate, mac_address=fa:16:3e:6b:7d:1d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:05:28Z, description=, dns_domain=, id=6889f115-bc05-4b19-b48d-aa4fe2a1d137, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesTestJSON-1594311257-network, port_security_enabled=True, project_id=784b8d7dafc84eb8ac5fe2c56cc5f693, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5060, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=640, status=ACTIVE, subnets=['a3548381-b90a-4ce6-b42b-49b90b0ebd3c'], tags=[], tenant_id=784b8d7dafc84eb8ac5fe2c56cc5f693, updated_at=2025-12-05T10:05:30Z, vlan_transparent=None, network_id=6889f115-bc05-4b19-b48d-aa4fe2a1d137, port_security_enabled=False, project_id=784b8d7dafc84eb8ac5fe2c56cc5f693, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=660, status=DOWN, tags=[], tenant_id=784b8d7dafc84eb8ac5fe2c56cc5f693, updated_at=2025-12-05T10:05:41Z on network 6889f115-bc05-4b19-b48d-aa4fe2a1d137#033[00m Dec 5 05:05:44 localhost nova_compute[280228]: 2025-12-05 10:05:44.717 280232 DEBUG nova.compute.provider_tree [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:05:44 localhost nova_compute[280228]: 2025-12-05 10:05:44.745 280232 DEBUG nova.scheduler.client.report [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 
5 05:05:44 localhost nova_compute[280228]: 2025-12-05 10:05:44.773 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:44 localhost nova_compute[280228]: 2025-12-05 10:05:44.898 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Acquiring lock "refresh_cache-46a0f7d8-3474-49a0-9e40-e9ee2de6864c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:05:44 localhost nova_compute[280228]: 2025-12-05 10:05:44.898 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Acquired lock "refresh_cache-46a0f7d8-3474-49a0-9e40-e9ee2de6864c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:05:44 localhost nova_compute[280228]: 2025-12-05 10:05:44.899 280232 DEBUG nova.network.neutron [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 5 05:05:44 localhost dnsmasq[309931]: read /var/lib/neutron/dhcp/6889f115-bc05-4b19-b48d-aa4fe2a1d137/addn_hosts - 1 addresses Dec 5 05:05:44 localhost dnsmasq-dhcp[309931]: read /var/lib/neutron/dhcp/6889f115-bc05-4b19-b48d-aa4fe2a1d137/host Dec 5 05:05:44 localhost dnsmasq-dhcp[309931]: read /var/lib/neutron/dhcp/6889f115-bc05-4b19-b48d-aa4fe2a1d137/opts Dec 5 05:05:44 localhost podman[310472]: 2025-12-05 10:05:44.941872064 +0000 UTC m=+0.064194288 container kill f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889f115-bc05-4b19-b48d-aa4fe2a1d137, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:05:44 localhost nova_compute[280228]: 2025-12-05 10:05:44.991 280232 DEBUG nova.network.neutron [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Dec 5 05:05:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:05:45 Dec 5 05:05:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:05:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:05:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['manila_data', 'vms', 'volumes', 'manila_metadata', '.mgr', 'images', 'backups'] Dec 5 05:05:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:05:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 177 active+clean; 307 MiB data, 1020 MiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 122 op/s Dec 5 05:05:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:05:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.183 280232 DEBUG nova.network.neutron [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:05:45 localhost systemd[1]: tmp-crun.r8v42N.mount: Deactivated successfully. Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006577665526707612 of space, bias 1.0, pg target 1.3155331053415225 quantized to 32 (current 32) Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007542050530429927 of space, bias 1.0, pg target 1.5008680555555556 quantized to 32 (current 32) Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 05:05:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:05:45 localhost 
ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019400387353433835 quantized to 16 (current 16) Dec 5 05:05:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:05:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:05:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:05:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.213 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Releasing lock "refresh_cache-46a0f7d8-3474-49a0-9e40-e9ee2de6864c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:05:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.216 280232 DEBUG nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.216 280232 INFO nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Creating image(s)#033[00m Dec 5 05:05:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:05:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:05:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:05:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:05:45 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:45.222 261902 INFO neutron.agent.dhcp.agent [None req-76922331-d2c7-492e-8f78-8fcc5b1a76f6 - - - - - -] DHCP configuration for ports {'0b2e0757-d92a-41c4-b08d-1459e7e1e04b'} is completed#033[00m Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.254 280232 DEBUG nova.storage.rbd_utils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] rbd image 46a0f7d8-3474-49a0-9e40-e9ee2de6864c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.258 280232 DEBUG nova.objects.instance [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 46a0f7d8-3474-49a0-9e40-e9ee2de6864c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:05:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:05:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:05:45 localhost ceph-mgr[286454]: [rbd_support INFO 
root] load_schedules: volumes, start_after= Dec 5 05:05:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:05:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.310 280232 DEBUG nova.storage.rbd_utils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] rbd image 46a0f7d8-3474-49a0-9e40-e9ee2de6864c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.345 280232 DEBUG nova.storage.rbd_utils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] rbd image 46a0f7d8-3474-49a0-9e40-e9ee2de6864c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.350 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Acquiring lock "7288f2d04a9795a955ad3bd05fcb32e640d108b5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.351 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lock "7288f2d04a9795a955ad3bd05fcb32e640d108b5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.390 280232 DEBUG nova.virt.libvirt.imagebackend [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Image locations are: [{'url': 'rbd://79feddb1-4bfc-557f-83b9-0d57c9f66c1b/images/f67023e2-353d-4de8-b5e5-7921df600775/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://79feddb1-4bfc-557f-83b9-0d57c9f66c1b/images/f67023e2-353d-4de8-b5e5-7921df600775/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Dec 5 05:05:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:05:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e104 do_prune osdmap full prune enabled Dec 5 05:05:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e105 e105: 6 total, 6 up, 6 in Dec 5 05:05:45 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e105: 6 total, 6 up, 6 in Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.479 280232 DEBUG nova.virt.libvirt.imagebackend [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Selected location: {'url': 'rbd://79feddb1-4bfc-557f-83b9-0d57c9f66c1b/images/f67023e2-353d-4de8-b5e5-7921df600775/snap', 'metadata': {'store': 'default_backend'}} clone
/usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.481 280232 DEBUG nova.storage.rbd_utils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] cloning images/f67023e2-353d-4de8-b5e5-7921df600775@snap to None/46a0f7d8-3474-49a0-9e40-e9ee2de6864c_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.749 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lock "7288f2d04a9795a955ad3bd05fcb32e640d108b5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:45 localhost nova_compute[280228]: 2025-12-05 10:05:45.973 280232 DEBUG nova.objects.instance [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lazy-loading 'migration_context' on Instance uuid 46a0f7d8-3474-49a0-9e40-e9ee2de6864c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:05:46 localhost nova_compute[280228]: 2025-12-05 10:05:46.090 280232 DEBUG nova.storage.rbd_utils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] flattening vms/46a0f7d8-3474-49a0-9e40-e9ee2de6864c_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Dec 5 05:05:46 localhost nova_compute[280228]: 2025-12-05 10:05:46.686 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.057 280232 DEBUG nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Image rbd:vms/46a0f7d8-3474-49a0-9e40-e9ee2de6864c_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance.
_try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.059 280232 DEBUG nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.059 280232 DEBUG nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Ensure instance console log exists: /var/lib/nova/instances/46a0f7d8-3474-49a0-9e40-e9ee2de6864c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.060 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.060 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.061 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.064 280232 DEBUG nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-05T10:05:11Z,direct_url=<?>,disk_format='raw',id=f67023e2-353d-4de8-b5e5-7921df600775,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1049485283-shelved',owner='c5f7af554c09472ab3fd27dd1c35c543',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-05T10:05:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'device_type': 'disk', 'size': 0, 'encrypted': False, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda',
'encryption_format': None, 'encryption_options': None, 'guest_format': None, 'image_id': '3647d20f-5e09-41b2-a6f3-f320b9e4e343'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.069 280232 WARNING nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.071 280232 DEBUG nova.virt.libvirt.host [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Searching host: 'np0005546419.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.072 280232 DEBUG nova.virt.libvirt.host [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.074 280232 DEBUG nova.virt.libvirt.host [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Searching host: 'np0005546419.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.075 280232 DEBUG nova.virt.libvirt.host [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.076 280232 DEBUG nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.076 280232 DEBUG nova.virt.hardware [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T10:03:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='445199a6-1f73-405e-82f4-8bd8c4bb34c6',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-05T10:05:11Z,direct_url=,disk_format='raw',id=f67023e2-353d-4de8-b5e5-7921df600775,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1049485283-shelved',owner='c5f7af554c09472ab3fd27dd1c35c543',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-12-05T10:05:40Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.077 280232 DEBUG nova.virt.hardware [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.077 280232 DEBUG nova.virt.hardware [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.078 280232 DEBUG nova.virt.hardware [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.078 280232 DEBUG nova.virt.hardware [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.078 280232 DEBUG nova.virt.hardware [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 
10:05:47.079 280232 DEBUG nova.virt.hardware [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.079 280232 DEBUG nova.virt.hardware [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.080 280232 DEBUG nova.virt.hardware [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.080 280232 DEBUG nova.virt.hardware [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.080 280232 DEBUG nova.virt.hardware [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.081 280232 DEBUG nova.objects.instance [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 46a0f7d8-3474-49a0-9e40-e9ee2de6864c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.111 280232 DEBUG oslo_concurrency.processutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:05:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 226 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 6.5 MiB/s rd, 5.8 MiB/s wr, 216 op/s Dec 5 05:05:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:05:47 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/329139697' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.570 280232 DEBUG oslo_concurrency.processutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.611 280232 DEBUG nova.storage.rbd_utils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] rbd image 46a0f7d8-3474-49a0-9e40-e9ee2de6864c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.616 280232 DEBUG oslo_concurrency.processutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:05:47 localhost ovn_controller[153000]: 2025-12-05T10:05:47Z|00151|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:05:47 localhost nova_compute[280228]: 2025-12-05 10:05:47.819 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:05:48 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3531840413' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.040 280232 DEBUG oslo_concurrency.processutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.043 280232 DEBUG nova.objects.instance [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lazy-loading 'pci_devices' on Instance uuid 46a0f7d8-3474-49a0-9e40-e9ee2de6864c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.071 280232 DEBUG nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] End _get_guest_xml xml=
[The guest domain XML followed here across many "Dec 5 05:05:48 localhost nova_compute[280228]:" continuation lines, but its markup was stripped in capture. Recoverable fields: uuid 46a0f7d8-3474-49a0-9e40-e9ee2de6864c, name instance-00000006, nova name tempest-UnshelveToHostMultiNodesTest-server-1049485283, creationTime 2025-12-05 10:05:47, memory 131072 KiB (flavor memory_mb 128), 1 vCPU, owner tempest-UnshelveToHostMultiNodesTest-1519933804-project-member / project tempest-UnshelveToHostMultiNodesTest-1519933804, sysinfo RDO OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9, OS type hvm, RNG backend /dev/urandom.]
Dec 5 05:05:48 localhost nova_compute[280228]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.142 280232 DEBUG nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.142 280232 DEBUG nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.143 280232 INFO nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Using config drive#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.182 280232 DEBUG nova.storage.rbd_utils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] rbd image 46a0f7d8-3474-49a0-9e40-e9ee2de6864c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.206 280232 DEBUG nova.objects.instance [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 46a0f7d8-3474-49a0-9e40-e9ee2de6864c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.253 280232 DEBUG nova.objects.instance [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lazy-loading 'keypairs' on Instance uuid 46a0f7d8-3474-49a0-9e40-e9ee2de6864c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.322 280232 INFO nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Creating config drive at /var/lib/nova/instances/46a0f7d8-3474-49a0-9e40-e9ee2de6864c/disk.config#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.325 280232 DEBUG oslo_concurrency.processutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46a0f7d8-3474-49a0-9e40-e9ee2de6864c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpipc7p_kd execute
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.446 280232 DEBUG oslo_concurrency.processutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46a0f7d8-3474-49a0-9e40-e9ee2de6864c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpipc7p_kd" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.495 280232 DEBUG nova.storage.rbd_utils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] rbd image 46a0f7d8-3474-49a0-9e40-e9ee2de6864c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.499 280232 DEBUG oslo_concurrency.processutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/46a0f7d8-3474-49a0-9e40-e9ee2de6864c/disk.config 46a0f7d8-3474-49a0-9e40-e9ee2de6864c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.706 280232 DEBUG oslo_concurrency.processutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/46a0f7d8-3474-49a0-9e40-e9ee2de6864c/disk.config 46a0f7d8-3474-49a0-9e40-e9ee2de6864c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.707 280232 INFO nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Deleting local config drive /var/lib/nova/instances/46a0f7d8-3474-49a0-9e40-e9ee2de6864c/disk.config because it was imported into RBD.#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.754 280232 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.756 280232 INFO nova.compute.manager [-] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] VM Stopped (Lifecycle Event)#033[00m Dec 5 05:05:48 localhost systemd-machined[83348]: New machine qemu-5-instance-00000006. Dec 5 05:05:48 localhost systemd[1]: Started Virtual Machine qemu-5-instance-00000006. 
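[The processutils records above show how the config drive reaches Ceph: mkisofs builds an ISO9660 volume labelled config-2 from a temporary metadata directory, rbd import pushes it into the vms pool, and the local copy is then deleted ("Deleting local config drive ... because it was imported into RBD"). A minimal sketch of the same two steps, using the pool, cephx client id and most of the mkisofs flags from the log; the function name and source directory are illustrative, not Nova's actual _create_configdrive code.]

    import subprocess

    def build_and_import_configdrive(src_dir, iso_path, image_name):
        # Build an ISO9660 config drive labelled "config-2", mirroring
        # the mkisofs invocation logged by nova_compute above.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
             src_dir],
            check=True)
        # Import the ISO into the "vms" pool as image-format 2, with the
        # same "openstack" Ceph client the log uses.
        subprocess.run(
            ["rbd", "import", "--pool", "vms", iso_path, image_name,
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True)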
Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.779 280232 DEBUG nova.compute.manager [None req-123a20b6-e33a-447f-b11b-a606c357478f - - - - - -] [instance: fafd8aa9-dbc7-46fd-8fbb-80d1b4923d71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 05:05:48 localhost nova_compute[280228]: 2025-12-05 10:05:48.783 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:49 localhost dnsmasq[307415]: exiting on receipt of SIGTERM Dec 5 05:05:49 localhost podman[310901]: 2025-12-05 10:05:49.062489815 +0000 UTC m=+0.051806198 container kill 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:05:49 localhost systemd[1]: libpod-0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65.scope: Deactivated successfully. Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.077 280232 DEBUG nova.virt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.078 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] VM Resumed (Lifecycle Event)#033[00m Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.079 280232 DEBUG nova.compute.manager [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.080 280232 DEBUG nova.virt.libvirt.driver [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.082 280232 INFO nova.virt.libvirt.driver [-] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Instance spawned successfully.#033[00m Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.108 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.112 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Synchronizing instance power state after lifecycle event "Resumed"; current 
vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 5 05:05:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 177 active+clean; 226 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 5.5 MiB/s rd, 4.9 MiB/s wr, 181 op/s Dec 5 05:05:49 localhost podman[310916]: 2025-12-05 10:05:49.129581431 +0000 UTC m=+0.049729425 container died 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.141 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.141 280232 DEBUG nova.virt.driver [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.141 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] VM Started (Lifecycle Event)#033[00m Dec 5 05:05:49 localhost systemd[1]: tmp-crun.09zXND.mount: Deactivated successfully. 
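[The "Synchronizing instance power state after lifecycle event" records compare three views of the instance: the DB power_state (4, SHUTDOWN, left over from the shelve), the hypervisor's power_state (1, RUNNING), and the task_state (spawning). Because a task is in flight, the sync is skipped rather than letting a stale lifecycle event fight the unshelve. A toy reduction of that decision, assuming Nova's standard power-state constants; sync_power_state is a hypothetical stand-in, not ComputeManager._sync_instance_power_state itself.]

    # Nova power_state constants (nova.compute.power_state).
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # "During sync_power_state the instance has a pending task
            # (spawning). Skip." -- never act on events that race an
            # in-flight operation.
            return "skip (pending task: %s)" % task_state
        if db_power_state != vm_power_state:
            return "record power_state %d -> %d" % (db_power_state,
                                                    vm_power_state)
        return "in sync"

    print(sync_power_state(SHUTDOWN, RUNNING, "spawning"))  # skips, as logged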
Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.158 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.163 280232 DEBUG nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 5 05:05:49 localhost podman[310916]: 2025-12-05 10:05:49.172392403 +0000 UTC m=+0.092540367 container cleanup 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 5 05:05:49 localhost systemd[1]: libpod-conmon-0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65.scope: Deactivated successfully. Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.184 280232 INFO nova.compute.manager [None req-99d469d6-622c-4058-8f12-4fdd927c1c3c - - - - - -] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Dec 5 05:05:49 localhost podman[310918]: 2025-12-05 10:05:49.207380686 +0000 UTC m=+0.116396309 container remove 0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b18d5894-62f7-4f8f-a24c-429b8805e981, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:05:49 localhost snmpd[66746]: empty variable list in _query Dec 5 05:05:49 localhost snmpd[66746]: empty variable list in _query Dec 5 05:05:49 localhost snmpd[66746]: empty variable list in _query Dec 5 05:05:49 localhost snmpd[66746]: empty variable list in _query Dec 5 05:05:49 localhost snmpd[66746]: empty variable list in _query Dec 5 05:05:49 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:49.497 261902 INFO neutron.agent.dhcp.agent [None req-001fc588-95e7-4e84-8d6e-6d26939f8334 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:05:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e105 do_prune osdmap full prune enabled Dec 5 05:05:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e106 e106: 6 total, 6 up, 6 in Dec 5 05:05:49 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e106: 6 total, 6 up, 6 in Dec 5 05:05:49 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:49.648 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:05:49 localhost podman[239519]: time="2025-12-05T10:05:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:05:49 localhost podman[239519]: @ - - [05/Dec/2025:10:05:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157935 "" "Go-http-client/1.1" Dec 5 05:05:49 localhost podman[239519]: @ - - [05/Dec/2025:10:05:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19731 "" "Go-http-client/1.1" Dec 5 05:05:49 localhost ovn_controller[153000]: 2025-12-05T10:05:49Z|00152|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:05:49 localhost nova_compute[280228]: 2025-12-05 10:05:49.985 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:50 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:05:50.051 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:05:50 localhost systemd[1]: tmp-crun.GuDeQ1.mount: Deactivated successfully. Dec 5 05:05:50 localhost systemd[1]: var-lib-containers-storage-overlay-e6b1f7ce21bc67143d9b3a843ba3a1f06176c79b58060d9e0d8354e704be732f-merged.mount: Deactivated successfully. Dec 5 05:05:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a5c0e67e93004de353fe75de9cade2a71db629943e4b484e9acc396bfec5d65-userdata-shm.mount: Deactivated successfully. Dec 5 05:05:50 localhost systemd[1]: run-netns-qdhcp\x2db18d5894\x2d62f7\x2d4f8f\x2da24c\x2d429b8805e981.mount: Deactivated successfully. 
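[The Acquiring/acquired/released DEBUG triplets that bracket compute operations in this log (the "vgpu_resources" pair during spawn earlier, the instance-UUID lock in the terminate path just below) come from oslo.concurrency's lockutils; the messages and the lockutils.py:404/409/423 line numbers are emitted by its inner wrapper. A minimal sketch of the pattern, assuming oslo.concurrency is installed; the guarded function is a hypothetical stand-in for LibvirtDriver._allocate_mdevs.]

    from oslo_concurrency import lockutils

    # Decorating a callable serializes all callers on the named lock and,
    # with debug logging enabled, produces the Acquiring/acquired/released
    # triplet seen throughout this journal.
    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        return []  # hypothetical body

    allocate_mdevs()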
Dec 5 05:05:50 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:50.149 2 INFO neutron.agent.securitygroups_rpc [req-74ecf99b-e345-4630-ad91-053313f6c446 req-1919fa42-a2d3-4e2c-a14a-fd250addbcf1 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['7dd07b03-f51a-4652-84b7-145d368874a1']#033[00m Dec 5 05:05:50 localhost nova_compute[280228]: 2025-12-05 10:05:50.186 280232 DEBUG nova.compute.manager [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 05:05:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:05:50 localhost nova_compute[280228]: 2025-12-05 10:05:50.542 280232 DEBUG oslo_concurrency.lockutils [None req-e24cb07b-d610-4436-9e0d-a4669aacce7f df522977521b495a9d83fb0fc75c609d 1e1bf710e3e34f87981e7550f3cb899f - - default default] Lock "46a0f7d8-3474-49a0-9e40-e9ee2de6864c" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 6.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 228 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 4.4 MiB/s rd, 2.8 MiB/s wr, 142 op/s Dec 5 05:05:51 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:51.171 2 INFO neutron.agent.securitygroups_rpc [req-23821fcd-f47c-47e8-9ea5-4e64f3e1ab82 req-d1e78378-1930-4a1c-9b70-8746789333ca 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['186820cc-187b-4934-9cac-c70ced43993b']#033[00m Dec 5 05:05:51 localhost nova_compute[280228]: 2025-12-05 10:05:51.692 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.026 280232 DEBUG oslo_concurrency.lockutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Acquiring lock "46a0f7d8-3474-49a0-9e40-e9ee2de6864c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.026 280232 DEBUG oslo_concurrency.lockutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Lock "46a0f7d8-3474-49a0-9e40-e9ee2de6864c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.027 280232 DEBUG oslo_concurrency.lockutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Acquiring lock "46a0f7d8-3474-49a0-9e40-e9ee2de6864c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.027 280232 DEBUG oslo_concurrency.lockutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Lock "46a0f7d8-3474-49a0-9e40-e9ee2de6864c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.028 280232 DEBUG oslo_concurrency.lockutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Lock "46a0f7d8-3474-49a0-9e40-e9ee2de6864c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.030 280232 INFO nova.compute.manager [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Terminating instance#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.032 280232 DEBUG oslo_concurrency.lockutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Acquiring lock "refresh_cache-46a0f7d8-3474-49a0-9e40-e9ee2de6864c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.033 280232 DEBUG oslo_concurrency.lockutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Acquired lock "refresh_cache-46a0f7d8-3474-49a0-9e40-e9ee2de6864c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.033 280232 DEBUG nova.network.neutron [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.109 280232 DEBUG nova.network.neutron [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Instance cache missing network info.
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.270 280232 DEBUG nova.network.neutron [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.297 280232 DEBUG oslo_concurrency.lockutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Releasing lock "refresh_cache-46a0f7d8-3474-49a0-9e40-e9ee2de6864c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.301 280232 DEBUG nova.compute.manager [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Dec 5 05:05:52 localhost systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000006.scope: Deactivated successfully. Dec 5 05:05:52 localhost systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000006.scope: Consumed 3.789s CPU time. Dec 5 05:05:52 localhost systemd-machined[83348]: Machine qemu-5-instance-00000006 terminated. Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.524 280232 INFO nova.virt.libvirt.driver [-] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Instance destroyed successfully.#033[00m Dec 5 05:05:52 localhost nova_compute[280228]: 2025-12-05 10:05:52.525 280232 DEBUG nova.objects.instance [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Lazy-loading 'resources' on Instance uuid 46a0f7d8-3474-49a0-9e40-e9ee2de6864c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:05:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 177 active+clean; 226 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 7.5 MiB/s rd, 5.8 MiB/s wr, 256 op/s Dec 5 05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.209 280232 INFO nova.virt.libvirt.driver [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Deleting instance files /var/lib/nova/instances/46a0f7d8-3474-49a0-9e40-e9ee2de6864c_del#033[00m Dec 5 05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.210 280232 INFO nova.virt.libvirt.driver [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Deletion of /var/lib/nova/instances/46a0f7d8-3474-49a0-9e40-e9ee2de6864c_del complete#033[00m Dec 5 05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.308 280232 INFO nova.compute.manager [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Took 
1.01 seconds to destroy the instance on the hypervisor.#033[00m Dec 5 05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.309 280232 DEBUG oslo.service.loopingcall [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Dec 5 05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.309 280232 DEBUG nova.compute.manager [-] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Dec 5 05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.309 280232 DEBUG nova.network.neutron [-] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Dec 5 05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.343 280232 DEBUG nova.network.neutron [-] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Dec 5 05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.358 280232 DEBUG nova.network.neutron [-] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:05:53 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:53.366 2 INFO neutron.agent.securitygroups_rpc [req-9c27cc3e-5b74-43d9-a2de-714df95f5d49 req-2dc71e88-498d-40ef-8ade-7140861a565f 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['b81ee3fd-eca2-47b4-8eaa-4f630bc76eb0']#033[00m Dec 5 05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.375 280232 INFO nova.compute.manager [-] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Took 0.07 seconds to deallocate network for instance.#033[00m Dec 5 05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.487 280232 DEBUG oslo_concurrency.lockutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.488 280232 DEBUG oslo_concurrency.lockutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.614 280232 DEBUG oslo_concurrency.processutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5
05:05:53 localhost nova_compute[280228]: 2025-12-05 10:05:53.787 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:05:54 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/125528686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:05:54 localhost nova_compute[280228]: 2025-12-05 10:05:54.029 280232 DEBUG oslo_concurrency.processutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:05:54 localhost nova_compute[280228]: 2025-12-05 10:05:54.037 280232 DEBUG nova.compute.provider_tree [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:05:54 localhost nova_compute[280228]: 2025-12-05 10:05:54.054 280232 DEBUG nova.scheduler.client.report [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:05:54 localhost nova_compute[280228]: 2025-12-05 10:05:54.076 280232 DEBUG oslo_concurrency.lockutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:05:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
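[The report-client record above dumps the provider inventory placement sees for this host: VCPU 8 with a 16.0 allocation ratio, MEMORY_MB 15738 with 512 reserved, DISK_GB 41 with 1 reserved (the 41 GiB matching the Ceph capacity in the pgmap lines). Placement treats (total - reserved) * allocation_ratio as the schedulable capacity, so this host advertises 128 VCPU, 15226 MB of RAM and 40 GB of disk. A quick check with the figures from the log; min_unit, max_unit and step_size are omitted for brevity.]

    # Inventory as logged by nova.scheduler.client.report.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
    }

    for rc, inv in inventory.items():
        # Placement's effective capacity formula.
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0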
Dec 5 05:05:54 localhost nova_compute[280228]: 2025-12-05 10:05:54.111 280232 INFO nova.scheduler.client.report [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Deleted allocations for instance 46a0f7d8-3474-49a0-9e40-e9ee2de6864c#033[00m Dec 5 05:05:54 localhost nova_compute[280228]: 2025-12-05 10:05:54.182 280232 DEBUG oslo_concurrency.lockutils [None req-0560c3aa-446e-4c38-9e4b-75688af84bf7 9f7345c9119f469e9da2826f2afea07b c5f7af554c09472ab3fd27dd1c35c543 - - default default] Lock "46a0f7d8-3474-49a0-9e40-e9ee2de6864c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:05:54 localhost podman[310990]: 2025-12-05 10:05:54.211719311 +0000 UTC m=+0.101251494 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.vendor=CentOS) Dec 5 05:05:54 localhost podman[310990]: 2025-12-05 10:05:54.250225081 +0000 UTC m=+0.139757184 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart':
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 5 05:05:54 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:05:54 localhost podman[310991]: 2025-12-05 10:05:54.250655994 +0000 UTC m=+0.133515013 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm) Dec 5 05:05:54 localhost podman[310991]: 2025-12-05 10:05:54.330838612 +0000 UTC m=+0.213697581 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350) Dec 5 05:05:54 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
Dec 5 05:05:55 localhost ovn_controller[153000]: 2025-12-05T10:05:55Z|00153|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:05:55 localhost nova_compute[280228]: 2025-12-05 10:05:55.073 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 226 MiB data, 881 MiB used, 41 GiB / 42 GiB avail; 6.2 MiB/s rd, 4.8 MiB/s wr, 212 op/s Dec 5 05:05:55 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:55.237 2 INFO neutron.agent.securitygroups_rpc [req-24e84f8d-6e8f-4557-9205-54b3029f5245 req-092162e9-9b4d-4423-bd91-0f1f4f4310ec 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['b94edc62-bf5e-4c16-aafe-b39cd05a1a10']#033[00m Dec 5 05:05:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:05:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e106 do_prune osdmap full prune enabled Dec 5 05:05:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e107 e107: 6 total, 6 up, 6 in Dec 5 05:05:55 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e107: 6 total, 6 up, 6 in Dec 5 05:05:56 localhost nova_compute[280228]: 2025-12-05 10:05:56.694 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:05:57 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:05:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:05:57 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:05:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:05:57 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:05:57 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 812786f7-fca3-404f-b591-9ebce6c22205 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:05:57 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 812786f7-fca3-404f-b591-9ebce6c22205 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:05:57 localhost ceph-mgr[286454]: [progress INFO root] Completed event 812786f7-fca3-404f-b591-9ebce6c22205 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:05:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:05:57 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:05:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail; 8.1 MiB/s rd, 5.8 MiB/s wr, 247 op/s Dec 5 05:05:57 localhost openstack_network_exporter[241668]: ERROR 10:05:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:05:57 localhost openstack_network_exporter[241668]: ERROR 10:05:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:05:57 localhost openstack_network_exporter[241668]: ERROR 10:05:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:05:57 localhost openstack_network_exporter[241668]: ERROR 10:05:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:05:57 localhost openstack_network_exporter[241668]: Dec 5 05:05:57 localhost openstack_network_exporter[241668]: ERROR 10:05:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:05:57 localhost openstack_network_exporter[241668]: Dec 5 05:05:57 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:05:57 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:05:57 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:57.573 2 INFO neutron.agent.securitygroups_rpc [req-025a97c7-a464-4a3a-86c1-61297e119814 req-de42a876-d25d-468b-a0f3-d33f17b567b7 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['32984ea2-4ec9-4d1b-aa2d-d8f6901ec630']#033[00m Dec 5 05:05:58 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:58.475 2 INFO neutron.agent.securitygroups_rpc [req-28b1e81a-aa3a-4892-8d38-c4574c829b60 req-fdb05b8b-d1da-458f-885e-126d06eb03b9 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['32984ea2-4ec9-4d1b-aa2d-d8f6901ec630']#033[00m Dec 5 05:05:58 localhost nova_compute[280228]: 2025-12-05 10:05:58.790 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:05:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail; 6.8 MiB/s rd, 4.9 MiB/s wr, 206 op/s Dec 5 05:05:59 localhost neutron_sriov_agent[254996]: 2025-12-05 10:05:59.505 2 INFO neutron.agent.securitygroups_rpc [req-41735c34-10dd-4511-8a89-ab34c2aff62e req-23ce0828-96f4-4e84-afaa-2db1869af464 631dd2c0d11840bdbd27f1582d85d8f8 784b8d7dafc84eb8ac5fe2c56cc5f693 - - default default] Security group rule updated ['32984ea2-4ec9-4d1b-aa2d-d8f6901ec630']#033[00m Dec 5 05:06:00 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:06:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:06:00 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:06:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e107 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:06:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 158 op/s Dec 5 05:06:01 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:06:01 localhost nova_compute[280228]: 2025-12-05 10:06:01.697 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail; 1.1 MiB/s rd, 1.4 KiB/s wr, 67 op/s Dec 5 05:06:03 localhost dnsmasq[309931]: read /var/lib/neutron/dhcp/6889f115-bc05-4b19-b48d-aa4fe2a1d137/addn_hosts - 0 addresses Dec 5 05:06:03 localhost dnsmasq-dhcp[309931]: read /var/lib/neutron/dhcp/6889f115-bc05-4b19-b48d-aa4fe2a1d137/host Dec 5 05:06:03 localhost dnsmasq-dhcp[309931]: read /var/lib/neutron/dhcp/6889f115-bc05-4b19-b48d-aa4fe2a1d137/opts Dec 5 05:06:03 localhost podman[311131]: 2025-12-05 10:06:03.783689957 +0000 UTC m=+0.059169425 container kill f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889f115-bc05-4b19-b48d-aa4fe2a1d137, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 5 05:06:03 localhost nova_compute[280228]: 2025-12-05 10:06:03.792 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:03.912 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:06:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:03.912 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:06:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:03.912 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:06:03 localhost ovn_controller[153000]: 2025-12-05T10:06:03Z|00154|binding|INFO|Releasing lport b62e996e-07b4-4ca0-8bed-ea9631c1cb21 from this chassis (sb_readonly=0) Dec 5 05:06:03 localhost ovn_controller[153000]: 2025-12-05T10:06:03Z|00155|binding|INFO|Setting lport b62e996e-07b4-4ca0-8bed-ea9631c1cb21 down in Southbound Dec 5 05:06:03 localhost nova_compute[280228]: 2025-12-05 10:06:03.968 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:03.977 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-6889f115-bc05-4b19-b48d-aa4fe2a1d137', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6889f115-bc05-4b19-b48d-aa4fe2a1d137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '784b8d7dafc84eb8ac5fe2c56cc5f693', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2361dd1e-7145-43ae-8794-cf25eabfc4ba, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b62e996e-07b4-4ca0-8bed-ea9631c1cb21) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:06:03 localhost nova_compute[280228]: 2025-12-05 10:06:03.978 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:03 localhost kernel: device tapb62e996e-07 left promiscuous mode Dec 5 05:06:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:03.980 158820 INFO neutron.agent.ovn.metadata.agent [-] Port b62e996e-07b4-4ca0-8bed-ea9631c1cb21 in datapath 6889f115-bc05-4b19-b48d-aa4fe2a1d137 unbound from our chassis#033[00m Dec 5 05:06:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:03.982 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6889f115-bc05-4b19-b48d-aa4fe2a1d137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:06:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:03.983 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[3b9fffb0-17f3-424d-ae8d-db0e608022d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:06:03 localhost nova_compute[280228]: 2025-12-05 10:06:03.991 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:05 localhost ovn_controller[153000]: 2025-12-05T10:06:05Z|00156|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:06:05 localhost nova_compute[280228]: 2025-12-05 10:06:05.111 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail; 1.1 MiB/s rd, 1.4 KiB/s wr, 67 op/s Dec 5 05:06:05 localhost ceph-mon[292820]: 
mon.np0005546419@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:06:05 localhost dnsmasq[309931]: exiting on receipt of SIGTERM Dec 5 05:06:05 localhost systemd[1]: libpod-f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428.scope: Deactivated successfully. Dec 5 05:06:05 localhost podman[311169]: 2025-12-05 10:06:05.637923657 +0000 UTC m=+0.062032612 container kill f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889f115-bc05-4b19-b48d-aa4fe2a1d137, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 5 05:06:05 localhost podman[311181]: 2025-12-05 10:06:05.700956528 +0000 UTC m=+0.052519120 container died f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889f115-bc05-4b19-b48d-aa4fe2a1d137, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 5 05:06:05 localhost podman[311181]: 2025-12-05 10:06:05.732857147 +0000 UTC m=+0.084419689 container cleanup f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889f115-bc05-4b19-b48d-aa4fe2a1d137, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:06:05 localhost systemd[1]: libpod-conmon-f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428.scope: Deactivated successfully. 
Dec 5 05:06:05 localhost podman[311188]: 2025-12-05 10:06:05.753890291 +0000 UTC m=+0.092338221 container remove f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6889f115-bc05-4b19-b48d-aa4fe2a1d137, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 05:06:05 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:05.783 261902 INFO neutron.agent.dhcp.agent [None req-ea544756-68a3-48f8-b94e-e30b1817a841 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:06:05 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:05.893 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:06:06 localhost systemd[1]: var-lib-containers-storage-overlay-fe3337e7002dc0679805eb9951900ec135fdd2582d7032fc02be99e09f7b8082-merged.mount: Deactivated successfully. Dec 5 05:06:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f4d2860fe4f855ab0b16d28aa3f4d08d7d6e1965b7a6d87211fe6e258164d428-userdata-shm.mount: Deactivated successfully. Dec 5 05:06:06 localhost systemd[1]: run-netns-qdhcp\x2d6889f115\x2dbc05\x2d4b19\x2db48d\x2daa4fe2a1d137.mount: Deactivated successfully. Dec 5 05:06:06 localhost nova_compute[280228]: 2025-12-05 10:06:06.700 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail; 933 KiB/s rd, 1.2 KiB/s wr, 57 op/s Dec 5 05:06:07 localhost nova_compute[280228]: 2025-12-05 10:06:07.523 280232 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 5 05:06:07 localhost nova_compute[280228]: 2025-12-05 10:06:07.524 280232 INFO nova.compute.manager [-] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] VM Stopped (Lifecycle Event)#033[00m Dec 5 05:06:07 localhost nova_compute[280228]: 2025-12-05 10:06:07.549 280232 DEBUG nova.compute.manager [None req-debcfd2b-e282-45a5-a0be-c405dc5e4a96 - - - - - -] [instance: 46a0f7d8-3474-49a0-9e40-e9ee2de6864c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 5 05:06:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:06:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:06:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
Dec 5 05:06:08 localhost podman[311212]: 2025-12-05 10:06:08.207935962 +0000 UTC m=+0.090222946 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:06:08 localhost podman[311212]: 2025-12-05 10:06:08.220334903 +0000 UTC m=+0.102621877 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:06:08 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
Dec 5 05:06:08 localhost podman[311214]: 2025-12-05 10:06:08.312719204 +0000 UTC m=+0.188405355 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 5 05:06:08 localhost podman[311214]: 2025-12-05 10:06:08.356054752 +0000 UTC m=+0.231740873 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 5 05:06:08 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:06:08 localhost podman[311213]: 2025-12-05 10:06:08.36119357 +0000 UTC m=+0.239875073 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 5 05:06:08 localhost podman[311213]: 2025-12-05 10:06:08.440935064 +0000 UTC m=+0.319616527 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 5 05:06:08 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:06:08 localhost nova_compute[280228]: 2025-12-05 10:06:08.828 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e107 do_prune osdmap full prune enabled Dec 5 05:06:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e108 e108: 6 total, 6 up, 6 in Dec 5 05:06:08 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e108: 6 total, 6 up, 6 in Dec 5 05:06:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 145 MiB data, 741 MiB used, 41 GiB / 42 GiB avail Dec 5 05:06:09 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 05:06:09 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 2975 writes, 25K keys, 2975 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.08 MB/s
Cumulative WAL: 2975 writes, 2975 syncs, 1.00 writes per sync, written: 0.04 GB, 0.08 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2975 writes, 25K keys, 2975 commit groups, 1.0 writes per commit group, ingest: 45.87 MB, 0.08 MB/s
Interval WAL: 2975 writes, 2975 syncs, 1.00 writes per sync, written: 0.04 GB, 0.08 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 140.2 0.24 0.08 11 0.022 0 0 0.0 0.0
 L6 1/0 17.00 MB 0.0 0.2 0.0 0.1 0.2 0.0 0.0 5.0 150.9 137.2 1.23 0.42 10 0.123 116K 5037 0.0 0.0
 Sum 1/0 17.00 MB 0.0 0.2 0.0 0.1 0.2 0.0 0.0 6.0 126.2 137.7 1.47 0.50 21 0.070 116K 5037 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.2 0.0 0.1 0.2 0.0 0.0 6.0 126.4 137.9 1.47 0.50 20 0.074 116K 5037 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low 0/0 0.00 KB 0.0 0.2 0.0 0.1 0.2 0.0 0.0 0.0 150.9 137.2 1.23 0.42 10 0.123 116K 5037 0.0 0.0
High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 141.5 0.24 0.08 10 0.024 0 0 0.0 0.0
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.033, interval 0.033
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.20 GB write, 0.34 MB/s write, 0.18 GB read, 0.31 MB/s read, 1.5 seconds
Interval compaction: 0.20 GB write, 0.34 MB/s write, 0.18 GB read, 0.31 MB/s read, 1.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56443b711350#2 capacity: 308.00 MB usage: 31.63 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000326 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2020,30.87 MB,10.0213%) FilterBlock(21,345.98 KB,0.1097%) IndexBlock(21,440.55 KB,0.139682%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec 5 05:06:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e108 do_prune osdmap full prune enabled Dec 5 05:06:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e109 e109: 6 total, 6 up, 6 in Dec 5 05:06:09 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e109: 6 total, 6 up, 6 in Dec 5 05:06:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:06:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e109 do_prune osdmap full prune enabled Dec 5 05:06:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e110 e110: 6 total, 6 up, 6 in Dec 5 05:06:10 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e110: 6 total, 6 up, 6 in Dec 5 05:06:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 6.2 KiB/s wr, 47 op/s Dec 5 05:06:11 localhost nova_compute[280228]: 2025-12-05 10:06:11.702 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e110 do_prune osdmap full prune enabled Dec 5 05:06:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e111 e111: 6 total, 6 up, 6 in Dec 5 05:06:11 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e111: 6 total, 6 up, 6 in Dec 5 05:06:12 localhost nova_compute[280228]: 2025-12-05 10:06:12.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:06:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e111 do_prune osdmap full prune enabled Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05
10:06:12.952 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.953 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 05:06:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e112 e112: 6 total, 6 up, 6 in Dec 5 05:06:12 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e112: 6 total, 6 up, 6 in Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.983 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.984 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4eb775b0-d039-4c3f-95cb-8ada3b0d3dd8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:06:12.953283', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '07996956-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.127586205, 'message_signature': '785ebaacb02aefb3d44ae0a1db176cf44bd085881e64610041dbeb316898e037'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:06:12.953283', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '07997dce-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.127586205, 'message_signature': '38ad7c2d189896e2dacba04ae352cbc07f320834321c222700a38a3888faf20d'}]}, 'timestamp': '2025-12-05 10:06:12.984789', '_unique_id': '6e5e715d445142d0b3f0cb4deabaaf3d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.986 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.987 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 05:06:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:12.999 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.000 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0e055e3b-c640-4f2c-82bd-b12d73b87286', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:06:12.988042', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '079bd4ac-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.162371362, 'message_signature': '947252dedc7c1dcc5b94bbfcd9856c900ded3bc6a04a068ca7e0f2281d27ccf3'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:06:12.988042', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '079bf19e-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.162371362, 'message_signature': '41db89f4e84265d49b30fa99f79d724963a74049d02684f71bce64be479f605f'}]}, 'timestamp': '2025-12-05 10:06:13.001044', '_unique_id': 'e4b6db22018643aa97e95c47fa950f3e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.002 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.003 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.003 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.004 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.004 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.004 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '798e6e36-346c-475e-ad78-2a60a39929db', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:06:13.004206', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '079c8884-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.162371362, 'message_signature': 'cae5936a1bbb226893a483b389c466d9408a449754beeb11169101d10371e006'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:06:13.004206', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '079c99b4-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.162371362, 'message_signature': '4b0f852400f6317487d6cb6cf29e0c603b2ba0f1f31d0a9f90b1d6df17d3799f'}]}, 'timestamp': '2025-12-05 10:06:13.005201', '_unique_id': '8f8adb8fe437497abff13150fcddb97e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.006 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.007 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.011 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd25fd2c9-fafd-4d6f-b9b9-01d90386e3cd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:06:13.007804', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '079d9b2a-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.182125867, 'message_signature': '4c99491966ea9245d2c56f814d6f28126d58b797d9d7cf31e42ed2bc619e8ab8'}]}, 'timestamp': '2025-12-05 10:06:13.011789', '_unique_id': '50da6818a52a4ed5b70f8e01545793f9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.013 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.014 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.014 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 15620000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fcbf0b9a-73bd-48e4-94a1-50a3d53b308b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15620000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:06:13.015077', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '07a088c6-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.204125162, 'message_signature': '3be9f15705dde605c60f11e5b3b72ab5a74c981d689fce0bdf8c1799a8f80710'}]}, 'timestamp': '2025-12-05 10:06:13.031089', '_unique_id': '900e1f8177a9496b98feab79f2a62c3a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.032 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.033 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.034 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.034 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd58863ce-26a4-4b8a-830d-36271f620dd9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:06:13.034075', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '07a11b74-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.127586205, 'message_signature': 'fcff9a14bcd7f23200c87252e47c9e6fbb0fc0423af7a555ead88916d1aca565'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:06:13.034075', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '07a12e98-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.127586205, 'message_signature': 'f2c5a110c421e61253f36aa399cae79de3a7394e5602d570b514d28338b7fa56'}]}, 'timestamp': '2025-12-05 10:06:13.035178', '_unique_id': '5e86c3beefc9429394f6e04f50a09308'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.036 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.037 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.037 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.038 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.039 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '5a7cf2d8-8b7d-4902-9e0b-c19b3f1a9736', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:06:13.037555', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '07a19c66-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.127586205, 'message_signature': '1bc94be4c18366417f1e005e322da7bce4565b531d9ba433e8f88aef2d78ea62'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:06:13.037555', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '07a1ac24-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.127586205, 'message_signature': '151e9973ac14d4d71230c38ceca74d11d5faedecb494054bd255aa0abbfe1fbe'}]}, 'timestamp': '2025-12-05 10:06:13.038418', '_unique_id': 'f1fdc80188934691acfb2b74f2d9ee4c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.040 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.041 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.041 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.043 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'be3509c8-da4b-47c9-a5ef-942caf177115', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:06:13.040996', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '07a227da-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.127586205, 'message_signature': '3c6dedf81df13141eb8ecf4c6c3ed45db54cf8d80565ff49bfc11f703bd77733'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:06:13.040996', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '07a23a18-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.127586205, 'message_signature': '3031e0acb8f66abcddd8509847fd9c952346ccc8052a7edc7364cd4d9318d532'}]}, 'timestamp': '2025-12-05 10:06:13.042024', '_unique_id': '96f2393b06524812904513cf165c96d8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.044 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.044 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.046 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '258b0a23-508e-48af-bd41-d361d4675162', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:06:13.044613', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '07a2b466-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.182125867, 'message_signature': 'af09ebe524202c2831865d8dcaf400dcc919955f15b1bb6d79f6a594b2274b24'}]}, 'timestamp': '2025-12-05 10:06:13.045311', '_unique_id': '64d2a14ff14e4d50ab68b0a5fe11f652'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.047 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.048 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.049 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '73ab2ae1-cb0c-4997-a8a3-bd3f03d0d51b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:06:13.048009', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '07a3386e-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.182125867, 'message_signature': '32e09037d68535b179fb2132f7d80b02934b0722e098d72bcdca84ebb53c0d89'}]}, 'timestamp': '2025-12-05 10:06:13.048573', '_unique_id': '714a8fc8e48d49a98a1502f3ae791395'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.050 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.050 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.052 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.052 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c79d6ced-ce8b-444e-af94-4636adcb01a4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:06:13.050946', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '07a3a790-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.204125162, 'message_signature': 'a1f82bd3415c54f80a5d237901e3a7e4f5b9e58cf8c7faa226e3c8f220da3038'}]}, 'timestamp': '2025-12-05 10:06:13.051530', '_unique_id': 'e577ad11bd2147e5baacb11fdcffd249'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.053 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.054 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.054 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
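The traceback above is the complete failure path: the oslo.messaging notifier asks kombu for a RabbitMQ connection, the underlying amqp socket connect() is refused, and kombu re-raises it as OperationalError. [Errno 111] (ECONNREFUSED) means the host answered but nothing was listening on the port, so the first check from this compute node is plain TCP reachability of the broker. A minimal sketch; the broker address below is an assumption, since the transport_url is not visible in this log:

    # Reproduce the socket-level failure outside the agent.
    # BROKER is hypothetical -- read the real host/port from the
    # transport_url in ceilometer.conf; 5672 is only the conventional
    # AMQP port.
    import socket

    BROKER = ("controller.example.com", 5672)  # assumed address

    try:
        with socket.create_connection(BROKER, timeout=5):
            print("TCP connect to %s:%d succeeded" % BROKER)
    except OSError as exc:
        # Errno 111 here is the same condition as the
        # ConnectionRefusedError in the traceback: the host answered,
        # but nothing is accepting connections on that port.
        print("TCP connect failed:", exc)

If this probe is refused as well, the problem is on the broker side (service down, or bound to a different interface), not in the agent.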
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.056 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ec8b0e06-c82e-4e72-ae06-4891f2fe7db0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:06:13.054113', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '07a42594-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.127586205, 'message_signature': 'bbd1c4c6e5729def7923e91412a62c20cfe72a4017c68eb39c7dc7d15d19d681'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:06:13.054113', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '07a4380e-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.127586205, 'message_signature': 'c2302ba356150c259282b99ad5f9d2e596c566feb4e5842089396d85256d7032'}]}, 'timestamp': '2025-12-05 10:06:13.055084', '_unique_id': '7cebbdd7f809499293122a5dd60e08b8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.057 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.057 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.059 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5ca61417-866f-4d07-a1af-91187e1b7dbe', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:06:13.057590', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '07a4ac44-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.182125867, 'message_signature': '4ff9dfe640ead95e8e4bd91d52d9d3525a351facd942c6a3d7b4485c978a74a4'}]}, 'timestamp': '2025-12-05 10:06:13.058093', '_unique_id': '19de17d2fb8741caa4ba86210a7a0652'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.060 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.060 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
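Note that each failed record above still carries the complete samples in its Payload=..., printed as a Python dict literal (single quotes, None values), not JSON. That makes the dropped samples recoverable from the journal with ast.literal_eval rather than json.loads. A sketch, assuming the records have been exported one per line to a hypothetical file:

    # Salvage samples from "Could not send notification" records.
    # The file name is hypothetical; export the journal however you like.
    import ast
    import re

    PAYLOAD_RE = re.compile(
        r"Payload=(\{.*\}): kombu\.exceptions\.OperationalError")

    def dropped_samples(path="ceilometer-agent-compute.log"):
        with open(path) as fh:
            for line in fh:
                m = PAYLOAD_RE.search(line)
                if m is None:
                    continue
                # The payload is a dict repr, so literal_eval, not json.loads.
                notification = ast.literal_eval(m.group(1))
                yield from notification["payload"]["samples"]

    for s in dropped_samples():
        print(s["timestamp"], s["counter_name"], s["counter_volume"])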
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.062 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a894f9d6-3b2b-4c0a-9621-a202a4b1d602', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:06:13.060737', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '07a5275a-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.182125867, 'message_signature': 'e65f53285b46cacde3db83ee3c2c86e6401b1c206129d69286e381a3b85a8296'}]}, 'timestamp': '2025-12-05 10:06:13.061513', '_unique_id': '567cc3157ae64f459f24ee7a22375fc1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.063 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.063 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Payload={'message_id': 'bebea214-444b-44e4-bd40-a35f456456c9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:06:13.063318', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '07a58aa6-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.182125867, 'message_signature': 'd462aa8cd5e155b846f1c63d4a3c78771031e16e6192fff102539a2396dd7bdd'}]}, 'timestamp': '2025-12-05 10:06:13.063705', '_unique_id': '9af445faa80343fbab34c9168ac0173b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:06:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:06:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.064 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.065 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.065 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
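The chain that just completed repeats identically for every failed notification in this section: oslo.messaging's notifier asks its connection pool for a connection, impl_rabbit calls kombu's ensure_connection, kombu retries via retry_over_time, and the underlying amqp.transport._connect socket call fails with [Errno 111], which kombu re-raises as kombu.exceptions.OperationalError. A minimal sketch of that failure path follows; the broker URL is an assumption, since the log never prints the transport_url ceilometer was configured with.

# Sketch only: reproduce the kombu OperationalError seen above when the
# broker's TCP port refuses connections. The URL is an assumption; the
# log does not record ceilometer's transport_url.
import kombu

conn = kombu.Connection("amqp://guest:guest@localhost:5672//",
                        connect_timeout=2)
try:
    # Same call path as the traceback: ensure_connection ->
    # _ensure_connection -> retry_over_time -> _establish_connection.
    conn.ensure_connection(max_retries=1)
except kombu.exceptions.OperationalError as exc:
    # kombu raised ConnectionError(str(exc)) from exc, so the original
    # socket error is preserved on __cause__.
    print("broker unreachable:", exc, "cause:", repr(exc.__cause__))
finally:
    conn.release()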
Payload={'message_id': '32b14084-12ff-4bb8-a5e7-d4a3c46556c5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:06:13.065179', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '07a5d2f4-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.182125867, 'message_signature': '1d691dd9dedc8dc67cfb1cea5fad4c23b76d71e76057732703a1874fd603bc62'}]}, 'timestamp': '2025-12-05 10:06:13.065558', '_unique_id': 'c72d273c698c4299b556c7a5674a2618'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File 
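The Payload= dict above shows the shape of every notification this agent is failing to deliver: an envelope (message_id, publisher_id, event_type 'telemetry.polling', priority 'SAMPLE') wrapping a payload['samples'] list, where each sample carries counter_name, counter_type, counter_unit, counter_volume, plus project/user identifiers and resource_metadata. A small sketch of pulling the counters back out, assuming the envelope has already been extracted from the log line and parsed into a dict:

# Sketch, assuming `envelope` is the Payload dict shown above after
# parsing (the log prints a Python repr, so ast.literal_eval on a
# cleanly extracted dict works; json.loads would not, because of None).
def iter_counters(envelope):
    """Yield (counter_name, counter_type, counter_unit, counter_volume,
    resource_id) for each sample in a telemetry.polling envelope."""
    for sample in envelope["payload"]["samples"]:
        yield (sample["counter_name"], sample["counter_type"],
               sample["counter_unit"], sample["counter_volume"],
               sample["resource_id"])

# For the envelope above this yields one row:
# ('network.outgoing.bytes', 'cumulative', 'B', 9770,
#  'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23')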
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:06:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:06:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.066 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.067 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.067 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'feb9f934-377d-4430-8f3d-f83037228c20', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:06:13.067006', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '07a617fa-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.127586205, 'message_signature': 'cbf7ab5334e5ce004191f7f3dd8283851fe4a71ff1430a8d4242623b46981ecb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:06:13.067006', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '07a6266e-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.127586205, 'message_signature': '8fd547d9c55937c943e24c0a842d98b272ac461372b414fb7ee5b4909017e626'}]}, 'timestamp': '2025-12-05 10:06:13.067697', '_unique_id': '3027c6e2133d4b7da28c9b1135d5fb5b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
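The two samples above are per-device cumulative counters (vda: 1272 read requests, vdb: 124). Cumulative counters only become rates when differenced across successive polls; a sketch follows, with the previous reading and the interval assumed, since the log shows a single poll.

# Sketch: deriving a per-second rate from two successive cumulative
# readings. prev and seconds are assumptions; the log records one poll
# only (vda=1272, vdb=124 read requests).
def rate_per_second(prev, curr, seconds):
    """Rate for a cumulative counter; clamp negative deltas (resets)."""
    return max(curr - prev, 0) / seconds

print(rate_per_second(1200, 1272, 300))  # 0.24 read requests/s for vda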
10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.068 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.069 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.069 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
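Each repeat of this error prints two stack traces joined by "The above exception was the direct cause of the following exception". That banner is standard CPython rendering for explicit exception chaining: kombu's raise ConnectionError(str(exc)) from exc sets __cause__ on the new OperationalError, so the original ConnectionRefusedError is preserved and both tracebacks are shown. A generic sketch of the mechanism (LibraryError is illustrative, not a kombu class):

# Sketch of `raise ... from exc`, the mechanism behind the two-part
# tracebacks in this log. LibraryError stands in for kombu's
# OperationalError; it is not a real kombu class.
class LibraryError(Exception):
    pass

try:
    try:
        raise ConnectionRefusedError(111, "Connection refused")
    except OSError as exc:
        raise LibraryError(str(exc)) from exc  # sets __cause__
except LibraryError as err:
    assert isinstance(err.__cause__, ConnectionRefusedError)
    print("chained:", repr(err.__cause__))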
Payload={'message_id': '16b64d52-223f-4656-8d28-cbb45dc35004', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:06:13.069207', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '07a66f3e-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.182125867, 'message_signature': '0236285d6c4955727a0eedda1208c09365fa8dcba80891531577bf6175ddfe1d'}]}, 'timestamp': '2025-12-05 10:06:13.069590', '_unique_id': '43e7c45bc04c4013865271d4f8d71542'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:06:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:06:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.070 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.071 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.071 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '530e63c0-ee78-4804-8b3f-bf2401826209', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:06:13.071236', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '07a6c20e-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.162371362, 'message_signature': 'e5d8e7d37836347841d7e33574fc5b29f6233d0d1ac23a953d36c59f2c1e1f7a'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:06:13.071236', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '07a6ceb6-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.162371362, 'message_signature': 'd476d6ecd2ecd52a859cbf44062feef64bfb1046db5774e7c4f83eac620f2ed8'}]}, 'timestamp': '2025-12-05 10:06:13.072015', '_unique_id': '93d18c22122b4cb78942782aef0d4a63'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 
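Unlike the read-request counters, disk.device.capacity is a gauge: a point-in-time value, here 1073741824 B for both vda and vdb. That is exactly 1 GiB, consistent with the flavor carried in the same payload (disk: 1 for the root disk, ephemeral: 1 for the second). The arithmetic, as a sketch:

# Sanity check on the gauge above: 1073741824 B == 1 GiB, matching the
# m1.small flavor's disk: 1 (vda) and ephemeral: 1 (vdb) in the payload.
capacity_bytes = 1073741824
print(capacity_bytes / 2**30)          # 1.0 (GiB)
print(capacity_bytes == 1 * 1024**3)   # True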
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging Dec 5 05:06:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.072 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.073 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.073 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.073 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'af4c7659-6255-492f-ba33-988581f3d34b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:06:13.073887', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '07a7260e-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.182125867, 'message_signature': '03554d4d9df8665e11c12e90c943a456bddaf979cc831a1c3b1371e2bbf7d2dd'}]}, 'timestamp': '2025-12-05 10:06:13.074235', '_unique_id': 'fdd5aae18a744bee9321d1c239976545'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.074 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.075 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.076 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1696ddf4-4415-4184-a916-f0cf2b8c6d6b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:06:13.076180', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '07a7845a-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12288.182125867, 'message_signature': '7552d0f546b2062f84a5105fd44aa371ec14e26be983cf09e6127e3d72150a88'}]}, 'timestamp': '2025-12-05 10:06:13.076648', '_unique_id': 'd1fe547acafb462cbe9a0546fe637675'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:06:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:06:13.077 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:06:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 13 KiB/s wr, 112 op/s
Dec 5 05:06:13 localhost nova_compute[280228]: 2025-12-05 10:06:13.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:06:13 localhost nova_compute[280228]: 2025-12-05 10:06:13.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 5 05:06:13 localhost nova_compute[280228]: 2025-12-05 10:06:13.563 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 5 05:06:13 localhost nova_compute[280228]: 2025-12-05 10:06:13.831 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 5 05:06:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e113 e113: 6 total, 6 up, 6 in
Dec 5 05:06:14 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e113: 6 total, 6 up, 6 in
Dec 5 05:06:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 05:06:14 localhost podman[311272]: 2025-12-05 10:06:14.403807173 +0000 UTC m=+0.084607938 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 5 05:06:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:06:14 localhost podman[311272]: 2025-12-05 10:06:14.413653465 +0000 UTC m=+0.094454230 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 5 05:06:14 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 05:06:14 localhost systemd[1]: tmp-crun.HruoL9.mount: Deactivated successfully.
Dec 5 05:06:14 localhost podman[311295]: 2025-12-05 10:06:14.50406838 +0000 UTC m=+0.078622585 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 5 05:06:14 localhost podman[311295]: 2025-12-05 10:06:14.567617644 +0000 UTC m=+0.142171839 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 5 05:06:14 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:06:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 13 KiB/s wr, 106 op/s
Dec 5 05:06:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:06:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:06:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:06:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:06:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:06:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:06:15 localhost nova_compute[280228]: 2025-12-05 10:06:15.232 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:06:15 localhost nova_compute[280228]: 2025-12-05 10:06:15.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:06:15 localhost nova_compute[280228]: 2025-12-05 10:06:15.526 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:06:15 localhost nova_compute[280228]: 2025-12-05 10:06:15.526 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:06:15 localhost nova_compute[280228]: 2025-12-05 10:06:15.527 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:06:15 localhost nova_compute[280228]: 2025-12-05 10:06:15.527 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 5 05:06:15 localhost nova_compute[280228]: 2025-12-05 10:06:15.528 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 05:06:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:06:15 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1947143728' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:06:15 localhost nova_compute[280228]: 2025-12-05 10:06:15.988 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 05:06:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 5 05:06:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e114 e114: 6 total, 6 up, 6 in
Dec 5 05:06:16 localhost nova_compute[280228]: 2025-12-05 10:06:16.058 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 05:06:16 localhost nova_compute[280228]: 2025-12-05 10:06:16.059 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 05:06:16 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e114: 6 total, 6 up, 6 in
Dec 5 05:06:16 localhost nova_compute[280228]: 2025-12-05 10:06:16.288 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 5 05:06:16 localhost nova_compute[280228]: 2025-12-05 10:06:16.291 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11228MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 5 05:06:16 localhost nova_compute[280228]: 2025-12-05 10:06:16.291 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:06:16 localhost nova_compute[280228]: 2025-12-05 10:06:16.293 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:06:16 localhost nova_compute[280228]: 2025-12-05 10:06:16.705 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 18 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 149 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 132 KiB/s rd, 20 KiB/s wr, 187 op/s
Dec 5 05:06:18 localhost nova_compute[280228]: 2025-12-05 10:06:18.665 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 5 05:06:18 localhost nova_compute[280228]: 2025-12-05 10:06:18.666 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 5 05:06:18 localhost nova_compute[280228]: 2025-12-05 10:06:18.666 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 5 05:06:18 localhost nova_compute[280228]: 2025-12-05 10:06:18.833 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:18 localhost nova_compute[280228]: 2025-12-05 10:06:18.943 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 05:06:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 18 active+clean+snaptrim_wait, 10 active+clean+snaptrim, 149 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 112 KiB/s rd, 17 KiB/s wr, 158 op/s
Dec 5 05:06:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:06:19 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/139994625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:06:19 localhost nova_compute[280228]: 2025-12-05 10:06:19.407 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 05:06:19 localhost nova_compute[280228]: 2025-12-05 10:06:19.416 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 5 05:06:19 localhost nova_compute[280228]: 2025-12-05 10:06:19.446 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 5 05:06:19 localhost nova_compute[280228]: 2025-12-05 10:06:19.468 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 5 05:06:19 localhost nova_compute[280228]: 2025-12-05 10:06:19.469 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:06:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 5 05:06:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e115 e115: 6 total, 6 up, 6 in
Dec 5 05:06:19 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e115: 6 total, 6 up, 6 in
Dec 5 05:06:19 localhost podman[239519]: time="2025-12-05T10:06:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:06:19 localhost podman[239519]: @ - - [05/Dec/2025:10:06:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1"
Dec 5 05:06:19 localhost podman[239519]: @ - - [05/Dec/2025:10:06:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19254 "" "Go-http-client/1.1"
Dec 5 05:06:20 localhost sshd[311366]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 05:06:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:06:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e115 do_prune osdmap full prune enabled
Dec 5 05:06:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e116 e116: 6 total, 6 up, 6 in
Dec 5 05:06:20 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e116: 6 total, 6 up, 6 in
Dec 5 05:06:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 173 active+clean; 145 MiB data, 742 MiB used, 41 GiB / 42 GiB avail; 177 KiB/s rd, 23 KiB/s wr, 251 op/s
Dec 5 05:06:21 localhost nova_compute[280228]: 2025-12-05 10:06:21.465 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:06:21 localhost nova_compute[280228]: 2025-12-05 10:06:21.465 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:06:21 localhost nova_compute[280228]: 2025-12-05 10:06:21.466 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:06:21 localhost nova_compute[280228]: 2025-12-05 10:06:21.466 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:06:21 localhost nova_compute[280228]: 2025-12-05 10:06:21.466 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:06:21 localhost nova_compute[280228]: 2025-12-05 10:06:21.467 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:06:21 localhost nova_compute[280228]: 2025-12-05 10:06:21.467 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 5 05:06:21 localhost nova_compute[280228]: 2025-12-05 10:06:21.709 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:22.845 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:06:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:22.846 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 5 05:06:22 localhost nova_compute[280228]: 2025-12-05 10:06:22.879 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e116 do_prune osdmap full prune enabled
Dec 5 05:06:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 176 KiB/s rd, 21 KiB/s wr, 246 op/s
Dec 5 05:06:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e117 e117: 6 total, 6 up, 6 in
Dec 5 05:06:23 localhost nova_compute[280228]: 2025-12-05 10:06:23.164 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:23 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e117: 6 total, 6 up, 6 in
Dec 5 05:06:23 localhost nova_compute[280228]: 2025-12-05 10:06:23.835 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e117 do_prune osdmap full prune enabled
Dec 5 05:06:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e118 e118: 6 total, 6 up, 6 in
Dec 5 05:06:24 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e118: 6 total, 6 up, 6 in
Dec 5 05:06:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:06:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:06:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 100 KiB/s rd, 7.7 KiB/s wr, 137 op/s
Dec 5 05:06:25 localhost podman[311368]: 2025-12-05 10:06:25.205452453 +0000 UTC m=+0.089680524 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 5 05:06:25 localhost podman[311368]: 2025-12-05 10:06:25.223624368 +0000 UTC m=+0.107852439 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:06:25 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:06:25 localhost systemd[1]: tmp-crun.mtnhWt.mount: Deactivated successfully.
Dec 5 05:06:25 localhost podman[311369]: 2025-12-05 10:06:25.314471077 +0000 UTC m=+0.195051107 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6)
Dec 5 05:06:25 localhost podman[311369]: 2025-12-05 10:06:25.327208876 +0000 UTC m=+0.207788926 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, container_name=openstack_network_exporter, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git)
Dec 5 05:06:25 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:06:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:06:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e118 do_prune osdmap full prune enabled
Dec 5 05:06:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e119 e119: 6 total, 6 up, 6 in
Dec 5 05:06:25 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e119: 6 total, 6 up, 6 in
Dec 5 05:06:26 localhost nova_compute[280228]: 2025-12-05 10:06:26.713 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:26 localhost ovn_controller[153000]: 2025-12-05T10:06:26Z|00157|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:06:26 localhost nova_compute[280228]: 2025-12-05 10:06:26.822 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:26 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:26.849 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 5 05:06:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 225 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 147 KiB/s rd, 13 MiB/s wr, 203 op/s
Dec 5 05:06:27 localhost openstack_network_exporter[241668]: ERROR 10:06:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:06:27 localhost openstack_network_exporter[241668]: ERROR 10:06:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:06:27 localhost openstack_network_exporter[241668]: ERROR 10:06:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:06:27 localhost openstack_network_exporter[241668]: ERROR 10:06:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:06:27 localhost openstack_network_exporter[241668]:
Dec 5 05:06:27 localhost openstack_network_exporter[241668]: ERROR 10:06:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:06:27 localhost openstack_network_exporter[241668]:
Dec 5 05:06:27 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e119 do_prune osdmap full prune enabled
Dec 5 05:06:27 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e120 e120: 6 total, 6 up, 6 in
Dec 5 05:06:27 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e120: 6 total, 6 up, 6 in
Dec 5 05:06:28 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:28.516 261902 INFO neutron.agent.linux.ip_lib [None req-b26928fc-ab33-43ca-96cd-549f427689cc - - - - - -] Device tapc640056b-d8 cannot be used as it has no MAC address
Dec 5 05:06:28 localhost nova_compute[280228]: 2025-12-05 10:06:28.539 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:28 localhost kernel: device tapc640056b-d8 entered promiscuous mode
Dec 5 05:06:28 localhost ovn_controller[153000]: 2025-12-05T10:06:28Z|00158|binding|INFO|Claiming lport c640056b-d8ef-4efb-9518-ecda84d9531d for this chassis.
Dec 5 05:06:28 localhost nova_compute[280228]: 2025-12-05 10:06:28.549 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:28 localhost ovn_controller[153000]: 2025-12-05T10:06:28Z|00159|binding|INFO|c640056b-d8ef-4efb-9518-ecda84d9531d: Claiming unknown
Dec 5 05:06:28 localhost NetworkManager[5960]: [1764929188.5506] manager: (tapc640056b-d8): new Generic device (/org/freedesktop/NetworkManager/Devices/30)
Dec 5 05:06:28 localhost systemd-udevd[311417]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:06:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:28.564 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b56bb63fac1149dd90331f94de840152', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3153bdaa-2350-4c83-adb1-45bb4beb199d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c640056b-d8ef-4efb-9518-ecda84d9531d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:06:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:28.567 158820 INFO neutron.agent.ovn.metadata.agent [-] Port c640056b-d8ef-4efb-9518-ecda84d9531d in datapath 90590255-dcbf-4ecb-8fcc-6e0ba44da1cd bound to our chassis
Dec 5 05:06:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e120 do_prune osdmap full prune enabled
Dec 5 05:06:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:28.570 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port 1e0b5fc3-57cf-410f-92bf-a501ce029ebc IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 5 05:06:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:28.571 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:06:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:28.572 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[7382ba13-243c-4a56-a7f4-3c748357f47b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:06:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e121 e121: 6 total, 6 up, 6 in
Dec 5 05:06:28 localhost journal[228791]: ethtool ioctl error on tapc640056b-d8: No such device
Dec 5 05:06:28 localhost ovn_controller[153000]: 2025-12-05T10:06:28Z|00160|binding|INFO|Setting lport c640056b-d8ef-4efb-9518-ecda84d9531d ovn-installed in OVS
Dec 5 05:06:28 localhost ovn_controller[153000]: 2025-12-05T10:06:28Z|00161|binding|INFO|Setting lport c640056b-d8ef-4efb-9518-ecda84d9531d up in Southbound
Dec 5 05:06:28 localhost nova_compute[280228]: 2025-12-05 10:06:28.590 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:28 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e121: 6 total, 6 up, 6 in
Dec 5 05:06:28 localhost journal[228791]: ethtool ioctl error on tapc640056b-d8: No such device
Dec 5 05:06:28 localhost journal[228791]: ethtool ioctl error on tapc640056b-d8: No such device
Dec 5 05:06:28 localhost journal[228791]: ethtool ioctl error on tapc640056b-d8: No such device
Dec 5 05:06:28 localhost journal[228791]: ethtool ioctl error on tapc640056b-d8: No such device
Dec 5 05:06:28 localhost journal[228791]: ethtool ioctl error on tapc640056b-d8: No such device
Dec 5 05:06:28 localhost journal[228791]: ethtool ioctl error on tapc640056b-d8: No such device
Dec 5 05:06:28 localhost journal[228791]: ethtool ioctl error on tapc640056b-d8: No such device
Dec 5 05:06:28 localhost nova_compute[280228]: 2025-12-05 10:06:28.620 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:28 localhost nova_compute[280228]: 2025-12-05 10:06:28.646 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:28 localhost nova_compute[280228]: 2025-12-05 10:06:28.837 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 225 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 141 KiB/s rd, 16 MiB/s wr, 198 op/s
Dec 5 05:06:29 localhost podman[311488]:
Dec 5 05:06:29 localhost podman[311488]: 2025-12-05 10:06:29.54746754 +0000 UTC m=+0.085999112 container create fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 5 05:06:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e121 do_prune osdmap full prune enabled
Dec 5 05:06:29 localhost systemd[1]: Started libpod-conmon-fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991.scope.
Dec 5 05:06:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e122 e122: 6 total, 6 up, 6 in
Dec 5 05:06:29 localhost systemd[1]: Started libcrun container.
Dec 5 05:06:29 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e122: 6 total, 6 up, 6 in
Dec 5 05:06:29 localhost podman[311488]: 2025-12-05 10:06:29.504934019 +0000 UTC m=+0.043465631 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:06:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebcce9f7689d51a10f06c9b2e0a22da85d8979132846169ddd842f06e7d89a9d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:06:29 localhost podman[311488]: 2025-12-05 10:06:29.61775795 +0000 UTC m=+0.156289512 container init fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 5 05:06:29 localhost podman[311488]: 2025-12-05 10:06:29.626216579 +0000 UTC m=+0.164748151 container start fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 05:06:29 localhost dnsmasq[311505]: started, version 2.85 cachesize 150
Dec 5 05:06:29 localhost dnsmasq[311505]: DNS service limited to local subnets
Dec 5 05:06:29 localhost dnsmasq[311505]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:06:29 localhost dnsmasq[311505]: warning: no upstream servers configured
Dec 5 05:06:29 localhost dnsmasq-dhcp[311505]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:06:29 localhost dnsmasq[311505]: read /var/lib/neutron/dhcp/90590255-dcbf-4ecb-8fcc-6e0ba44da1cd/addn_hosts - 0 addresses
Dec 5 05:06:29 localhost dnsmasq-dhcp[311505]: read /var/lib/neutron/dhcp/90590255-dcbf-4ecb-8fcc-6e0ba44da1cd/host
Dec 5 05:06:29 localhost dnsmasq-dhcp[311505]: read /var/lib/neutron/dhcp/90590255-dcbf-4ecb-8fcc-6e0ba44da1cd/opts
Dec 5 05:06:29 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:29.865 261902 INFO neutron.agent.dhcp.agent [None req-46f4b5ec-931b-45e1-a75d-85c378dcf722 - - - - - -] DHCP configuration for ports {'e716c249-278e-4ea6-9aad-fdd077b012ec'} is completed
Dec 5 05:06:30 localhost nova_compute[280228]: 2025-12-05 10:06:30.199 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:06:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e122 do_prune osdmap full prune enabled
Dec 5 05:06:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e123 e123: 6 total, 6 up, 6 in
Dec 5 05:06:30 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e123: 6 total, 6 up, 6 in
Dec 5 05:06:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 161 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 103 KiB/s rd, 8.0 MiB/s wr, 143 op/s
Dec 5 05:06:31 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:31.502 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:06:30Z, description=, device_id=8492223c-b143-46a6-9ec7-c3db3047850f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bf23bba8-fac7-4815-9663-190d300f0b9a, ip_allocation=immediate, mac_address=fa:16:3e:c4:fc:48, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:06:25Z, description=, dns_domain=, id=90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPDetailsNegativeTestJSON-2017071977-network, port_security_enabled=True, project_id=b56bb63fac1149dd90331f94de840152, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8527, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=891, status=ACTIVE, subnets=['22ded562-fbb3-4edc-9697-33ec9a5064cf'], tags=[], tenant_id=b56bb63fac1149dd90331f94de840152, updated_at=2025-12-05T10:06:26Z, vlan_transparent=None, network_id=90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, port_security_enabled=False, project_id=b56bb63fac1149dd90331f94de840152, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=922, status=DOWN, tags=[], tenant_id=b56bb63fac1149dd90331f94de840152, updated_at=2025-12-05T10:06:31Z on network 90590255-dcbf-4ecb-8fcc-6e0ba44da1cd
Dec 5 05:06:31 localhost podman[311523]: 2025-12-05 10:06:31.716032701 +0000 UTC m=+0.058903773 container kill fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 5 05:06:31 localhost dnsmasq[311505]: read /var/lib/neutron/dhcp/90590255-dcbf-4ecb-8fcc-6e0ba44da1cd/addn_hosts - 1 addresses
Dec 5 05:06:31 localhost dnsmasq-dhcp[311505]: read /var/lib/neutron/dhcp/90590255-dcbf-4ecb-8fcc-6e0ba44da1cd/host
Dec 5 05:06:31 localhost dnsmasq-dhcp[311505]: read /var/lib/neutron/dhcp/90590255-dcbf-4ecb-8fcc-6e0ba44da1cd/opts
Dec 5 05:06:31 localhost nova_compute[280228]: 2025-12-05 10:06:31.744 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:31 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:31.962 261902 INFO neutron.agent.dhcp.agent [None req-d31d08d7-dc23-4b2b-881f-95a9f1f79d57 - - - - - -] DHCP configuration for ports {'bf23bba8-fac7-4815-9663-190d300f0b9a'} is completed
Dec 5 05:06:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 5.7 MiB/s wr, 110 op/s
Dec 5 05:06:33 localhost nova_compute[280228]: 2025-12-05 10:06:33.839 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:34 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:34.728 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:06:30Z, description=, device_id=8492223c-b143-46a6-9ec7-c3db3047850f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bf23bba8-fac7-4815-9663-190d300f0b9a, ip_allocation=immediate, mac_address=fa:16:3e:c4:fc:48, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:06:25Z, description=, dns_domain=, id=90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPDetailsNegativeTestJSON-2017071977-network, port_security_enabled=True, project_id=b56bb63fac1149dd90331f94de840152, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8527, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=891, status=ACTIVE, subnets=['22ded562-fbb3-4edc-9697-33ec9a5064cf'], tags=[], tenant_id=b56bb63fac1149dd90331f94de840152, updated_at=2025-12-05T10:06:26Z, vlan_transparent=None, network_id=90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, port_security_enabled=False, project_id=b56bb63fac1149dd90331f94de840152, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=922, status=DOWN, tags=[], tenant_id=b56bb63fac1149dd90331f94de840152, updated_at=2025-12-05T10:06:31Z on network 90590255-dcbf-4ecb-8fcc-6e0ba44da1cd
Dec 5 05:06:34 localhost dnsmasq[311505]: read /var/lib/neutron/dhcp/90590255-dcbf-4ecb-8fcc-6e0ba44da1cd/addn_hosts - 1 addresses
Dec 5 05:06:34 localhost podman[311560]: 2025-12-05 10:06:34.91546619 +0000 UTC m=+0.045997618 container kill fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 05:06:34 localhost dnsmasq-dhcp[311505]: read /var/lib/neutron/dhcp/90590255-dcbf-4ecb-8fcc-6e0ba44da1cd/host
Dec 5 05:06:34 localhost dnsmasq-dhcp[311505]: read /var/lib/neutron/dhcp/90590255-dcbf-4ecb-8fcc-6e0ba44da1cd/opts
Dec 5 05:06:35 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:35.118 261902 INFO neutron.agent.dhcp.agent [None req-abb001f2-152a-47ab-9003-92638c134fed - - - - - -] DHCP configuration for ports {'bf23bba8-fac7-4815-9663-190d300f0b9a'} is completed
Dec 5 05:06:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 4.9 MiB/s wr, 94 op/s
Dec 5 05:06:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:06:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e123 do_prune osdmap full prune enabled
Dec 5 05:06:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 e124: 6 total, 6 up, 6 in
Dec 5 05:06:35 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e124: 6 total, 6 up, 6 in
Dec 5 05:06:36 localhost nova_compute[280228]: 2025-12-05 10:06:36.746 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 4.2 MiB/s wr, 82 op/s
Dec 5 05:06:38 localhost nova_compute[280228]: 2025-12-05 10:06:38.841 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:06:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:06:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:06:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 3.7 MiB/s wr, 71 op/s
Dec 5 05:06:39 localhost podman[311581]: 2025-12-05 10:06:39.205548996 +0000 UTC m=+0.087032003 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 5 05:06:39 localhost podman[311581]: 2025-12-05 10:06:39.217501572 +0000 UTC m=+0.098984559 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 05:06:39 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:06:39 localhost podman[311582]: 2025-12-05 10:06:39.264480288 +0000 UTC m=+0.140844368 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:06:39 localhost podman[311583]: 2025-12-05 10:06:39.319545653 +0000 UTC m=+0.194558121 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 5 05:06:39 localhost podman[311583]: 2025-12-05 10:06:39.328380523 +0000 UTC m=+0.203392991 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 5 05:06:39 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:06:39 localhost podman[311582]: 2025-12-05 10:06:39.344671482 +0000 UTC m=+0.221035572 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 5 05:06:39 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:06:40 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:40.313 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:06:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:06:40 localhost nova_compute[280228]: 2025-12-05 10:06:40.965 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 2.4 KiB/s rd, 511 B/s wr, 4 op/s
Dec 5 05:06:41 localhost systemd[1]: tmp-crun.fWcy3G.mount: Deactivated successfully.
Dec 5 05:06:41 localhost podman[311655]: 2025-12-05 10:06:41.376111986 +0000 UTC m=+0.073828468 container kill fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:06:41 localhost dnsmasq[311505]: read /var/lib/neutron/dhcp/90590255-dcbf-4ecb-8fcc-6e0ba44da1cd/addn_hosts - 0 addresses
Dec 5 05:06:41 localhost dnsmasq-dhcp[311505]: read /var/lib/neutron/dhcp/90590255-dcbf-4ecb-8fcc-6e0ba44da1cd/host
Dec 5 05:06:41 localhost dnsmasq-dhcp[311505]: read /var/lib/neutron/dhcp/90590255-dcbf-4ecb-8fcc-6e0ba44da1cd/opts
Dec 5 05:06:41 localhost ovn_controller[153000]: 2025-12-05T10:06:41Z|00162|binding|INFO|Releasing lport c640056b-d8ef-4efb-9518-ecda84d9531d from this chassis (sb_readonly=0)
Dec 5 05:06:41 localhost kernel: device tapc640056b-d8 left promiscuous mode
Dec 5 05:06:41 localhost nova_compute[280228]: 2025-12-05 10:06:41.533 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:41 localhost ovn_controller[153000]: 2025-12-05T10:06:41Z|00163|binding|INFO|Setting lport c640056b-d8ef-4efb-9518-ecda84d9531d down in Southbound
Dec 5 05:06:41 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:41.541 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b56bb63fac1149dd90331f94de840152', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3153bdaa-2350-4c83-adb1-45bb4beb199d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c640056b-d8ef-4efb-9518-ecda84d9531d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:06:41 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:41.544 158820 INFO neutron.agent.ovn.metadata.agent [-] Port c640056b-d8ef-4efb-9518-ecda84d9531d in datapath 90590255-dcbf-4ecb-8fcc-6e0ba44da1cd unbound from our chassis
Dec 5 05:06:41 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:41.550 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:06:41 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:41.551 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdba288-6504-40e3-8b78-df418628478b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:06:41 localhost nova_compute[280228]: 2025-12-05 10:06:41.555 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:41 localhost nova_compute[280228]: 2025-12-05 10:06:41.749 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:06:43 localhost nova_compute[280228]: 2025-12-05 10:06:43.844 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:06:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:06:45
Dec 5 05:06:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 5 05:06:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap
Dec 5 05:06:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['.mgr', 'volumes', 'manila_data', 'backups', 'images', 'manila_metadata', 'vms']
Dec 5 05:06:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes
Dec 5 05:06:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:06:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 05:06:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:06:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:06:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:06:45 localhost podman[311679]: 2025-12-05 10:06:45.199612995 +0000 UTC m=+0.085813406 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 05:06:45 localhost podman[311679]: 2025-12-05 10:06:45.208968181 +0000 UTC m=+0.095168602 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust
Dec 5 05:06:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:06:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:06:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:06:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 5 05:06:45 localhost ovn_controller[153000]: 2025-12-05T10:06:45Z|00164|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:06:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Dec 5 05:06:45 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 05:06:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:06:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:06:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:06:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:06:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:06:45 localhost nova_compute[280228]: 2025-12-05 10:06:45.262 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:06:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:06:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:06:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:06:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:06:45 localhost systemd[1]: tmp-crun.IRETJq.mount: Deactivated successfully. Dec 5 05:06:45 localhost podman[311678]: 2025-12-05 10:06:45.305297838 +0000 UTC m=+0.195280654 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0) Dec 5 05:06:45 localhost podman[311678]: 2025-12-05 10:06:45.347656044 +0000 UTC m=+0.237638930 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Dec 5 05:06:45 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:06:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:06:46 localhost podman[311742]: 2025-12-05 10:06:46.462621387 +0000 UTC m=+0.062100711 container kill fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 05:06:46 localhost dnsmasq[311505]: exiting on receipt of SIGTERM Dec 5 05:06:46 localhost systemd[1]: libpod-fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991.scope: Deactivated successfully. 
Dec 5 05:06:46 localhost nova_compute[280228]: 2025-12-05 10:06:46.505 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:46 localhost podman[311756]: 2025-12-05 10:06:46.538300402 +0000 UTC m=+0.062522804 container died fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 05:06:46 localhost podman[311756]: 2025-12-05 10:06:46.575337174 +0000 UTC m=+0.099559556 container cleanup fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 05:06:46 localhost systemd[1]: libpod-conmon-fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991.scope: Deactivated successfully. Dec 5 05:06:46 localhost podman[311758]: 2025-12-05 10:06:46.628621894 +0000 UTC m=+0.142467989 container remove fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90590255-dcbf-4ecb-8fcc-6e0ba44da1cd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:06:46 localhost nova_compute[280228]: 2025-12-05 10:06:46.752 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:06:47 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:47.187 261902 INFO neutron.agent.dhcp.agent [None req-791ff758-f9ba-4f1b-b82e-842cd943610f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:06:47 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:47.189 261902 INFO neutron.agent.dhcp.agent [None req-791ff758-f9ba-4f1b-b82e-842cd943610f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:06:47 localhost systemd[1]: var-lib-containers-storage-overlay-ebcce9f7689d51a10f06c9b2e0a22da85d8979132846169ddd842f06e7d89a9d-merged.mount: Deactivated successfully. 
Dec 5 05:06:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fb6082c421047f8a8b3800529b824dca996c04fbb04af4d9629bcf34cf906991-userdata-shm.mount: Deactivated successfully.
Dec 5 05:06:47 localhost systemd[1]: run-netns-qdhcp\x2d90590255\x2ddcbf\x2d4ecb\x2d8fcc\x2d6e0ba44da1cd.mount: Deactivated successfully.
Dec 5 05:06:47 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:47.741 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:06:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:48.730 261902 INFO neutron.agent.linux.ip_lib [None req-97c9fe31-e8ea-4dbe-82dd-f8b6353ef0a3 - - - - - -] Device tap8f89be47-4f cannot be used as it has no MAC address#033[00m
Dec 5 05:06:48 localhost nova_compute[280228]: 2025-12-05 10:06:48.752 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:48 localhost kernel: device tap8f89be47-4f entered promiscuous mode
Dec 5 05:06:48 localhost NetworkManager[5960]: [1764929208.7628] manager: (tap8f89be47-4f): new Generic device (/org/freedesktop/NetworkManager/Devices/31)
Dec 5 05:06:48 localhost systemd-udevd[311796]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:06:48 localhost nova_compute[280228]: 2025-12-05 10:06:48.763 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:48 localhost ovn_controller[153000]: 2025-12-05T10:06:48Z|00165|binding|INFO|Claiming lport 8f89be47-4f84-45a7-9e9f-25dddaefcac5 for this chassis.
Dec 5 05:06:48 localhost ovn_controller[153000]: 2025-12-05T10:06:48Z|00166|binding|INFO|8f89be47-4f84-45a7-9e9f-25dddaefcac5: Claiming unknown
Dec 5 05:06:48 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:48.786 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.242/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-1556a36f-4e50-4389-b215-7fd51744a1e6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1556a36f-4e50-4389-b215-7fd51744a1e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a87d6b02-c0f4-46f5-b440-dbd829b2c81f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8f89be47-4f84-45a7-9e9f-25dddaefcac5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:06:48 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:48.788 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 8f89be47-4f84-45a7-9e9f-25dddaefcac5 in datapath 1556a36f-4e50-4389-b215-7fd51744a1e6 bound to our chassis#033[00m
Dec 5 05:06:48 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:48.789 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1556a36f-4e50-4389-b215-7fd51744a1e6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 5 05:06:48 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:48.790 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[dec7b050-d14d-4d80-a9b0-785fc3a89952]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:06:48 localhost journal[228791]: ethtool ioctl error on tap8f89be47-4f: No such device
Dec 5 05:06:48 localhost ovn_controller[153000]: 2025-12-05T10:06:48Z|00167|binding|INFO|Setting lport 8f89be47-4f84-45a7-9e9f-25dddaefcac5 ovn-installed in OVS
Dec 5 05:06:48 localhost ovn_controller[153000]: 2025-12-05T10:06:48Z|00168|binding|INFO|Setting lport 8f89be47-4f84-45a7-9e9f-25dddaefcac5 up in Southbound
Dec 5 05:06:48 localhost nova_compute[280228]: 2025-12-05 10:06:48.798 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:48 localhost journal[228791]: ethtool ioctl error on tap8f89be47-4f: No such device
Dec 5 05:06:48 localhost journal[228791]: ethtool ioctl error on tap8f89be47-4f: No such device
Dec 5 05:06:48 localhost journal[228791]: ethtool ioctl error on tap8f89be47-4f: No such device
Dec 5 05:06:48 localhost journal[228791]: ethtool ioctl error on tap8f89be47-4f: No such device
Dec 5 05:06:48 localhost journal[228791]: ethtool ioctl error on tap8f89be47-4f: No such device
Dec 5 05:06:48 localhost journal[228791]: ethtool ioctl error on tap8f89be47-4f: No such device
Dec 5 05:06:48 localhost journal[228791]: ethtool ioctl error on tap8f89be47-4f: No such device
Dec 5 05:06:48 localhost nova_compute[280228]: 2025-12-05 10:06:48.844 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:48 localhost nova_compute[280228]: 2025-12-05 10:06:48.847 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:48 localhost nova_compute[280228]: 2025-12-05 10:06:48.872 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:06:49 localhost podman[311867]:
Dec 5 05:06:49 localhost podman[311867]: 2025-12-05 10:06:49.770234646 +0000 UTC m=+0.073425758 container create 79946442c11d03f56f5ca6e1ebdd82ba04515958384402786740c4613537c48f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1556a36f-4e50-4389-b215-7fd51744a1e6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 5 05:06:49 localhost systemd[1]: Started libpod-conmon-79946442c11d03f56f5ca6e1ebdd82ba04515958384402786740c4613537c48f.scope.
Dec 5 05:06:49 localhost podman[311867]: 2025-12-05 10:06:49.730110488 +0000 UTC m=+0.033301650 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:06:49 localhost systemd[1]: Started libcrun container.
Dec 5 05:06:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98da5e9431ad66f21bd67c9140372fb22820915c81d27abb4d0c60666a148dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:06:49 localhost podman[311867]: 2025-12-05 10:06:49.848865281 +0000 UTC m=+0.152056403 container init 79946442c11d03f56f5ca6e1ebdd82ba04515958384402786740c4613537c48f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1556a36f-4e50-4389-b215-7fd51744a1e6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:06:49 localhost podman[311867]: 2025-12-05 10:06:49.858550777 +0000 UTC m=+0.161741889 container start 79946442c11d03f56f5ca6e1ebdd82ba04515958384402786740c4613537c48f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1556a36f-4e50-4389-b215-7fd51744a1e6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:06:49 localhost dnsmasq[311885]: started, version 2.85 cachesize 150
Dec 5 05:06:49 localhost dnsmasq[311885]: DNS service limited to local subnets
Dec 5 05:06:49 localhost dnsmasq[311885]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:06:49 localhost dnsmasq[311885]: warning: no upstream servers configured
Dec 5 05:06:49 localhost dnsmasq-dhcp[311885]: DHCP, static leases only on 10.100.255.240, lease time 1d
Dec 5 05:06:49 localhost dnsmasq[311885]: read /var/lib/neutron/dhcp/1556a36f-4e50-4389-b215-7fd51744a1e6/addn_hosts - 0 addresses
Dec 5 05:06:49 localhost dnsmasq-dhcp[311885]: read /var/lib/neutron/dhcp/1556a36f-4e50-4389-b215-7fd51744a1e6/host
Dec 5 05:06:49 localhost dnsmasq-dhcp[311885]: read /var/lib/neutron/dhcp/1556a36f-4e50-4389-b215-7fd51744a1e6/opts
Dec 5 05:06:49 localhost podman[239519]: time="2025-12-05T10:06:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:06:49 localhost podman[239519]: @ - - [05/Dec/2025:10:06:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157939 "" "Go-http-client/1.1"
Dec 5 05:06:49 localhost podman[239519]: @ - - [05/Dec/2025:10:06:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19723 "" "Go-http-client/1.1"
Dec 5 05:06:50 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:50.151 261902 INFO neutron.agent.dhcp.agent [None req-7f34f846-c9e6-47b9-b39c-89b1dee1766b - - - - - -] DHCP configuration for ports {'87ed8aff-abf5-4642-8d23-6d49786760cb'} is completed#033[00m
Dec 5 05:06:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:06:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:06:51 localhost nova_compute[280228]: 2025-12-05 10:06:51.784 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:06:53 localhost nova_compute[280228]: 2025-12-05 10:06:53.847 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:54 localhost nova_compute[280228]: 2025-12-05 10:06:54.596 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:55 localhost ovn_controller[153000]: 2025-12-05T10:06:55Z|00169|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:06:55 localhost nova_compute[280228]: 2025-12-05 10:06:55.148 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:06:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:06:55 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:55.781 261902 INFO neutron.agent.linux.ip_lib [None req-04248b2c-317d-47ec-8e02-e241f9e51ba5 - - - - - -] Device tap3a3ad0ab-dd cannot be used as it has no MAC address#033[00m
Dec 5 05:06:55 localhost systemd[1]: tmp-crun.2ekY2s.mount: Deactivated successfully.
Dec 5 05:06:55 localhost podman[311888]: 2025-12-05 10:06:55.803213814 +0000 UTC m=+0.100903807 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 5 05:06:55 localhost nova_compute[280228]: 2025-12-05 10:06:55.804 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:55 localhost kernel: device tap3a3ad0ab-dd entered promiscuous mode
Dec 5 05:06:55 localhost ovn_controller[153000]: 2025-12-05T10:06:55Z|00170|binding|INFO|Claiming lport 3a3ad0ab-dd18-4dd4-8e5f-5a280d33c563 for this chassis.
Dec 5 05:06:55 localhost ovn_controller[153000]: 2025-12-05T10:06:55Z|00171|binding|INFO|3a3ad0ab-dd18-4dd4-8e5f-5a280d33c563: Claiming unknown
Dec 5 05:06:55 localhost NetworkManager[5960]: [1764929215.8127] manager: (tap3a3ad0ab-dd): new Generic device (/org/freedesktop/NetworkManager/Devices/32)
Dec 5 05:06:55 localhost nova_compute[280228]: 2025-12-05 10:06:55.812 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:55 localhost podman[311888]: 2025-12-05 10:06:55.815833531 +0000 UTC m=+0.113523534 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:06:55 localhost systemd-udevd[311924]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:06:55 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:06:55 localhost journal[228791]: ethtool ioctl error on tap3a3ad0ab-dd: No such device
Dec 5 05:06:55 localhost journal[228791]: ethtool ioctl error on tap3a3ad0ab-dd: No such device
Dec 5 05:06:55 localhost ovn_controller[153000]: 2025-12-05T10:06:55Z|00172|binding|INFO|Setting lport 3a3ad0ab-dd18-4dd4-8e5f-5a280d33c563 ovn-installed in OVS
Dec 5 05:06:55 localhost journal[228791]: ethtool ioctl error on tap3a3ad0ab-dd: No such device
Dec 5 05:06:55 localhost nova_compute[280228]: 2025-12-05 10:06:55.855 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:55 localhost journal[228791]: ethtool ioctl error on tap3a3ad0ab-dd: No such device
Dec 5 05:06:55 localhost journal[228791]: ethtool ioctl error on tap3a3ad0ab-dd: No such device
Dec 5 05:06:55 localhost journal[228791]: ethtool ioctl error on tap3a3ad0ab-dd: No such device
Dec 5 05:06:55 localhost journal[228791]: ethtool ioctl error on tap3a3ad0ab-dd: No such device
Dec 5 05:06:55 localhost journal[228791]: ethtool ioctl error on tap3a3ad0ab-dd: No such device
Dec 5 05:06:55 localhost podman[311889]: 2025-12-05 10:06:55.897623512 +0000 UTC m=+0.191560270 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc.)
Dec 5 05:06:55 localhost nova_compute[280228]: 2025-12-05 10:06:55.902 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:55 localhost nova_compute[280228]: 2025-12-05 10:06:55.925 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:55 localhost podman[311889]: 2025-12-05 10:06:55.935288794 +0000 UTC m=+0.229225542 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 5 05:06:55 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:06:56 localhost ovn_controller[153000]: 2025-12-05T10:06:56Z|00173|binding|INFO|Setting lport 3a3ad0ab-dd18-4dd4-8e5f-5a280d33c563 up in Southbound
Dec 5 05:06:56 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:56.455 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-3aaef4b1-43d4-4852-a7c5-31c64c6f9494', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3aaef4b1-43d4-4852-a7c5-31c64c6f9494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50771b4c128443a3b2365dd014698bb5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c2b6a10-e208-4277-a3fd-86f25aa6e50f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3a3ad0ab-dd18-4dd4-8e5f-5a280d33c563) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:06:56 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:56.456 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 3a3ad0ab-dd18-4dd4-8e5f-5a280d33c563 in datapath 3aaef4b1-43d4-4852-a7c5-31c64c6f9494 bound to our chassis#033[00m
Dec 5 05:06:56 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:56.458 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3aaef4b1-43d4-4852-a7c5-31c64c6f9494 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 5 05:06:56 localhost ovn_metadata_agent[158815]: 2025-12-05 10:06:56.459 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[62325896-2183-45cb-8cd5-5cbfb170f41a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:06:56 localhost systemd[1]: tmp-crun.ZYtWiA.mount: Deactivated successfully.
Dec 5 05:06:56 localhost nova_compute[280228]: 2025-12-05 10:06:56.787 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:06:56 localhost podman[312003]:
Dec 5 05:06:56 localhost podman[312003]: 2025-12-05 10:06:56.905541341 +0000 UTC m=+0.090989824 container create 3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3aaef4b1-43d4-4852-a7c5-31c64c6f9494, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:06:56 localhost systemd[1]: Started libpod-conmon-3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732.scope.
Dec 5 05:06:56 localhost podman[312003]: 2025-12-05 10:06:56.860611727 +0000 UTC m=+0.046060210 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:06:56 localhost systemd[1]: Started libcrun container.
Dec 5 05:06:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f130424f501c3aff676719ce7026616a060d3c78c79d590536ff55434aebc00/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:06:56 localhost podman[312003]: 2025-12-05 10:06:56.971513078 +0000 UTC m=+0.156961501 container init 3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3aaef4b1-43d4-4852-a7c5-31c64c6f9494, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:06:56 localhost podman[312003]: 2025-12-05 10:06:56.9780956 +0000 UTC m=+0.163544023 container start 3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3aaef4b1-43d4-4852-a7c5-31c64c6f9494, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 5 05:06:56 localhost dnsmasq[312021]: started, version 2.85 cachesize 150
Dec 5 05:06:56 localhost dnsmasq[312021]: DNS service limited to local subnets
Dec 5 05:06:56 localhost dnsmasq[312021]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:06:56 localhost dnsmasq[312021]: warning: no upstream servers configured
Dec 5 05:06:56 localhost dnsmasq-dhcp[312021]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:06:56 localhost dnsmasq[312021]: read /var/lib/neutron/dhcp/3aaef4b1-43d4-4852-a7c5-31c64c6f9494/addn_hosts - 0 addresses
Dec 5 05:06:56 localhost dnsmasq-dhcp[312021]: read /var/lib/neutron/dhcp/3aaef4b1-43d4-4852-a7c5-31c64c6f9494/host
Dec 5 05:06:56 localhost dnsmasq-dhcp[312021]: read /var/lib/neutron/dhcp/3aaef4b1-43d4-4852-a7c5-31c64c6f9494/opts
Dec 5 05:06:57 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:57.164 261902 INFO neutron.agent.dhcp.agent [None req-e710f6ab-9deb-4311-b8ec-bb883a293371 - - - - - -] DHCP configuration for ports {'de066c49-44d8-4087-9ad7-e647c5e4f217'} is completed#033[00m
Dec 5 05:06:57 localhost openstack_network_exporter[241668]: ERROR 10:06:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:06:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:06:57 localhost openstack_network_exporter[241668]: ERROR 10:06:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:06:57 localhost openstack_network_exporter[241668]: ERROR 10:06:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:06:57 localhost openstack_network_exporter[241668]: ERROR 10:06:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:06:57 localhost openstack_network_exporter[241668]:
Dec 5 05:06:57 localhost openstack_network_exporter[241668]: ERROR 10:06:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:06:57 localhost openstack_network_exporter[241668]:
Dec 5 05:06:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 05:06:57 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:06:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:57.976503) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929217976539, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1649, "num_deletes": 263, "total_data_size": 1584349, "memory_usage": 1618248, "flush_reason": "Manual Compaction"}
Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929217986660, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1549498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24470, "largest_seqno": 26118, "table_properties": {"data_size": 1542433, "index_size": 4087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15904, "raw_average_key_size": 21, "raw_value_size": 1527985, "raw_average_value_size": 2050, "num_data_blocks": 178, "num_entries": 745, "num_filter_entries": 745, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929110, "oldest_key_time": 1764929110, "file_creation_time": 1764929217, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 10184 microseconds, and 3129 cpu microseconds.
Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:57.986688) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1549498 bytes OK Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:57.986703) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:57.988368) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:57.988382) EVENT_LOG_v1 {"time_micros": 1764929217988377, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:57.988396) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1577091, prev total WAL file size 1593471, number of live WAL files 2. Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:57.989200) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end) Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1513KB)], [42(17MB)] Dec 5 05:06:57 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929217989339, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 19379291, "oldest_snapshot_seqno": -1} Dec 5 05:06:58 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:06:58 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 05:06:58 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:06:58 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 12080 keys, 16507911 bytes, temperature: kUnknown Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929218075165, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 16507911, "file_checksum": "", 
"file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16440512, "index_size": 36130, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30213, "raw_key_size": 324581, "raw_average_key_size": 26, "raw_value_size": 16236166, "raw_average_value_size": 1344, "num_data_blocks": 1365, "num_entries": 12080, "num_filter_entries": 12080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764929217, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:58.075594) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 16507911 bytes Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:58.077569) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.5 rd, 192.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 17.0 +0.0 blob) out(15.7 +0.0 blob), read-write-amplify(23.2) write-amplify(10.7) OK, records in: 12619, records dropped: 539 output_compression: NoCompression Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:58.077592) EVENT_LOG_v1 {"time_micros": 1764929218077580, "job": 24, "event": "compaction_finished", "compaction_time_micros": 85958, "compaction_time_cpu_micros": 45724, "output_level": 6, "num_output_files": 1, "total_output_size": 16507911, "num_input_records": 12619, "num_output_records": 12080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929218077952, "job": 24, "event": "table_file_deletion", "file_number": 44} Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929218080437, "job": 24, "event": "table_file_deletion", "file_number": 42} Dec 5 
05:06:58 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:57.989074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:58.080511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:58.080519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:58.080522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:58.080526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:06:58 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:06:58.080529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:06:58 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:06:58 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:06:58.312 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:06:58 localhost neutron_sriov_agent[254996]: 2025-12-05 10:06:58.503 2 INFO neutron.agent.securitygroups_rpc [None req-d5329bb2-7f39-47e8-83b6-2e3a7f24ca0c 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']#033[00m Dec 5 05:06:58 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:06:58 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:06:58 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:06:58 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:06:58 localhost nova_compute[280228]: 2025-12-05 10:06:58.852 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:06:58 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:06:58 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:06:58 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:06:58 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:06:58 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:06:58 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:06:58 localhost ceph-mgr[286454]: [progress INFO root] update: 
starting ev 0b84442d-c6d3-47d1-8091-3262ec7e18b6 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:06:58 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 0b84442d-c6d3-47d1-8091-3262ec7e18b6 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:06:58 localhost ceph-mgr[286454]: [progress INFO root] Completed event 0b84442d-c6d3-47d1-8091-3262ec7e18b6 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:06:58 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:06:58 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:06:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:06:59 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:06:59 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:07:00 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:07:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:07:00 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:07:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:07:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:07:01 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:07:01 localhost nova_compute[280228]: 2025-12-05 10:07:01.792 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:01 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:01.898 2 INFO neutron.agent.securitygroups_rpc [None req-9e30647c-bb8b-4e54-8626-2981d1fced38 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']#033[00m Dec 5 05:07:02 localhost nova_compute[280228]: 2025-12-05 10:07:02.313 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:07:03 localhost nova_compute[280228]: 2025-12-05 10:07:03.853 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:03.913 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:07:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:03.914 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:07:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:03.915 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:07:04 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:04.077 2 INFO neutron.agent.securitygroups_rpc [None req-cd05dfd1-15ee-449d-9a9d-b36249599776 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']#033[00m Dec 5 05:07:04 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:04.424 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:07:04Z, description=, device_id=bbdb7044-6821-4a79-b376-f2f9ac5dbb19, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3c93e9cd-d782-402b-8e7c-a211a7816d57, ip_allocation=immediate, mac_address=fa:16:3e:e9:15:9e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:06:53Z, description=, dns_domain=, id=3aaef4b1-43d4-4852-a7c5-31c64c6f9494, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-560565691-network, port_security_enabled=True, project_id=50771b4c128443a3b2365dd014698bb5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21345, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1058, status=ACTIVE, subnets=['36f6afd6-21a2-48ac-b6ef-6c0f81263ab6'], tags=[], tenant_id=50771b4c128443a3b2365dd014698bb5, updated_at=2025-12-05T10:06:54Z, vlan_transparent=None, network_id=3aaef4b1-43d4-4852-a7c5-31c64c6f9494, port_security_enabled=False, project_id=50771b4c128443a3b2365dd014698bb5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1089, status=DOWN, tags=[], tenant_id=50771b4c128443a3b2365dd014698bb5, updated_at=2025-12-05T10:07:04Z on network 3aaef4b1-43d4-4852-a7c5-31c64c6f9494#033[00m Dec 5 05:07:04 localhost dnsmasq[312021]: read /var/lib/neutron/dhcp/3aaef4b1-43d4-4852-a7c5-31c64c6f9494/addn_hosts - 1 addresses Dec 5 05:07:04 localhost dnsmasq-dhcp[312021]: read /var/lib/neutron/dhcp/3aaef4b1-43d4-4852-a7c5-31c64c6f9494/host Dec 5 05:07:04 localhost dnsmasq-dhcp[312021]: read /var/lib/neutron/dhcp/3aaef4b1-43d4-4852-a7c5-31c64c6f9494/opts Dec 5 05:07:04 localhost podman[312183]: 2025-12-05 10:07:04.659558751 +0000 UTC m=+0.062128003 container kill 3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-3aaef4b1-43d4-4852-a7c5-31c64c6f9494, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 05:07:04 localhost systemd[1]: tmp-crun.VafWnu.mount: Deactivated successfully. Dec 5 05:07:04 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:04.868 261902 INFO neutron.agent.dhcp.agent [None req-d03e6969-aab4-40fe-abc3-a18eebcbf29d - - - - - -] DHCP configuration for ports {'3c93e9cd-d782-402b-8e7c-a211a7816d57'} is completed#033[00m Dec 5 05:07:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v196: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:07:05 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:05.174 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:07:04Z, description=, device_id=bbdb7044-6821-4a79-b376-f2f9ac5dbb19, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3c93e9cd-d782-402b-8e7c-a211a7816d57, ip_allocation=immediate, mac_address=fa:16:3e:e9:15:9e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:06:53Z, description=, dns_domain=, id=3aaef4b1-43d4-4852-a7c5-31c64c6f9494, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-560565691-network, port_security_enabled=True, project_id=50771b4c128443a3b2365dd014698bb5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21345, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1058, status=ACTIVE, subnets=['36f6afd6-21a2-48ac-b6ef-6c0f81263ab6'], tags=[], tenant_id=50771b4c128443a3b2365dd014698bb5, updated_at=2025-12-05T10:06:54Z, vlan_transparent=None, network_id=3aaef4b1-43d4-4852-a7c5-31c64c6f9494, port_security_enabled=False, project_id=50771b4c128443a3b2365dd014698bb5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1089, status=DOWN, tags=[], tenant_id=50771b4c128443a3b2365dd014698bb5, updated_at=2025-12-05T10:07:04Z on network 3aaef4b1-43d4-4852-a7c5-31c64c6f9494#033[00m Dec 5 05:07:05 localhost dnsmasq[312021]: read /var/lib/neutron/dhcp/3aaef4b1-43d4-4852-a7c5-31c64c6f9494/addn_hosts - 1 addresses Dec 5 05:07:05 localhost dnsmasq-dhcp[312021]: read /var/lib/neutron/dhcp/3aaef4b1-43d4-4852-a7c5-31c64c6f9494/host Dec 5 05:07:05 localhost dnsmasq-dhcp[312021]: read /var/lib/neutron/dhcp/3aaef4b1-43d4-4852-a7c5-31c64c6f9494/opts Dec 5 05:07:05 localhost podman[312219]: 2025-12-05 10:07:05.383116561 +0000 UTC m=+0.061382848 container kill 3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3aaef4b1-43d4-4852-a7c5-31c64c6f9494, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 5 05:07:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:07:05 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:05.668 261902 INFO neutron.agent.dhcp.agent [None req-6f3e5c2c-cd8b-4053-9f85-77e476c7dbc2 - - - - - -] DHCP configuration for ports {'3c93e9cd-d782-402b-8e7c-a211a7816d57'} is completed#033[00m Dec 5 05:07:05 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:05.715 2 INFO neutron.agent.securitygroups_rpc [None req-bc6b99ba-244a-4145-895a-01784236c368 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']#033[00m Dec 5 05:07:05 localhost nova_compute[280228]: 2025-12-05 10:07:05.972 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:06 localhost nova_compute[280228]: 2025-12-05 10:07:06.796 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:07:07 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:07.370 2 INFO neutron.agent.securitygroups_rpc [None req-996755b2-015b-4c0b-97a6-4ea5ff6a5f09 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']#033[00m Dec 5 05:07:08 localhost nova_compute[280228]: 2025-12-05 10:07:08.857 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v198: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:07:09 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:09.479 2 INFO neutron.agent.securitygroups_rpc [None req-2b543872-dbbb-45d0-a3c9-b80170d954e4 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']#033[00m Dec 5 05:07:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:07:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:07:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
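The paired 'Lock "_check_child_processes" acquired ... waited 0.001s' / '"released" ... held 0.001s' DEBUG lines above are emitted by oslo.concurrency's lock decorator, which times both the wait for and the hold of a named lock. A minimal sketch of the pattern (the decorator and lock name are taken from the log; the class body is assumed):

    # Sketch only: the pattern behind the lockutils DEBUG lines above.
    from oslo_concurrency import lockutils

    class ProcessMonitor(object):
        @lockutils.synchronized('_check_child_processes')
        def _check_child_processes(self):
            # Runs with the named lock held; oslo.concurrency logs how
            # long the caller waited for the lock and how long the
            # decorated call held it (lockutils.py:404/409/423).
            pass
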
Dec 5 05:07:10 localhost podman[312241]: 2025-12-05 10:07:10.220379878 +0000 UTC m=+0.095979398 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Dec 5 05:07:10 localhost podman[312241]: 2025-12-05 10:07:10.264752384 +0000 UTC m=+0.140351904 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 5 05:07:10 localhost systemd[1]: tmp-crun.69wWBi.mount: Deactivated successfully. Dec 5 05:07:10 localhost podman[312240]: 2025-12-05 10:07:10.275963747 +0000 UTC m=+0.153615839 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 5 05:07:10 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
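The systemd "Started /usr/bin/podman healthcheck run <id>" units above are transient services that execute each container's configured check (the 'healthcheck' key in config_data) and produce the health_status/exec_died events that follow. A hedged sketch of driving the same check by hand; the healthy/unhealthy mapping from the exit code is an assumption consistent with the health_status=healthy events above:

    import subprocess

    def run_healthcheck(container_id: str) -> str:
        # Same command the transient systemd units invoke.
        proc = subprocess.run(["podman", "healthcheck", "run", container_id])
        # podman exits 0 when the configured test passes.
        return "healthy" if proc.returncode == 0 else "unhealthy"
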
Dec 5 05:07:10 localhost podman[312240]: 2025-12-05 10:07:10.280748984 +0000 UTC m=+0.158401016 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:07:10 localhost podman[312239]: 2025-12-05 10:07:10.310996649 +0000 UTC m=+0.191390085 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 05:07:10 localhost podman[312239]: 2025-12-05 10:07:10.320959614 +0000 UTC m=+0.201353040 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 
'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:07:10 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 05:07:10 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:07:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:07:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:07:11 localhost systemd[1]: tmp-crun.pjhFNT.mount: Deactivated successfully. Dec 5 05:07:11 localhost podman[312316]: 2025-12-05 10:07:11.646788347 +0000 UTC m=+0.079420111 container kill 3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3aaef4b1-43d4-4852-a7c5-31c64c6f9494, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 5 05:07:11 localhost dnsmasq[312021]: read /var/lib/neutron/dhcp/3aaef4b1-43d4-4852-a7c5-31c64c6f9494/addn_hosts - 0 addresses Dec 5 05:07:11 localhost dnsmasq-dhcp[312021]: read /var/lib/neutron/dhcp/3aaef4b1-43d4-4852-a7c5-31c64c6f9494/host Dec 5 05:07:11 localhost dnsmasq-dhcp[312021]: read /var/lib/neutron/dhcp/3aaef4b1-43d4-4852-a7c5-31c64c6f9494/opts Dec 5 05:07:11 localhost nova_compute[280228]: 2025-12-05 10:07:11.820 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:11 localhost systemd-journald[47252]: Data hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation. Dec 5 05:07:11 localhost systemd-journald[47252]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 5 05:07:11 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 05:07:11 localhost rsyslogd[758]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 05:07:12 localhost ovn_controller[153000]: 2025-12-05T10:07:12Z|00174|binding|INFO|Releasing lport 3a3ad0ab-dd18-4dd4-8e5f-5a280d33c563 from this chassis (sb_readonly=0) Dec 5 05:07:12 localhost ovn_controller[153000]: 2025-12-05T10:07:12Z|00175|binding|INFO|Setting lport 3a3ad0ab-dd18-4dd4-8e5f-5a280d33c563 down in Southbound Dec 5 05:07:12 localhost kernel: device tap3a3ad0ab-dd left promiscuous mode Dec 5 05:07:12 localhost nova_compute[280228]: 2025-12-05 10:07:12.069 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:12 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:12.081 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-3aaef4b1-43d4-4852-a7c5-31c64c6f9494', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3aaef4b1-43d4-4852-a7c5-31c64c6f9494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50771b4c128443a3b2365dd014698bb5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c2b6a10-e208-4277-a3fd-86f25aa6e50f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3a3ad0ab-dd18-4dd4-8e5f-5a280d33c563) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:07:12 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:12.083 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 3a3ad0ab-dd18-4dd4-8e5f-5a280d33c563 in datapath 3aaef4b1-43d4-4852-a7c5-31c64c6f9494 unbound from our chassis#033[00m Dec 5 05:07:12 localhost nova_compute[280228]: 2025-12-05 10:07:12.086 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:12 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:12.088 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3aaef4b1-43d4-4852-a7c5-31c64c6f9494, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:07:12 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:12.089 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[b15a61f0-9d23-4ffb-8e19-6c618b7a8f1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:07:12 localhost nova_compute[280228]: 2025-12-05 10:07:12.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m 
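The "Matched UPDATE: PortBindingUpdatedEvent(...)" entry above is ovsdbapp dispatching a Port_Binding row update to the metadata agent, which then logs the port as unbound from this chassis and tears the namespace down. A sketch of the event-class pattern; everything beyond ovsdbapp's RowEvent base (constructor wiring, handler name) is assumed:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            # Fire on updates to the Port_Binding table only.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked after the match logged above; the handler decides
            # whether the lport was bound to or unbound from this chassis.
            self.agent.handle_port_binding(row, old)
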
Dec 5 05:07:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v200: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:07:13 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:13.376 2 INFO neutron.agent.securitygroups_rpc [None req-93bbc332-f3c5-4877-b9fb-2efd8bb0321e 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']#033[00m Dec 5 05:07:13 localhost nova_compute[280228]: 2025-12-05 10:07:13.861 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:07:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:07:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:07:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:07:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:07:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:07:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:07:15 localhost ovn_controller[153000]: 2025-12-05T10:07:15Z|00176|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:07:15 localhost nova_compute[280228]: 2025-12-05 10:07:15.456 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:15 localhost nova_compute[280228]: 2025-12-05 10:07:15.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:07:15 localhost nova_compute[280228]: 2025-12-05 10:07:15.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:07:15 localhost nova_compute[280228]: 2025-12-05 10:07:15.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:07:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:07:15 localhost nova_compute[280228]: 2025-12-05 10:07:15.703 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:07:15 localhost nova_compute[280228]: 2025-12-05 10:07:15.704 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:07:15 localhost nova_compute[280228]: 2025-12-05 10:07:15.704 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:07:15 localhost nova_compute[280228]: 2025-12-05 10:07:15.704 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:07:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:07:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:07:16 localhost podman[312341]: 2025-12-05 10:07:16.201413178 +0000 UTC m=+0.089731066 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true) Dec 5 05:07:16 localhost podman[312342]: 2025-12-05 10:07:16.263189347 +0000 UTC m=+0.145138360 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', 
'--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:07:16 localhost podman[312342]: 2025-12-05 10:07:16.271980826 +0000 UTC m=+0.153929849 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:07:16 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
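The node_exporter container above is started with an explicit collector allow/deny list (--collector.systemd plus a series of --no-collector.* flags). One way to verify what actually registered is to scrape the exporter and read its self-reported per-collector metric; the URL assumes the 'ports': ['9100:9100'] host mapping shown in config_data:

    import urllib.request

    def enabled_collectors(url="http://localhost:9100/metrics"):
        with urllib.request.urlopen(url, timeout=5) as resp:
            text = resp.read().decode()
        # node_exporter reports one
        # node_scrape_collector_success{collector="..."} sample
        # per registered collector.
        return sorted(
            line.split('collector="')[1].split('"')[0]
            for line in text.splitlines()
            if line.startswith("node_scrape_collector_success")
        )
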
Dec 5 05:07:16 localhost podman[312341]: 2025-12-05 10:07:16.292086241 +0000 UTC m=+0.180404099 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 05:07:16 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:07:16 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:16.738 2 INFO neutron.agent.securitygroups_rpc [None req-3ef0a6b0-ef4d-43c6-a568-29535e1c7f80 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']#033[00m Dec 5 05:07:16 localhost nova_compute[280228]: 2025-12-05 10:07:16.856 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:07:17 localhost nova_compute[280228]: 2025-12-05 10:07:17.510 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, 
"vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:07:17 localhost nova_compute[280228]: 2025-12-05 10:07:17.529 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:07:17 localhost nova_compute[280228]: 2025-12-05 10:07:17.530 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:07:17 localhost nova_compute[280228]: 2025-12-05 10:07:17.531 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:07:17 localhost nova_compute[280228]: 2025-12-05 10:07:17.551 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:07:17 localhost nova_compute[280228]: 2025-12-05 10:07:17.552 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:07:17 localhost nova_compute[280228]: 2025-12-05 10:07:17.552 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:07:17 localhost nova_compute[280228]: 2025-12-05 10:07:17.553 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:07:17 localhost nova_compute[280228]: 2025-12-05 10:07:17.553 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:07:17 localhost ovn_controller[153000]: 2025-12-05T10:07:17Z|00177|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:07:17 localhost nova_compute[280228]: 2025-12-05 10:07:17.959 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command 
mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:07:18 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/640022840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.052 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.118 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.119 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:07:18 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:18.161 2 INFO neutron.agent.securitygroups_rpc [None req-404e8085-b656-45b6-bed4-f502f810c658 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.358 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.359 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11213MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.360 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.360 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.443 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.444 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:07:18 localhost dnsmasq[312021]: exiting on receipt of SIGTERM Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.444 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:07:18 localhost systemd[1]: tmp-crun.R8jFff.mount: Deactivated successfully. Dec 5 05:07:18 localhost podman[312426]: 2025-12-05 10:07:18.454225804 +0000 UTC m=+0.065459564 container kill 3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3aaef4b1-43d4-4852-a7c5-31c64c6f9494, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:07:18 localhost systemd[1]: libpod-3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732.scope: Deactivated successfully. Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.500 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:07:18 localhost podman[312440]: 2025-12-05 10:07:18.533706894 +0000 UTC m=+0.060791519 container died 3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3aaef4b1-43d4-4852-a7c5-31c64c6f9494, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:07:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732-userdata-shm.mount: Deactivated successfully. 
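The resource audit above shells out twice to 'ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf' (via oslo_concurrency.processutils) to size the RBD-backed disk pool. A sketch of the same call and the cluster-level field that feeds the free-disk figure; key names follow the standard ceph df JSON layout:

    import json
    import subprocess

    def ceph_free_bytes(conf="/etc/ceph/ceph.conf", user="openstack"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf]
        )
        stats = json.loads(out)["stats"]
        # e.g. 41 GiB / 42 GiB avail, matching the pgmap lines above.
        return stats["total_avail_bytes"]
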
Dec 5 05:07:18 localhost podman[312440]: 2025-12-05 10:07:18.621987595 +0000 UTC m=+0.149072200 container cleanup 3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3aaef4b1-43d4-4852-a7c5-31c64c6f9494, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:07:18 localhost systemd[1]: libpod-conmon-3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732.scope: Deactivated successfully. Dec 5 05:07:18 localhost podman[312441]: 2025-12-05 10:07:18.666094103 +0000 UTC m=+0.188393323 container remove 3aa1acb1b774902358d5d1840ed903050d90db01caf9fc81db41ba6c1aa0a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3aaef4b1-43d4-4852-a7c5-31c64c6f9494, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 05:07:18 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:18.719 261902 INFO neutron.agent.dhcp.agent [None req-d526fae1-9eca-49a1-86bc-6b523b6be223 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:07:18 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:18.720 261902 INFO neutron.agent.dhcp.agent [None req-d526fae1-9eca-49a1-86bc-6b523b6be223 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.864 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:07:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:07:18 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1000473937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.913 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.920 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.942 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.946 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:07:18 localhost nova_compute[280228]: 2025-12-05 10:07:18.946 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:07:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:07:19 localhost systemd[1]: var-lib-containers-storage-overlay-7f130424f501c3aff676719ce7026616a060d3c78c79d590536ff55434aebc00-merged.mount: Deactivated successfully. Dec 5 05:07:19 localhost systemd[1]: run-netns-qdhcp\x2d3aaef4b1\x2d43d4\x2d4852\x2da7c5\x2d31c64c6f9494.mount: Deactivated successfully. 
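The "Inventory has not changed for provider ..." entry above carries the node's full placement inventory. Placement derives usable capacity per resource class as (total - reserved) * allocation_ratio; a worked check against the logged values:

    # Values copied from the inventory dict logged above.
    inventory = {
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 41,    'reserved': 1,   'allocation_ratio': 1.0},
    }
    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, capacity)  # VCPU 128, MEMORY_MB 15226, DISK_GB 40
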
Dec 5 05:07:19 localhost podman[239519]: time="2025-12-05T10:07:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:07:19 localhost podman[239519]: @ - - [05/Dec/2025:10:07:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157939 "" "Go-http-client/1.1" Dec 5 05:07:19 localhost nova_compute[280228]: 2025-12-05 10:07:19.923 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:07:19 localhost nova_compute[280228]: 2025-12-05 10:07:19.923 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:07:19 localhost podman[239519]: @ - - [05/Dec/2025:10:07:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19723 "" "Go-http-client/1.1" Dec 5 05:07:19 localhost nova_compute[280228]: 2025-12-05 10:07:19.955 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:07:19 localhost nova_compute[280228]: 2025-12-05 10:07:19.956 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:07:19 localhost nova_compute[280228]: 2025-12-05 10:07:19.956 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 5 05:07:20 localhost nova_compute[280228]: 2025-12-05 10:07:20.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:07:20 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:20.521 2 INFO neutron.agent.securitygroups_rpc [None req-3620aa62-41f0-4b22-a239-903391f500d0 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 5 05:07:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:07:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:21 localhost nova_compute[280228]: 2025-12-05 10:07:21.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:07:21 localhost nova_compute[280228]: 2025-12-05 10:07:21.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:07:21 localhost nova_compute[280228]: 2025-12-05 10:07:21.860 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:22 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:22.010 2 INFO neutron.agent.securitygroups_rpc [None req-d7be546d-b50c-461e-88aa-7aeab4bd324a 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 5 05:07:22 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:22.023 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:07:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:22.079 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:07:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:22.080 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 5 05:07:22 localhost nova_compute[280228]: 2025-12-05 10:07:22.081 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:23 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:23.269 2 INFO neutron.agent.securitygroups_rpc [None req-79871f0e-1ec9-49b3-8ee1-b4966cac2e7e 096de93086854e3cbb569b667f7e01d0 0cefdb91314945d68df9707c371a6860 - - default default] Security group member updated ['f0e83bd5-78e5-48e9-8c9f-5f2dbda27ca1']
Dec 5 05:07:23 localhost nova_compute[280228]: 2025-12-05 10:07:23.867 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:07:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:07:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:07:26 localhost podman[312489]: 2025-12-05 10:07:26.195591885 +0000 UTC m=+0.081311848 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 5 05:07:26 localhost podman[312489]: 2025-12-05 10:07:26.209711137 +0000 UTC m=+0.095431140 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 5 05:07:26 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:07:26 localhost systemd[1]: tmp-crun.G7xAHa.mount: Deactivated successfully.
Dec 5 05:07:26 localhost podman[312490]: 2025-12-05 10:07:26.259128229 +0000 UTC m=+0.141149088 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 05:07:26 localhost podman[312490]: 2025-12-05 10:07:26.277660586 +0000 UTC m=+0.159681495 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm)
Dec 5 05:07:26 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:07:26 localhost nova_compute[280228]: 2025-12-05 10:07:26.863 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:27 localhost openstack_network_exporter[241668]: ERROR 10:07:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:07:27 localhost openstack_network_exporter[241668]: ERROR 10:07:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:07:27 localhost openstack_network_exporter[241668]: ERROR 10:07:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:07:27 localhost openstack_network_exporter[241668]: ERROR 10:07:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:07:27 localhost openstack_network_exporter[241668]:
Dec 5 05:07:27 localhost openstack_network_exporter[241668]: ERROR 10:07:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:07:27 localhost openstack_network_exporter[241668]:
Dec 5 05:07:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:28 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:28.155 261902 INFO neutron.agent.linux.ip_lib [None req-bdcbf4be-0f1e-4800-ac71-233042d3bbc6 - - - - - -] Device tapc58185b5-39 cannot be used as it has no MAC address
Dec 5 05:07:28 localhost nova_compute[280228]: 2025-12-05 10:07:28.215 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:28 localhost kernel: device tapc58185b5-39 entered promiscuous mode
Dec 5 05:07:28 localhost NetworkManager[5960]: [1764929248.2239] manager: (tapc58185b5-39): new Generic device (/org/freedesktop/NetworkManager/Devices/33)
Dec 5 05:07:28 localhost nova_compute[280228]: 2025-12-05 10:07:28.225 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:28 localhost ovn_controller[153000]: 2025-12-05T10:07:28Z|00178|binding|INFO|Claiming lport c58185b5-39eb-41df-8d9e-e2c5d89266f1 for this chassis.
Dec 5 05:07:28 localhost ovn_controller[153000]: 2025-12-05T10:07:28Z|00179|binding|INFO|c58185b5-39eb-41df-8d9e-e2c5d89266f1: Claiming unknown
Dec 5 05:07:28 localhost systemd-udevd[312540]: Network interface NamePolicy= disabled on kernel command line.
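The nova_compute "Running periodic task ComputeManager._*" entries above come straight out of oslo_service's periodic-task machinery. A minimal sketch of that pattern follows; the oslo_config/oslo_service APIs are real, while the Manager class and its task body are illustrative stand-ins.

    # Minimal sketch of the oslo_service periodic-task pattern behind the
    # DEBUG lines above; Manager and its task body are illustrative.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_rebooting_instances(self, context):
            # oslo_service logs "Running periodic task ..." at DEBUG
            # (periodic_task.py:210) just before invoking this method.
            pass

    mgr = Manager()
    # In nova-compute this is driven by a looping timer; one manual tick:
    mgr.run_periodic_tasks(context=None)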
Dec 5 05:07:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:28.238 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-94793bd6-8f0c-4da2-a172-a586f9ad1767', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94793bd6-8f0c-4da2-a172-a586f9ad1767', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66d239f897344e32a45021afe789a344', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ed42224-a27e-44b7-8486-af6f2804e895, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c58185b5-39eb-41df-8d9e-e2c5d89266f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:07:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:28.240 158820 INFO neutron.agent.ovn.metadata.agent [-] Port c58185b5-39eb-41df-8d9e-e2c5d89266f1 in datapath 94793bd6-8f0c-4da2-a172-a586f9ad1767 bound to our chassis
Dec 5 05:07:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:28.242 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port 3a9a3049-9fcb-4ccb-ac3f-bf514c4d38fc IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 5 05:07:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:28.243 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 94793bd6-8f0c-4da2-a172-a586f9ad1767, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:07:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:28.244 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fa3692-6b4d-44d0-b67c-e6180feec557]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:07:28 localhost journal[228791]: ethtool ioctl error on tapc58185b5-39: No such device
Dec 5 05:07:28 localhost ovn_controller[153000]: 2025-12-05T10:07:28Z|00180|binding|INFO|Setting lport c58185b5-39eb-41df-8d9e-e2c5d89266f1 ovn-installed in OVS
Dec 5 05:07:28 localhost ovn_controller[153000]: 2025-12-05T10:07:28Z|00181|binding|INFO|Setting lport c58185b5-39eb-41df-8d9e-e2c5d89266f1 up in Southbound
Dec 5 05:07:28 localhost nova_compute[280228]: 2025-12-05 10:07:28.266 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:28 localhost journal[228791]: ethtool ioctl error on tapc58185b5-39: No such device
Dec 5 05:07:28 localhost journal[228791]: ethtool ioctl error on tapc58185b5-39: No such device
Dec 5 05:07:28 localhost journal[228791]: ethtool ioctl error on tapc58185b5-39: No such device
Dec 5 05:07:28 localhost journal[228791]: ethtool ioctl error on tapc58185b5-39: No such device
Dec 5 05:07:28 localhost journal[228791]: ethtool ioctl error on tapc58185b5-39: No such device
Dec 5 05:07:28 localhost journal[228791]: ethtool ioctl error on tapc58185b5-39: No such device
Dec 5 05:07:28 localhost journal[228791]: ethtool ioctl error on tapc58185b5-39: No such device
Dec 5 05:07:28 localhost nova_compute[280228]: 2025-12-05 10:07:28.313 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:28 localhost nova_compute[280228]: 2025-12-05 10:07:28.341 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:28 localhost nova_compute[280228]: 2025-12-05 10:07:28.869 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:29 localhost podman[312612]:
Dec 5 05:07:29 localhost podman[312612]: 2025-12-05 10:07:29.205438627 +0000 UTC m=+0.095092689 container create c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-94793bd6-8f0c-4da2-a172-a586f9ad1767, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:07:29 localhost systemd[1]: Started libpod-conmon-c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416.scope.
Dec 5 05:07:29 localhost podman[312612]: 2025-12-05 10:07:29.162363879 +0000 UTC m=+0.052017971 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:07:29 localhost systemd[1]: Started libcrun container.
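The "Matched UPDATE: PortBindingUpdatedEvent(...)" lines above are emitted by ovsdbapp's row-event matching before the agent-side handler runs. Roughly, the agent side looks like the sketch below; the ovsdbapp base class and its match-then-run flow are real, while the handler body is a simplified stub.

    # Rough shape of an ovsdbapp row event like the one matched above;
    # the run() body is an illustrative stub.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as printed in the "Matched UPDATE" DEBUG line.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked only after the base class logs "Matched UPDATE: ..."
            # (ovsdbapp/backend/ovs_idl/event.py); bound/unbound decisions
            # for the chassis happen here in the real agent.
            self.agent.provision_datapath(row)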
Dec 5 05:07:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd799daccbdaf988fc1e9db0b3a7635e96e866b6396b913909dff17c21cc72a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:07:29 localhost podman[312612]: 2025-12-05 10:07:29.284605949 +0000 UTC m=+0.174259991 container init c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-94793bd6-8f0c-4da2-a172-a586f9ad1767, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 5 05:07:29 localhost podman[312612]: 2025-12-05 10:07:29.294043237 +0000 UTC m=+0.183697279 container start c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-94793bd6-8f0c-4da2-a172-a586f9ad1767, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 5 05:07:29 localhost dnsmasq[312630]: started, version 2.85 cachesize 150
Dec 5 05:07:29 localhost dnsmasq[312630]: DNS service limited to local subnets
Dec 5 05:07:29 localhost dnsmasq[312630]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:07:29 localhost dnsmasq[312630]: warning: no upstream servers configured
Dec 5 05:07:29 localhost dnsmasq-dhcp[312630]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:07:29 localhost dnsmasq[312630]: read /var/lib/neutron/dhcp/94793bd6-8f0c-4da2-a172-a586f9ad1767/addn_hosts - 0 addresses
Dec 5 05:07:29 localhost dnsmasq-dhcp[312630]: read /var/lib/neutron/dhcp/94793bd6-8f0c-4da2-a172-a586f9ad1767/host
Dec 5 05:07:29 localhost dnsmasq-dhcp[312630]: read /var/lib/neutron/dhcp/94793bd6-8f0c-4da2-a172-a586f9ad1767/opts
Dec 5 05:07:29 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:29.438 261902 INFO neutron.agent.dhcp.agent [None req-c5b58e8f-4cd8-4e24-a0e8-c5ef5edb88a5 - - - - - -] DHCP configuration for ports {'d1926565-c109-4302-ac40-ad49a05d3243'} is completed
Dec 5 05:07:30 localhost nova_compute[280228]: 2025-12-05 10:07:30.271 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:30 localhost nova_compute[280228]: 2025-12-05 10:07:30.421 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:07:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:31.083 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 5 05:07:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:31 localhost nova_compute[280228]: 2025-12-05 10:07:31.925 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:32 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:32.283 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:07:31Z, description=, device_id=93a35a33-008b-4275-9c46-78d3ee00ee36, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9d0038a9-03d1-4e5d-ba3f-e616d9255bf6, ip_allocation=immediate, mac_address=fa:16:3e:e1:ab:a8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:07:25Z, description=, dns_domain=, id=94793bd6-8f0c-4da2-a172-a586f9ad1767, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroupTestJSON-1843224683-network, port_security_enabled=True, project_id=66d239f897344e32a45021afe789a344, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48870, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1170, status=ACTIVE, subnets=['390fab66-e1c6-4ac7-9d5d-22a2c087bfaa'], tags=[], tenant_id=66d239f897344e32a45021afe789a344, updated_at=2025-12-05T10:07:26Z, vlan_transparent=None, network_id=94793bd6-8f0c-4da2-a172-a586f9ad1767, port_security_enabled=False, project_id=66d239f897344e32a45021afe789a344, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1201, status=DOWN, tags=[], tenant_id=66d239f897344e32a45021afe789a344, updated_at=2025-12-05T10:07:31Z on network 94793bd6-8f0c-4da2-a172-a586f9ad1767
Dec 5 05:07:32 localhost dnsmasq[312630]: read /var/lib/neutron/dhcp/94793bd6-8f0c-4da2-a172-a586f9ad1767/addn_hosts - 1 addresses
Dec 5 05:07:32 localhost dnsmasq-dhcp[312630]: read /var/lib/neutron/dhcp/94793bd6-8f0c-4da2-a172-a586f9ad1767/host
Dec 5 05:07:32 localhost dnsmasq-dhcp[312630]: read /var/lib/neutron/dhcp/94793bd6-8f0c-4da2-a172-a586f9ad1767/opts
Dec 5 05:07:32 localhost podman[312647]: 2025-12-05 10:07:32.633432969 +0000 UTC m=+0.077006876 container kill c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-94793bd6-8f0c-4da2-a172-a586f9ad1767, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 5 05:07:32 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:32.912 261902 INFO neutron.agent.dhcp.agent [None req-e33267de-a9ff-45bb-bd7b-4a25fb30bc23 - - - - - -] DHCP configuration for ports {'9d0038a9-03d1-4e5d-ba3f-e616d9255bf6'} is completed
Dec 5 05:07:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:33 localhost nova_compute[280228]: 2025-12-05 10:07:33.983 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:07:35 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:35.823 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:07:31Z, description=, device_id=93a35a33-008b-4275-9c46-78d3ee00ee36, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9d0038a9-03d1-4e5d-ba3f-e616d9255bf6, ip_allocation=immediate, mac_address=fa:16:3e:e1:ab:a8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:07:25Z, description=, dns_domain=, id=94793bd6-8f0c-4da2-a172-a586f9ad1767, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroupTestJSON-1843224683-network, port_security_enabled=True, project_id=66d239f897344e32a45021afe789a344, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48870, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1170, status=ACTIVE, subnets=['390fab66-e1c6-4ac7-9d5d-22a2c087bfaa'], tags=[], tenant_id=66d239f897344e32a45021afe789a344, updated_at=2025-12-05T10:07:26Z, vlan_transparent=None, network_id=94793bd6-8f0c-4da2-a172-a586f9ad1767, port_security_enabled=False, project_id=66d239f897344e32a45021afe789a344, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1201, status=DOWN, tags=[], tenant_id=66d239f897344e32a45021afe789a344, updated_at=2025-12-05T10:07:31Z on network 94793bd6-8f0c-4da2-a172-a586f9ad1767
Dec 5 05:07:36 localhost dnsmasq[312630]: read /var/lib/neutron/dhcp/94793bd6-8f0c-4da2-a172-a586f9ad1767/addn_hosts - 1 addresses
Dec 5 05:07:36 localhost dnsmasq-dhcp[312630]: read /var/lib/neutron/dhcp/94793bd6-8f0c-4da2-a172-a586f9ad1767/host
Dec 5 05:07:36 localhost dnsmasq-dhcp[312630]: read /var/lib/neutron/dhcp/94793bd6-8f0c-4da2-a172-a586f9ad1767/opts
Dec 5 05:07:36 localhost podman[312684]: 2025-12-05 10:07:36.084033592 +0000 UTC m=+0.060506272 container kill c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-94793bd6-8f0c-4da2-a172-a586f9ad1767, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 5 05:07:36 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:36.530 261902 INFO neutron.agent.dhcp.agent [None req-f23e379f-1665-4730-9eca-36a6be90e4ad - - - - - -] DHCP configuration for ports {'9d0038a9-03d1-4e5d-ba3f-e616d9255bf6'} is completed
Dec 5 05:07:36 localhost nova_compute[280228]: 2025-12-05 10:07:36.928 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:37 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:37.180 2 INFO neutron.agent.securitygroups_rpc [None req-b11c3a7e-3730-4dce-9b71-9d82cc4e3184 2f90c5186cc14a0a8a8f7faf3454b78f 0b296e0ab4b6447982bcfc680b8ba396 - - default default] Security group member updated ['13eefb8b-3a4b-4bb6-80e0-07e6a1e0bd51']
Dec 5 05:07:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:38 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:38.696 2 INFO neutron.agent.securitygroups_rpc [None req-5d7e3fb7-310f-4118-b811-99351dc54ebc 2f90c5186cc14a0a8a8f7faf3454b78f 0b296e0ab4b6447982bcfc680b8ba396 - - default default] Security group member updated ['13eefb8b-3a4b-4bb6-80e0-07e6a1e0bd51']
Dec 5 05:07:38 localhost nova_compute[280228]: 2025-12-05 10:07:38.996 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:39 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:39.443 261902 INFO neutron.agent.linux.ip_lib [None req-6ca1734f-62b5-4761-8f39-b87603093342 - - - - - -] Device tap4a2160f7-00 cannot be used as it has no MAC address
Dec 5 05:07:39 localhost nova_compute[280228]: 2025-12-05 10:07:39.469 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:39 localhost kernel: device tap4a2160f7-00 entered promiscuous mode
Dec 5 05:07:39 localhost NetworkManager[5960]: [1764929259.4763] manager: (tap4a2160f7-00): new Generic device (/org/freedesktop/NetworkManager/Devices/34)
Dec 5 05:07:39 localhost ovn_controller[153000]: 2025-12-05T10:07:39Z|00182|binding|INFO|Claiming lport 4a2160f7-00ae-4965-aecc-811c3bbe7afa for this chassis.
Dec 5 05:07:39 localhost ovn_controller[153000]: 2025-12-05T10:07:39Z|00183|binding|INFO|4a2160f7-00ae-4965-aecc-811c3bbe7afa: Claiming unknown
Dec 5 05:07:39 localhost nova_compute[280228]: 2025-12-05 10:07:39.476 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:39 localhost systemd-udevd[312715]: Network interface NamePolicy= disabled on kernel command line.
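The DbSetCommand in the transaction entry above updates Chassis_Private.external_ids through ovsdbapp. A sketch of that call pattern follows; the connected southbound API object ('sb_idl') is assumed rather than constructed, and the wrapper function name is illustrative.

    # Sketch of the ovsdbapp transaction behind the DbSetCommand DEBUG entry;
    # 'sb_idl' stands for an already-connected ovsdbapp backend object.
    def bump_metadata_sb_cfg(sb_idl, chassis_uuid, nb_cfg):
        with sb_idl.transaction(check_error=True) as txn:
            txn.add(sb_idl.db_set(
                'Chassis_Private', chassis_uuid,
                ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)}),
                if_exists=True))

    # e.g. bump_metadata_sb_cfg(sb_idl,
    #          '22ecc443-b9ab-4c88-a730-5598bd07d403', 10)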
Dec 5 05:07:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:39.494 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-bf555836-a714-405e-80f4-3363738d336f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf555836-a714-405e-80f4-3363738d336f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cef89a8f-3324-46d8-92a9-32bce5c0d06b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4a2160f7-00ae-4965-aecc-811c3bbe7afa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:07:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:39.495 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 4a2160f7-00ae-4965-aecc-811c3bbe7afa in datapath bf555836-a714-405e-80f4-3363738d336f bound to our chassis
Dec 5 05:07:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:39.497 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bf555836-a714-405e-80f4-3363738d336f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:07:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:39.497 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[68b2e5db-e6d1-475b-b1dd-c69ba86d2acf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:07:39 localhost journal[228791]: ethtool ioctl error on tap4a2160f7-00: No such device
Dec 5 05:07:39 localhost journal[228791]: ethtool ioctl error on tap4a2160f7-00: No such device
Dec 5 05:07:39 localhost journal[228791]: ethtool ioctl error on tap4a2160f7-00: No such device
Dec 5 05:07:39 localhost ovn_controller[153000]: 2025-12-05T10:07:39Z|00184|binding|INFO|Setting lport 4a2160f7-00ae-4965-aecc-811c3bbe7afa ovn-installed in OVS
Dec 5 05:07:39 localhost ovn_controller[153000]: 2025-12-05T10:07:39Z|00185|binding|INFO|Setting lport 4a2160f7-00ae-4965-aecc-811c3bbe7afa up in Southbound
Dec 5 05:07:39 localhost nova_compute[280228]: 2025-12-05 10:07:39.530 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:39 localhost journal[228791]: ethtool ioctl error on tap4a2160f7-00: No such device
Dec 5 05:07:39 localhost journal[228791]: ethtool ioctl error on tap4a2160f7-00: No such device
Dec 5 05:07:39 localhost journal[228791]: ethtool ioctl error on tap4a2160f7-00: No such device
Dec 5 05:07:39 localhost journal[228791]: ethtool ioctl error on tap4a2160f7-00: No such device
Dec 5 05:07:39 localhost journal[228791]: ethtool ioctl error on tap4a2160f7-00: No such device
Dec 5 05:07:39 localhost nova_compute[280228]: 2025-12-05 10:07:39.573 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:39 localhost nova_compute[280228]: 2025-12-05 10:07:39.605 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:07:40 localhost podman[312784]:
Dec 5 05:07:40 localhost podman[312784]: 2025-12-05 10:07:40.938750615 +0000 UTC m=+0.092121079 container create 1a95dd0f60b0944408a425779bd910002a7a0f18a652f85d797a16d7c6409c0c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf555836-a714-405e-80f4-3363738d336f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 5 05:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:07:40 localhost systemd[1]: Started libpod-conmon-1a95dd0f60b0944408a425779bd910002a7a0f18a652f85d797a16d7c6409c0c.scope.
Dec 5 05:07:40 localhost podman[312784]: 2025-12-05 10:07:40.893334286 +0000 UTC m=+0.046704800 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:07:41 localhost systemd[1]: Started libcrun container.
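As with tapc58185b5-39 earlier, the device name tap4a2160f7-00 is derived from the Neutron port UUID: "tap" plus the UUID, truncated to fit the kernel's 15-byte interface-name limit. A one-liner mirroring that convention (the 14-character length matches Neutron's default device-name length; treat it as an illustration, not Neutron's actual helper):

    # Mirrors Neutron's tap-naming convention: "tap" + port UUID,
    # truncated to 14 characters (safe under Linux's IFNAMSIZ limit).
    def tap_name(port_id: str, prefix: str = "tap") -> str:
        return (prefix + port_id)[:14]

    assert tap_name("4a2160f7-00ae-4965-aecc-811c3bbe7afa") == "tap4a2160f7-00"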
Dec 5 05:07:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a445e260d9b3f1fef4c1879ee666dbae2a1be617d17d31307d78175fb320a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:07:41 localhost podman[312784]: 2025-12-05 10:07:41.081510672 +0000 UTC m=+0.234881146 container init 1a95dd0f60b0944408a425779bd910002a7a0f18a652f85d797a16d7c6409c0c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf555836-a714-405e-80f4-3363738d336f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 5 05:07:41 localhost podman[312784]: 2025-12-05 10:07:41.091603761 +0000 UTC m=+0.244974205 container start 1a95dd0f60b0944408a425779bd910002a7a0f18a652f85d797a16d7c6409c0c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf555836-a714-405e-80f4-3363738d336f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 5 05:07:41 localhost dnsmasq[312863]: started, version 2.85 cachesize 150
Dec 5 05:07:41 localhost dnsmasq[312863]: DNS service limited to local subnets
Dec 5 05:07:41 localhost dnsmasq[312863]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:07:41 localhost dnsmasq[312863]: warning: no upstream servers configured
Dec 5 05:07:41 localhost dnsmasq-dhcp[312863]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:07:41 localhost dnsmasq[312863]: read /var/lib/neutron/dhcp/bf555836-a714-405e-80f4-3363738d336f/addn_hosts - 0 addresses
Dec 5 05:07:41 localhost dnsmasq-dhcp[312863]: read /var/lib/neutron/dhcp/bf555836-a714-405e-80f4-3363738d336f/host
Dec 5 05:07:41 localhost dnsmasq-dhcp[312863]: read /var/lib/neutron/dhcp/bf555836-a714-405e-80f4-3363738d336f/opts
Dec 5 05:07:41 localhost podman[312799]: 2025-12-05 10:07:41.073353892 +0000 UTC m=+0.094156621 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 5 05:07:41 localhost podman[312801]: 2025-12-05 10:07:41.097446299 +0000 UTC m=+0.108865261 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:07:41 localhost podman[312799]: 2025-12-05 10:07:41.156720242 +0000 UTC m=+0.177522941 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 5 05:07:41 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:07:41 localhost podman[312801]: 2025-12-05 10:07:41.179653253 +0000 UTC m=+0.191072235 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 5 05:07:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:41 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
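Each health_status/exec_died pair above corresponds to one transient systemd unit invoking podman's healthcheck for a container. The same check can be run by hand; a sketch, with the container ID copied from the entries above:

    # Run one container healthcheck manually, as the transient
    # "/usr/bin/podman healthcheck run <id>" units above do.
    import subprocess

    CID = "6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a"
    res = subprocess.run(["podman", "healthcheck", "run", CID],
                         capture_output=True, text=True)
    # Exit status 0 means healthy, matching health_status=healthy above.
    print("healthy" if res.returncode == 0 else res.stderr.strip())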
Dec 5 05:07:41 localhost dnsmasq[312630]: read /var/lib/neutron/dhcp/94793bd6-8f0c-4da2-a172-a586f9ad1767/addn_hosts - 0 addresses
Dec 5 05:07:41 localhost dnsmasq-dhcp[312630]: read /var/lib/neutron/dhcp/94793bd6-8f0c-4da2-a172-a586f9ad1767/host
Dec 5 05:07:41 localhost podman[312868]: 2025-12-05 10:07:41.223742212 +0000 UTC m=+0.103343932 container kill c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-94793bd6-8f0c-4da2-a172-a586f9ad1767, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 5 05:07:41 localhost dnsmasq-dhcp[312630]: read /var/lib/neutron/dhcp/94793bd6-8f0c-4da2-a172-a586f9ad1767/opts
Dec 5 05:07:41 localhost podman[312800]: 2025-12-05 10:07:41.240468664 +0000 UTC m=+0.258827048 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 5 05:07:41 localhost podman[312800]: 2025-12-05 10:07:41.273765752 +0000 UTC m=+0.292124096 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 5 05:07:41 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:07:41 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:41.297 261902 INFO neutron.agent.dhcp.agent [None req-2f30d63c-70d3-4303-9630-42a433c76873 - - - - - -] DHCP configuration for ports {'d7b39b5b-8537-4369-a8d7-f4cf010ecf15'} is completed
Dec 5 05:07:41 localhost ovn_controller[153000]: 2025-12-05T10:07:41Z|00186|binding|INFO|Releasing lport c58185b5-39eb-41df-8d9e-e2c5d89266f1 from this chassis (sb_readonly=0)
Dec 5 05:07:41 localhost ovn_controller[153000]: 2025-12-05T10:07:41Z|00187|binding|INFO|Setting lport c58185b5-39eb-41df-8d9e-e2c5d89266f1 down in Southbound
Dec 5 05:07:41 localhost kernel: device tapc58185b5-39 left promiscuous mode
Dec 5 05:07:41 localhost nova_compute[280228]: 2025-12-05 10:07:41.456 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:41 localhost nova_compute[280228]: 2025-12-05 10:07:41.476 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:41 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:41.554 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-94793bd6-8f0c-4da2-a172-a586f9ad1767', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94793bd6-8f0c-4da2-a172-a586f9ad1767', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '66d239f897344e32a45021afe789a344', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ed42224-a27e-44b7-8486-af6f2804e895, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c58185b5-39eb-41df-8d9e-e2c5d89266f1) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:07:41 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:41.557 158820 INFO neutron.agent.ovn.metadata.agent [-] Port c58185b5-39eb-41df-8d9e-e2c5d89266f1 in datapath 94793bd6-8f0c-4da2-a172-a586f9ad1767 unbound from our chassis
Dec 5 05:07:41 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:41.561 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 94793bd6-8f0c-4da2-a172-a586f9ad1767, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:07:41 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:41.562 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[b7da2ebc-a810-470e-8740-0cffbde48f0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:07:41 localhost nova_compute[280228]: 2025-12-05 10:07:41.931 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:42 localhost ovn_controller[153000]: 2025-12-05T10:07:42Z|00188|binding|INFO|Removing iface tap4a2160f7-00 ovn-installed in OVS
Dec 5 05:07:42 localhost ovn_controller[153000]: 2025-12-05T10:07:42Z|00189|binding|INFO|Removing lport 4a2160f7-00ae-4965-aecc-811c3bbe7afa ovn-installed in OVS
Dec 5 05:07:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:42.933 158820 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 151e3496-002c-4d64-888e-5e0dc3daad3a with type ""
Dec 5 05:07:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:42.935 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-bf555836-a714-405e-80f4-3363738d336f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf555836-a714-405e-80f4-3363738d336f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cef89a8f-3324-46d8-92a9-32bce5c0d06b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4a2160f7-00ae-4965-aecc-811c3bbe7afa) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:07:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:42.937 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 4a2160f7-00ae-4965-aecc-811c3bbe7afa in datapath bf555836-a714-405e-80f4-3363738d336f unbound from our chassis
Dec 5 05:07:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:42.940 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf555836-a714-405e-80f4-3363738d336f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:07:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:07:42.941 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[f70c809b-2ca0-4584-8eef-09ecfed3ae3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:07:42 localhost nova_compute[280228]: 2025-12-05 10:07:42.972 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:42 localhost nova_compute[280228]: 2025-12-05 10:07:42.977 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:42 localhost kernel: device tap4a2160f7-00 left promiscuous mode
Dec 5 05:07:42 localhost nova_compute[280228]: 2025-12-05 10:07:42.997 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:43 localhost dnsmasq[312863]: read /var/lib/neutron/dhcp/bf555836-a714-405e-80f4-3363738d336f/addn_hosts - 0 addresses
Dec 5 05:07:43 localhost dnsmasq-dhcp[312863]: read /var/lib/neutron/dhcp/bf555836-a714-405e-80f4-3363738d336f/host
Dec 5 05:07:43 localhost dnsmasq-dhcp[312863]: read /var/lib/neutron/dhcp/bf555836-a714-405e-80f4-3363738d336f/opts
Dec 5 05:07:43 localhost podman[312917]: 2025-12-05 10:07:43.475724203 +0000 UTC m=+0.056098457 container kill 1a95dd0f60b0944408a425779bd910002a7a0f18a652f85d797a16d7c6409c0c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf555836-a714-405e-80f4-3363738d336f, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent [None req-a5e6b37c-9f64-4aab-b995-ef8ab218a39f - - - - - -] Unable to reload_allocations dhcp for bf555836-a714-405e-80f4-3363738d336f.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap4a2160f7-00 not found in namespace qdhcp-bf555836-a714-405e-80f4-3363738d336f.
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs)
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name)
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name)
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name,
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version)
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table)
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw)
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state)
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent return fut.result()
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent return self.__get_result()
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent raise self._exception
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs)
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs,
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent raise exc_type(*result[2])
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap4a2160f7-00 not found in namespace qdhcp-bf555836-a714-405e-80f4-3363738d336f.
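The tenacity frames in the traceback show that the privileged list_ip_routes call is wrapped in a retry decorator; a non-retriable exception propagates straight through and is re-raised on the agent side of the privsep channel, which is how NetworkInterfaceNotFound surfaces here. A minimal sketch of that wrapping pattern, with illustrative retry parameters (not Neutron's exact settings):

    import tenacity

    class NetworkInterfaceNotFound(RuntimeError):
        pass

    @tenacity.retry(
        retry=tenacity.retry_if_exception_type(OSError),  # retry transient errors only
        wait=tenacity.wait_exponential(multiplier=0.02),
        stop=tenacity.stop_after_delay(8),
        reraise=True)                                     # surface the last exception
    def list_ip_routes(namespace, ip_version):
        # A missing device is not an OSError here, so it is not retried and
        # propagates immediately, as in the traceback above.
        raise NetworkInterfaceNotFound(
            'Network interface not found in namespace %s' % namespace)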
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.502 261902 ERROR neutron.agent.dhcp.agent
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.507 261902 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.799 261902 INFO neutron.agent.dhcp.agent [None req-943f2ce9-b4f5-4c36-a3fe-95d8833eb903 - - - - - -] All active networks have been fetched through RPC.
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.801 261902 INFO neutron.agent.dhcp.agent [-] Starting network bf555836-a714-405e-80f4-3363738d336f dhcp configuration
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.801 261902 INFO neutron.agent.dhcp.agent [-] Finished network bf555836-a714-405e-80f4-3363738d336f dhcp configuration
Dec 5 05:07:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:43.802 261902 INFO neutron.agent.dhcp.agent [None req-943f2ce9-b4f5-4c36-a3fe-95d8833eb903 - - - - - -] Synchronizing state complete
Dec 5 05:07:43 localhost ovn_controller[153000]: 2025-12-05T10:07:43Z|00190|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:07:43 localhost nova_compute[280228]: 2025-12-05 10:07:43.964 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:44 localhost nova_compute[280228]: 2025-12-05 10:07:44.043 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:44 localhost dnsmasq[312863]: exiting on receipt of SIGTERM
Dec 5 05:07:44 localhost podman[312948]: 2025-12-05 10:07:44.089127814 +0000 UTC m=+0.071530099 container kill 1a95dd0f60b0944408a425779bd910002a7a0f18a652f85d797a16d7c6409c0c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf555836-a714-405e-80f4-3363738d336f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 5 05:07:44 localhost systemd[1]: tmp-crun.aukEkj.mount: Deactivated successfully.
Dec 5 05:07:44 localhost systemd[1]: libpod-1a95dd0f60b0944408a425779bd910002a7a0f18a652f85d797a16d7c6409c0c.scope: Deactivated successfully.
Dec 5 05:07:44 localhost podman[312963]: 2025-12-05 10:07:44.165593892 +0000 UTC m=+0.056222340 container died 1a95dd0f60b0944408a425779bd910002a7a0f18a652f85d797a16d7c6409c0c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf555836-a714-405e-80f4-3363738d336f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 5 05:07:44 localhost podman[312963]: 2025-12-05 10:07:44.262815436 +0000 UTC m=+0.153443834 container remove 1a95dd0f60b0944408a425779bd910002a7a0f18a652f85d797a16d7c6409c0c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf555836-a714-405e-80f4-3363738d336f, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:07:44 localhost systemd[1]: libpod-conmon-1a95dd0f60b0944408a425779bd910002a7a0f18a652f85d797a16d7c6409c0c.scope: Deactivated successfully.
Dec 5 05:07:44 localhost systemd[1]: var-lib-containers-storage-overlay-46a445e260d9b3f1fef4c1879ee666dbae2a1be617d17d31307d78175fb320a7-merged.mount: Deactivated successfully.
Dec 5 05:07:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a95dd0f60b0944408a425779bd910002a7a0f18a652f85d797a16d7c6409c0c-userdata-shm.mount: Deactivated successfully.
Dec 5 05:07:44 localhost systemd[1]: run-netns-qdhcp\x2dbf555836\x2da714\x2d405e\x2d80f4\x2d3363738d336f.mount: Deactivated successfully.
Dec 5 05:07:44 localhost ovn_controller[153000]: 2025-12-05T10:07:44Z|00191|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:07:44 localhost nova_compute[280228]: 2025-12-05 10:07:44.876 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:07:45
Dec 5 05:07:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 5 05:07:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap
Dec 5 05:07:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['vms', 'images', 'backups', 'volumes', 'manila_metadata', '.mgr', 'manila_data']
Dec 5 05:07:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes
Dec 5 05:07:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:07:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:07:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:07:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:07:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
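The balancer lines above ("Mode upmap, max misplaced 0.050000 ... prepared 0/10 changes") reflect a guard: new upmap optimizations are only prepared while the fraction of misplaced PGs stays under the configured ceiling. A sketch of that check under stated assumptions (names are illustrative; this is not ceph's actual code):

    MAX_MISPLACED = 0.05  # matches "max misplaced 0.050000" in the log

    def may_optimize(misplaced_pgs: int, total_pgs: int) -> bool:
        # Defer new upmap changes while too much data is already moving.
        return (misplaced_pgs / total_pgs) <= MAX_MISPLACED

    print(may_optimize(0, 177))   # True: 177 pgs active+clean, plan may proceed
    print(may_optimize(12, 177))  # False: ~6.8% misplaced exceeds the ceiling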
Dec 5 05:07:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust
Dec 5 05:07:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 5 05:07:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:07:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Dec 5 05:07:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:07:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:07:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:07:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 5 05:07:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:07:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:07:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:07:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:07:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:07:46 localhost nova_compute[280228]: 2025-12-05 10:07:46.933 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:07:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 05:07:47 localhost systemd[1]: tmp-crun.810iXh.mount: Deactivated successfully.
Dec 5 05:07:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:47 localhost podman[312991]: 2025-12-05 10:07:47.194769634 +0000 UTC m=+0.082339359 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Dec 5 05:07:47 localhost podman[312991]: 2025-12-05 10:07:47.235207512 +0000 UTC m=+0.122777217 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 5 05:07:47 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:07:47 localhost podman[312992]: 2025-12-05 10:07:47.279446384 +0000 UTC m=+0.160701766 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 05:07:47 localhost podman[312992]: 2025-12-05 10:07:47.291470912 +0000 UTC m=+0.172726324 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 05:07:47 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
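The node_exporter records above carry the collector flags in config_data, including --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service, which limits which systemd units are scraped. node_exporter anchors the pattern against the whole unit name; a quick check of the same regex, assuming Python's fullmatch approximates the anchored Go regexp (the unit names tried here are illustrative):

    import re

    # The unit-include pattern exactly as passed to node_exporter above.
    unit_include = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')

    for unit in ('openvswitch.service', 'virtqemud.service',
                 'edpm_nova_compute.service', 'sshd.service'):
        print(unit, bool(unit_include.fullmatch(unit)))
    # openvswitch.service True, virtqemud.service True,
    # edpm_nova_compute.service True, sshd.service False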
Dec 5 05:07:47 localhost sshd[313036]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 05:07:47 localhost dnsmasq[312630]: exiting on receipt of SIGTERM
Dec 5 05:07:47 localhost podman[313052]: 2025-12-05 10:07:47.844539499 +0000 UTC m=+0.064825054 container kill c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-94793bd6-8f0c-4da2-a172-a586f9ad1767, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:07:47 localhost systemd[1]: libpod-c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416.scope: Deactivated successfully.
Dec 5 05:07:47 localhost podman[313066]: 2025-12-05 10:07:47.927660172 +0000 UTC m=+0.059246283 container died c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-94793bd6-8f0c-4da2-a172-a586f9ad1767, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:07:47 localhost podman[313066]: 2025-12-05 10:07:47.968634044 +0000 UTC m=+0.100220095 container remove c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-94793bd6-8f0c-4da2-a172-a586f9ad1767, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 5 05:07:47 localhost systemd[1]: libpod-conmon-c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416.scope: Deactivated successfully.
Dec 5 05:07:48 localhost systemd[1]: var-lib-containers-storage-overlay-ffd799daccbdaf988fc1e9db0b3a7635e96e866b6396b913909dff17c21cc72a-merged.mount: Deactivated successfully.
Dec 5 05:07:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c297817e0b0ffeaeab1c0b4f2f3a2255a385bf7087f48bc6266951d20eb96416-userdata-shm.mount: Deactivated successfully.
Dec 5 05:07:48 localhost systemd[1]: run-netns-qdhcp\x2d94793bd6\x2d8f0c\x2d4da2\x2da172\x2da586f9ad1767.mount: Deactivated successfully.
Dec 5 05:07:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:48.312 261902 INFO neutron.agent.dhcp.agent [None req-9155e86a-c31e-4e59-8df5-ffd9532280b2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:07:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:48.314 261902 INFO neutron.agent.dhcp.agent [None req-9155e86a-c31e-4e59-8df5-ffd9532280b2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:07:48 localhost systemd[1]: virtsecretd.service: Deactivated successfully.
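The run-netns-qdhcp\x2d... mount units above show systemd's unit-name escaping: '/' in a path becomes '-', and bytes outside [a-zA-Z0-9:_.] (including the literal '-') become \xXX. A simplified sketch of the per-component escaping (not a full systemd-escape reimplementation; leading-dot and multi-byte handling are omitted):

    def systemd_escape_component(s: str) -> str:
        # Keep [a-zA-Z0-9:_.], hex-escape everything else (e.g. '-' -> \x2d),
        # as seen in the mount unit names logged above.
        out = []
        for ch in s:
            if ch.isascii() and (ch.isalnum() or ch in ':_.'):
                out.append(ch)
            else:
                out.append('\\x%02x' % ord(ch))
        return ''.join(out)

    print(systemd_escape_component('qdhcp-94793bd6-8f0c-4da2-a172-a586f9ad1767'))
    # -> qdhcp\x2d94793bd6\x2d8f0c\x2d4da2\x2da172\x2da586f9ad1767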
Dec 5 05:07:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:07:48.989 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:07:49 localhost nova_compute[280228]: 2025-12-05 10:07:49.090 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:49 localhost podman[239519]: time="2025-12-05T10:07:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:07:49 localhost podman[239519]: @ - - [05/Dec/2025:10:07:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157939 "" "Go-http-client/1.1"
Dec 5 05:07:49 localhost podman[239519]: @ - - [05/Dec/2025:10:07:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19720 "" "Go-http-client/1.1"
Dec 5 05:07:50 localhost sshd[313092]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 05:07:50 localhost ovn_controller[153000]: 2025-12-05T10:07:50Z|00192|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:07:50 localhost nova_compute[280228]: 2025-12-05 10:07:50.329 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:07:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:51 localhost nova_compute[280228]: 2025-12-05 10:07:51.966 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:52 localhost sshd[313094]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 05:07:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e124 do_prune osdmap full prune enabled
Dec 5 05:07:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e125 e125: 6 total, 6 up, 6 in
Dec 5 05:07:53 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e125: 6 total, 6 up, 6 in
Dec 5 05:07:53 localhost nova_compute[280228]: 2025-12-05 10:07:53.646 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:54 localhost nova_compute[280228]: 2025-12-05 10:07:54.129 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e125 do_prune osdmap full prune enabled
Dec 5 05:07:54 localhost sshd[313096]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 05:07:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e126 e126: 6 total, 6 up, 6 in
Dec 5 05:07:54 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e126: 6 total, 6 up, 6 in
Dec 5 05:07:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:07:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:07:56 localhost nova_compute[280228]: 2025-12-05 10:07:56.234 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:07:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:07:56 localhost podman[313099]: 2025-12-05 10:07:56.900748089 +0000 UTC m=+0.080785642 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, architecture=x86_64)
Dec 5 05:07:56 localhost podman[313099]: 2025-12-05 10:07:56.915628704 +0000 UTC m=+0.095666267 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, version=9.6, io.buildah.version=1.33.7, release=1755695350, maintainer=Red Hat, Inc.)
Dec 5 05:07:56 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
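The podman[239519] access-log entries earlier in this stretch (GET /v4.9.3/libpod/containers/json?all=true...) come from a client polling podman's REST API over its unix socket. A minimal stdlib sketch of such a query, assuming the default root socket path /run/podman/podman.sock:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a unix-domain socket."""

        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true&external=false')
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))  # e.g. "200 157939" as in the access log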
Dec 5 05:07:56 localhost nova_compute[280228]: 2025-12-05 10:07:56.972 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:57 localhost podman[313098]: 2025-12-05 10:07:57.006847795 +0000 UTC m=+0.187767685 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 5 05:07:57 localhost podman[313098]: 2025-12-05 10:07:57.02274881 +0000 UTC m=+0.203668700 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:07:57 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:07:57 localhost openstack_network_exporter[241668]: ERROR 10:07:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:07:57 localhost openstack_network_exporter[241668]: ERROR 10:07:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:07:57 localhost openstack_network_exporter[241668]: ERROR 10:07:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:07:57 localhost openstack_network_exporter[241668]: ERROR 10:07:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:07:57 localhost openstack_network_exporter[241668]:
Dec 5 05:07:57 localhost openstack_network_exporter[241668]: ERROR 10:07:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:07:57 localhost openstack_network_exporter[241668]:
Dec 5 05:07:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v224: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Dec 5 05:07:57 localhost sshd[313137]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 05:07:59 localhost neutron_sriov_agent[254996]: 2025-12-05 10:07:59.144 2 INFO neutron.agent.securitygroups_rpc [None req-14d2ee81-d08b-413d-82bf-6598bbd8e1f9 aa5c05ef703b4c5c829b56913fd95190 4b28cfa3b851441a981f2fa213cf5388 - - default default] Security group member updated ['b0dca337-aa85-43fc-b2b1-0bd096c3b725']
Dec 5 05:07:59 localhost nova_compute[280228]: 2025-12-05 10:07:59.168 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:07:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Dec 5 05:07:59 localhost nova_compute[280228]: 2025-12-05 10:07:59.489 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:08:00 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:08:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:08:00 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:08:00 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:08:00 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:08:00 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:00.547 261902 INFO neutron.agent.linux.ip_lib [None req-401602aa-da72-4dd9-bd79-670fb9f26d3f - - - - - -] Device tape73f56f1-81 cannot be used as it has no MAC address
Dec 5 05:08:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:08:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e126 do_prune osdmap full prune enabled
Dec 5 05:08:00 localhost nova_compute[280228]: 2025-12-05 10:08:00.573 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:00 localhost kernel: device tape73f56f1-81 entered promiscuous mode
Dec 5 05:08:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e127 e127: 6 total, 6 up, 6 in
Dec 5 05:08:00 localhost nova_compute[280228]: 2025-12-05 10:08:00.590 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:00 localhost NetworkManager[5960]: [1764929280.5934] manager: (tape73f56f1-81): new Generic device (/org/freedesktop/NetworkManager/Devices/35)
Dec 5 05:08:00 localhost ovn_controller[153000]: 2025-12-05T10:08:00Z|00193|binding|INFO|Claiming lport e73f56f1-819e-4bd8-b0f0-c76d6f6671cb for this chassis.
Dec 5 05:08:00 localhost ovn_controller[153000]: 2025-12-05T10:08:00Z|00194|binding|INFO|e73f56f1-819e-4bd8-b0f0-c76d6f6671cb: Claiming unknown
Dec 5 05:08:00 localhost systemd-udevd[313217]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:08:00 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e127: 6 total, 6 up, 6 in
Dec 5 05:08:00 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:00.615 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-6273e462-83ab-41be-b934-cf2bff8ed29f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6273e462-83ab-41be-b934-cf2bff8ed29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5c1d3be695d4b14816e6ea97315cd6f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e58439ee-eccd-47f0-8ace-17ae10dd8289, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e73f56f1-819e-4bd8-b0f0-c76d6f6671cb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:08:00 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:00.617 158820 INFO neutron.agent.ovn.metadata.agent [-] Port e73f56f1-819e-4bd8-b0f0-c76d6f6671cb in datapath 6273e462-83ab-41be-b934-cf2bff8ed29f bound to our chassis
Dec 5 05:08:00 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:00.620 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port dce920a3-c555-46a5-adaf-658bcb8178f1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 5 05:08:00 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:00.621 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6273e462-83ab-41be-b934-cf2bff8ed29f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:08:00 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:00.621 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[f71c3a89-3e09-40c7-9026-1c1f9c3f88a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:08:00 localhost journal[228791]: ethtool ioctl error on tape73f56f1-81: No such device
Dec 5 05:08:00 localhost ovn_controller[153000]: 2025-12-05T10:08:00Z|00195|binding|INFO|Setting lport e73f56f1-819e-4bd8-b0f0-c76d6f6671cb ovn-installed in OVS
Dec 5 05:08:00 localhost ovn_controller[153000]: 2025-12-05T10:08:00Z|00196|binding|INFO|Setting lport e73f56f1-819e-4bd8-b0f0-c76d6f6671cb up in Southbound
Dec 5 05:08:00 localhost nova_compute[280228]: 2025-12-05 10:08:00.633 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:00 localhost nova_compute[280228]: 2025-12-05 10:08:00.635 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:00 localhost journal[228791]: ethtool ioctl error on tape73f56f1-81: No such device
Dec 5 05:08:00 localhost journal[228791]: ethtool ioctl error on tape73f56f1-81: No such device
Dec 5 05:08:00 localhost journal[228791]: ethtool ioctl error on tape73f56f1-81: No such device
Dec 5 05:08:00 localhost journal[228791]: ethtool ioctl error on tape73f56f1-81: No such device
Dec 5 05:08:00 localhost journal[228791]: ethtool ioctl error on tape73f56f1-81: No such device
Dec 5 05:08:00 localhost journal[228791]: ethtool ioctl error on tape73f56f1-81: No such device
Dec 5 05:08:00 localhost journal[228791]: ethtool ioctl error on tape73f56f1-81: No such device
Dec 5 05:08:00 localhost nova_compute[280228]: 2025-12-05 10:08:00.693 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:00 localhost nova_compute[280228]: 2025-12-05 10:08:00.730 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 05:08:01 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 05:08:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 5 05:08:01 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:08:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:08:01 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:08:01 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 3973fa65-91ba-4a52-a9a7-38ee0995fb48 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:08:01 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 3973fa65-91ba-4a52-a9a7-38ee0995fb48 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:08:01 localhost ceph-mgr[286454]: [progress INFO root] Completed event 3973fa65-91ba-4a52-a9a7-38ee0995fb48 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 5 05:08:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 05:08:01 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 05:08:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 4.1 KiB/s wr, 52 op/s
Dec 5 05:08:01 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:08:01 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:08:01 localhost podman[313306]:
Dec 5 05:08:01 localhost podman[313306]: 2025-12-05 10:08:01.662999501 +0000 UTC m=+0.092533611 container create 3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6273e462-83ab-41be-b934-cf2bff8ed29f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 5 05:08:01 localhost systemd[1]: Started libpod-conmon-3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117.scope.
Dec 5 05:08:01 localhost podman[313306]: 2025-12-05 10:08:01.62045948 +0000 UTC m=+0.049993670 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:08:01 localhost systemd[1]: tmp-crun.YMJLcw.mount: Deactivated successfully.
Dec 5 05:08:01 localhost systemd[1]: Started libcrun container.
Dec 5 05:08:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba37949dcf31e57446022d7b167f1ef6fa1445f99fa854fd0b815c9220bd60a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:08:01 localhost podman[313306]: 2025-12-05 10:08:01.768976293 +0000 UTC m=+0.198510403 container init 3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6273e462-83ab-41be-b934-cf2bff8ed29f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:08:01 localhost podman[313306]: 2025-12-05 10:08:01.783182377 +0000 UTC m=+0.212716487 container start 3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6273e462-83ab-41be-b934-cf2bff8ed29f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 5 05:08:01 localhost dnsmasq[313325]: started, version 2.85 cachesize 150
Dec 5 05:08:01 localhost dnsmasq[313325]: DNS service limited to local subnets
Dec 5 05:08:01 localhost dnsmasq[313325]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:08:01 localhost dnsmasq[313325]: warning: no upstream servers configured
Dec 5 05:08:01 localhost dnsmasq-dhcp[313325]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:08:01 localhost dnsmasq[313325]: read /var/lib/neutron/dhcp/6273e462-83ab-41be-b934-cf2bff8ed29f/addn_hosts - 0 addresses
Dec 5 05:08:01 localhost dnsmasq-dhcp[313325]: read /var/lib/neutron/dhcp/6273e462-83ab-41be-b934-cf2bff8ed29f/host
Dec 5 05:08:01 localhost dnsmasq-dhcp[313325]: read /var/lib/neutron/dhcp/6273e462-83ab-41be-b934-cf2bff8ed29f/opts
Dec 5 05:08:01 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:01.962 261902 INFO neutron.agent.dhcp.agent [None req-a1c68f23-9bdb-4288-919c-a2c32a83d518 - - - - - -] DHCP configuration for ports {'a8c0b264-bb40-45b7-a003-6bce11f26921'} is completed
Dec 5 05:08:02 localhost nova_compute[280228]: 2025-12-05 10:08:02.006 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:02 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:02.281 2 INFO neutron.agent.securitygroups_rpc [None req-1102cf77-141c-44b1-a68f-102edd6b04af aa5c05ef703b4c5c829b56913fd95190 4b28cfa3b851441a981f2fa213cf5388 - - default default] Security group member updated ['b0dca337-aa85-43fc-b2b1-0bd096c3b725']
Dec 5 05:08:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 3.6 KiB/s wr, 46 op/s
Dec 5 05:08:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:03.914 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:08:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:03.915 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:08:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:03.915 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:08:04 localhost nova_compute[280228]: 2025-12-05 10:08:04.172 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:04 localhost nova_compute[280228]: 2025-12-05 10:08:04.978 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 39 op/s
Dec 5 05:08:05 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events
Dec 5 05:08:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 05:08:05 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:08:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:08:05 localhost nova_compute[280228]: 2025-12-05 10:08:05.919 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:06 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:08:07 localhost nova_compute[280228]: 2025-12-05 10:08:07.036 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:08:08 localhost ovn_controller[153000]: 2025-12-05T10:08:08Z|00197|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:08:08 localhost nova_compute[280228]: 2025-12-05 10:08:08.501 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
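The Acquiring/acquired/released triple logged by ovn_metadata_agent a few records above is oslo.concurrency's standard instrumentation: ProcessMonitor._check_child_processes runs under a named lock, and the decorator's inner wrapper emits those DEBUG lines from lockutils.py:404/409/423. A minimal sketch of the same pattern as it appears in the log (the method body is a placeholder, not Neutron's actual check):

```python
from oslo_concurrency import lockutils

class ProcessMonitor(object):
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes(self):
        # The synchronized() wrapper logs 'Acquiring lock ...', 'Lock ...
        # acquired ... waited Ns' and 'Lock ... released ... held Ns' around
        # this body, which is exactly the triple of DEBUG records seen above.
        pass
```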
Dec 5 05:08:08 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:08.521 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:08:08Z, description=, device_id=e9b9b46c-c6b1-4f5a-aa9e-6274b5da1754, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4277a41a-cfa7-4464-aae1-51046c5182e7, ip_allocation=immediate, mac_address=fa:16:3e:68:89:43, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:07:57Z, description=, dns_domain=, id=6273e462-83ab-41be-b934-cf2bff8ed29f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-817477016-network, port_security_enabled=True, project_id=e5c1d3be695d4b14816e6ea97315cd6f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8261, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1327, status=ACTIVE, subnets=['9ef1fb91-7d20-4796-8b22-353ff0ecac93'], tags=[], tenant_id=e5c1d3be695d4b14816e6ea97315cd6f, updated_at=2025-12-05T10:07:58Z, vlan_transparent=None, network_id=6273e462-83ab-41be-b934-cf2bff8ed29f, port_security_enabled=False, project_id=e5c1d3be695d4b14816e6ea97315cd6f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1393, status=DOWN, tags=[], tenant_id=e5c1d3be695d4b14816e6ea97315cd6f, updated_at=2025-12-05T10:08:08Z on network 6273e462-83ab-41be-b934-cf2bff8ed29f
Dec 5 05:08:08 localhost dnsmasq[313325]: read /var/lib/neutron/dhcp/6273e462-83ab-41be-b934-cf2bff8ed29f/addn_hosts - 1 addresses
Dec 5 05:08:08 localhost dnsmasq-dhcp[313325]: read /var/lib/neutron/dhcp/6273e462-83ab-41be-b934-cf2bff8ed29f/host
Dec 5 05:08:08 localhost dnsmasq-dhcp[313325]: read /var/lib/neutron/dhcp/6273e462-83ab-41be-b934-cf2bff8ed29f/opts
Dec 5 05:08:08 localhost podman[313341]: 2025-12-05 10:08:08.767342081 +0000 UTC m=+0.061657517 container kill 3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6273e462-83ab-41be-b934-cf2bff8ed29f, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:08:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:09.052 261902 INFO neutron.agent.dhcp.agent [None req-8f3009db-28e3-4d59-8837-35faa4df51aa - - - - - -] DHCP configuration for ports {'4277a41a-cfa7-4464-aae1-51046c5182e7'} is completed
Dec 5 05:08:09 localhost nova_compute[280228]: 2025-12-05 10:08:09.176 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:08:10 localhost ovn_controller[153000]: 2025-12-05T10:08:10Z|00198|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:08:10 localhost nova_compute[280228]: 2025-12-05 10:08:10.170 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:08:10 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:10.616 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:08:10 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:10.618 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 5 05:08:10 localhost nova_compute[280228]: 2025-12-05 10:08:10.616 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:10 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:10.772 2 INFO neutron.agent.securitygroups_rpc [None req-47d393f7-6bb2-4cd6-87e1-92a7bd760d87 866ff8446ba1414ca637bc2541e2b20c 1e403462f5fd4d6cbcd026f0f727dd2a - - default default] Security group member updated ['5c745050-d1d8-421b-8aa7-80574a4f3dcd']
Dec 5 05:08:10 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:10.831 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:08:08Z, description=, device_id=e9b9b46c-c6b1-4f5a-aa9e-6274b5da1754, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4277a41a-cfa7-4464-aae1-51046c5182e7, ip_allocation=immediate, mac_address=fa:16:3e:68:89:43, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:07:57Z, description=, dns_domain=, id=6273e462-83ab-41be-b934-cf2bff8ed29f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-817477016-network, port_security_enabled=True, project_id=e5c1d3be695d4b14816e6ea97315cd6f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=8261, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1327, status=ACTIVE, subnets=['9ef1fb91-7d20-4796-8b22-353ff0ecac93'], tags=[], tenant_id=e5c1d3be695d4b14816e6ea97315cd6f, updated_at=2025-12-05T10:07:58Z, vlan_transparent=None, network_id=6273e462-83ab-41be-b934-cf2bff8ed29f, port_security_enabled=False, project_id=e5c1d3be695d4b14816e6ea97315cd6f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1393, status=DOWN, tags=[], tenant_id=e5c1d3be695d4b14816e6ea97315cd6f, updated_at=2025-12-05T10:08:08Z on network 6273e462-83ab-41be-b934-cf2bff8ed29f
Dec 5 05:08:11 localhost dnsmasq[313325]: read /var/lib/neutron/dhcp/6273e462-83ab-41be-b934-cf2bff8ed29f/addn_hosts - 1 addresses
Dec 5 05:08:11 localhost dnsmasq-dhcp[313325]: read /var/lib/neutron/dhcp/6273e462-83ab-41be-b934-cf2bff8ed29f/host
Dec 5 05:08:11 localhost dnsmasq-dhcp[313325]: read /var/lib/neutron/dhcp/6273e462-83ab-41be-b934-cf2bff8ed29f/opts
Dec 5 05:08:11 localhost podman[313380]: 2025-12-05 10:08:11.08825591 +0000 UTC m=+0.056463258 container kill 3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6273e462-83ab-41be-b934-cf2bff8ed29f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 5 05:08:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v232: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 5 05:08:11 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:11.338 261902 INFO neutron.agent.dhcp.agent [None req-5ff4e0cb-020f-46ef-bfb7-15abde5c008d - - - - - -] DHCP configuration for ports {'4277a41a-cfa7-4464-aae1-51046c5182e7'} is completed
Dec 5 05:08:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:08:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:08:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:08:11 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:11.674 261902 INFO neutron.agent.linux.ip_lib [None req-e2eac1fe-d31d-4ef8-a34a-7e3aa590cbe2 - - - - - -] Device tap068e8d44-ef cannot be used as it has no MAC address
Dec 5 05:08:11 localhost systemd[1]: tmp-crun.3IkCeL.mount: Deactivated successfully.
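The kill-then-reread pattern above (a podman "container kill" immediately followed by dnsmasq re-reading addn_hosts/host/opts) is how the DHCP agent publishes new allocations: dnsmasq reloads its hosts and options files on SIGHUP, so the agent signals the running process instead of restarting the container. A minimal sketch of that reload, assuming the dnsmasq PID is known (here taken from the log records; the agent normally reads it from a pid file):

```python
import os
import signal

DNSMASQ_PID = 313325  # PID seen in the dnsmasq[313325] records above

# dnsmasq re-reads --addn-hosts, --dhcp-hostsfile and --dhcp-optsfile on
# SIGHUP, which produces the "read .../addn_hosts - 1 addresses" lines.
os.kill(DNSMASQ_PID, signal.SIGHUP)
```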
Dec 5 05:08:11 localhost podman[313403]: 2025-12-05 10:08:11.708677467 +0000 UTC m=+0.103938710 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 05:08:11 localhost nova_compute[280228]: 2025-12-05 10:08:11.707 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:11 localhost kernel: device tap068e8d44-ef entered promiscuous mode
Dec 5 05:08:11 localhost ovn_controller[153000]: 2025-12-05T10:08:11Z|00199|binding|INFO|Claiming lport 068e8d44-efdb-4aca-87ca-3b5e6009fd59 for this chassis.
Dec 5 05:08:11 localhost ovn_controller[153000]: 2025-12-05T10:08:11Z|00200|binding|INFO|068e8d44-efdb-4aca-87ca-3b5e6009fd59: Claiming unknown
Dec 5 05:08:11 localhost NetworkManager[5960]: [1764929291.7195] manager: (tap068e8d44-ef): new Generic device (/org/freedesktop/NetworkManager/Devices/36)
Dec 5 05:08:11 localhost systemd-udevd[313451]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:08:11 localhost nova_compute[280228]: 2025-12-05 10:08:11.722 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:11 localhost ovn_controller[153000]: 2025-12-05T10:08:11Z|00201|binding|INFO|Setting lport 068e8d44-efdb-4aca-87ca-3b5e6009fd59 ovn-installed in OVS
Dec 5 05:08:11 localhost ovn_controller[153000]: 2025-12-05T10:08:11Z|00202|binding|INFO|Setting lport 068e8d44-efdb-4aca-87ca-3b5e6009fd59 up in Southbound
Dec 5 05:08:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:11.728 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-781f4cf1-2e11-488d-ab63-fac6b9c67b6c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-781f4cf1-2e11-488d-ab63-fac6b9c67b6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8da57e2736240a0ac7055e85adea6da', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7632c897-a912-4427-a8fb-325ad3ea4a25, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=068e8d44-efdb-4aca-87ca-3b5e6009fd59) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:08:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:11.730 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 068e8d44-efdb-4aca-87ca-3b5e6009fd59 in datapath 781f4cf1-2e11-488d-ab63-fac6b9c67b6c bound to our chassis
Dec 5 05:08:11 localhost nova_compute[280228]: 2025-12-05 10:08:11.730 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:11.733 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 781f4cf1-2e11-488d-ab63-fac6b9c67b6c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:08:11 localhost nova_compute[280228]: 2025-12-05 10:08:11.734 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:11.734 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[cefde1fb-1b32-4eba-a1df-b00e89918a06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
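The "Matched UPDATE: PortBindingUpdatedEvent(...)" record above is ovsdbapp's event dispatch at work: the metadata agent registers row events against the OVN Southbound IDL, every Port_Binding update is tested against them, and a match hands the row to the event's handler (which here concludes the port is bound to this chassis). A minimal sketch of that pattern, assuming ovsdbapp's RowEvent base class; the handler body is a placeholder, not the agent's real logic:

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    """Match any update to a Port_Binding row, as in the log above."""

    def __init__(self):
        # (events, table, conditions) - the same triple shown in the
        # "Matched UPDATE" record: events=('update',), table='Port_Binding'.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # Called for each matched row; a real agent would check whether the
        # port is now bound to the local chassis and provision metadata.
        print('Port_Binding %s updated' % row.logical_port)
```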
Dec 5 05:08:11 localhost podman[313403]: 2025-12-05 10:08:11.739462239 +0000 UTC m=+0.134723332 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 05:08:11 localhost journal[228791]: ethtool ioctl error on tap068e8d44-ef: No such device
Dec 5 05:08:11 localhost nova_compute[280228]: 2025-12-05 10:08:11.758 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:11 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:08:11 localhost journal[228791]: ethtool ioctl error on tap068e8d44-ef: No such device
Dec 5 05:08:11 localhost journal[228791]: ethtool ioctl error on tap068e8d44-ef: No such device
Dec 5 05:08:11 localhost journal[228791]: ethtool ioctl error on tap068e8d44-ef: No such device
Dec 5 05:08:11 localhost journal[228791]: ethtool ioctl error on tap068e8d44-ef: No such device
Dec 5 05:08:11 localhost journal[228791]: ethtool ioctl error on tap068e8d44-ef: No such device
Dec 5 05:08:11 localhost journal[228791]: ethtool ioctl error on tap068e8d44-ef: No such device
Dec 5 05:08:11 localhost journal[228791]: ethtool ioctl error on tap068e8d44-ef: No such device
Dec 5 05:08:11 localhost nova_compute[280228]: 2025-12-05 10:08:11.803 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:11 localhost podman[313405]: 2025-12-05 10:08:11.771590121 +0000 UTC m=+0.162740489 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 5 05:08:11 localhost nova_compute[280228]: 2025-12-05 10:08:11.844 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:11 localhost podman[313405]: 2025-12-05 10:08:11.851359331 +0000 UTC m=+0.242509719 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:08:11 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:08:11 localhost podman[313402]: 2025-12-05 10:08:11.823327524 +0000 UTC m=+0.223631242 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 05:08:11 localhost podman[313402]: 2025-12-05 10:08:11.908654193 +0000 UTC m=+0.308957941 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 05:08:11 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:08:12 localhost nova_compute[280228]: 2025-12-05 10:08:12.086 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:08:12 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:12.414 2 INFO neutron.agent.securitygroups_rpc [None req-81969956-331a-4777-bdc3-13cdd435ac12 866ff8446ba1414ca637bc2541e2b20c 1e403462f5fd4d6cbcd026f0f727dd2a - - default default] Security group member updated ['5c745050-d1d8-421b-8aa7-80574a4f3dcd']
Dec 5 05:08:12 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:12.486 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:08:12 localhost podman[313540]:
Dec 5 05:08:12 localhost podman[313540]: 2025-12-05 10:08:12.706893139 +0000 UTC m=+0.070388544 container create 28594d5a163d688695de865c11537f2bc4e5012a774fc54bbc6087e3fb832944 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-781f4cf1-2e11-488d-ab63-fac6b9c67b6c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 5 05:08:12 localhost systemd[1]: Started libpod-conmon-28594d5a163d688695de865c11537f2bc4e5012a774fc54bbc6087e3fb832944.scope.
Dec 5 05:08:12 localhost podman[313540]: 2025-12-05 10:08:12.664466751 +0000 UTC m=+0.027962186 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:08:12 localhost systemd[1]: Started libcrun container.
Dec 5 05:08:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bcdf80542fcf1c81fd0e50b65070739eaeaca537ded62682a3de03cba1906e3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:08:12 localhost podman[313540]: 2025-12-05 10:08:12.782508942 +0000 UTC m=+0.146004357 container init 28594d5a163d688695de865c11537f2bc4e5012a774fc54bbc6087e3fb832944 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-781f4cf1-2e11-488d-ab63-fac6b9c67b6c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:08:12 localhost podman[313540]: 2025-12-05 10:08:12.791394803 +0000 UTC m=+0.154890208 container start 28594d5a163d688695de865c11537f2bc4e5012a774fc54bbc6087e3fb832944 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-781f4cf1-2e11-488d-ab63-fac6b9c67b6c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:08:12 localhost dnsmasq[313559]: started, version 2.85 cachesize 150
Dec 5 05:08:12 localhost dnsmasq[313559]: DNS service limited to local subnets
Dec 5 05:08:12 localhost dnsmasq[313559]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:08:12 localhost dnsmasq[313559]: warning: no upstream servers configured
Dec 5 05:08:12 localhost dnsmasq-dhcp[313559]: DHCPv6, static leases only on 2001:db8:0:ffff::, lease time 1d
Dec 5 05:08:12 localhost dnsmasq[313559]: read /var/lib/neutron/dhcp/781f4cf1-2e11-488d-ab63-fac6b9c67b6c/addn_hosts - 0 addresses
Dec 5 05:08:12 localhost dnsmasq-dhcp[313559]: read /var/lib/neutron/dhcp/781f4cf1-2e11-488d-ab63-fac6b9c67b6c/host
Dec 5 05:08:12 localhost dnsmasq-dhcp[313559]: read /var/lib/neutron/dhcp/781f4cf1-2e11-488d-ab63-fac6b9c67b6c/opts
Dec 5 05:08:12 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:12.929 261902 INFO neutron.agent.dhcp.agent [None req-802b2030-3fb3-4027-b750-6d92a2b61f5e - - - - - -] DHCP configuration for ports {'0c84dc24-3278-425c-946e-3104bda54a2b'} is completed
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.951 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.952 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.983 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.983 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bb323a03-df13-467f-8266-27c76f7ced1f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:08:12.952840', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4f1fe6d8-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.127230418, 'message_signature': '1e23a6e8b963194925b13696ce11b2261bff68115cc769d9fc5ed6ff098b9c9d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:08:12.952840', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4f1ffa42-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.127230418, 'message_signature': '482ec18b6a1871751a0256a99d5ebe4f57f330ad699f03c752f745e8e678b59c'}]}, 'timestamp': '2025-12-05 10:08:12.984456', '_unique_id': '1138d563c4894e9d9a2ab23c7f83cac3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.986 12 ERROR oslo_messaging.notify.messaging
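The whole traceback above reduces to one fact: the TCP connect to the AMQP broker was refused. kombu catches the socket-level ConnectionRefusedError in _reraise_as_library_errors and re-raises it as kombu.exceptions.OperationalError, which oslo.messaging then reports for every sample batch it fails to publish. A minimal sketch that reproduces the same wrapping against an unreachable broker (the URL is a placeholder; the log does not show this deployment's transport_url):

```python
from kombu import Connection

# Placeholder broker URL pointing at a port where nothing is listening.
conn = Connection('amqp://guest:guest@127.0.0.1:5672//')
try:
    # Same call chain as impl_rabbit's ensure_connection() in the traceback.
    conn.ensure_connection(max_retries=1)
except Exception as exc:
    # kombu re-raises the ConnectionRefusedError as OperationalError.
    print(type(exc).__name__, exc)  # OperationalError [Errno 111] ...
```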
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.987 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.987 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.988 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7f038572-f885-42c1-bc33-5c12faf70d58', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:08:12.987660', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4f208c0a-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.127230418, 'message_signature': 'f67a939d91094b9c258ad0b3a4876bb78577d24474150195a8c72657dfa87bdb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:08:12.987660', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4f209f38-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.127230418, 'message_signature': '2c38021dafa2cdb4ea07cd9c469b3c7825c0617c7fe15777345953b7c82efae1'}]}, 'timestamp': '2025-12-05 10:08:12.988614', '_unique_id': 'c2364d80e94941998df4cef72f2d0c3a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.989 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:12.990 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.008 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 16270000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7f6637bc-8277-4fc1-a2ae-d9e9b54c75e7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 16270000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:08:12.990975', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '4f23bd3a-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.182725416, 'message_signature': 'e6b2afe820c339a78a0461bc5eb2d66dedc575bf7520b8f31786304639073c2c'}]}, 'timestamp': '2025-12-05 10:08:13.009068', '_unique_id': '42d9adc5c49a46a6a25dcc750883bf51'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging
self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.010 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.011 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.011 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.014 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'de1887c0-9045-48e8-8676-1e922beb7f2f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:08:13.011629', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '4f24ae2a-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.185933274, 'message_signature': '66a10416abe784b81b7bacdfa69a64da13254b148f942268d17599def7c80116'}]}, 'timestamp': '2025-12-05 10:08:13.015239', '_unique_id': '4e53f8fd1526460bbcd728d8da3ca05a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:08:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.016 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.017 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.018 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd956aa2b-142d-40bb-90f3-3e04872477f1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:08:13.017982', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '4f252c92-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.185933274, 'message_signature': '2f849e4e75095a67a5b10b0e7b3202508a7f299db0c8c6754321f867ab86d4f9'}]}, 'timestamp': '2025-12-05 10:08:13.018508', '_unique_id': '3874d91658ce41f5badd882e1a22aeff'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:08:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.019 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.020 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.033 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.033 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '06739d51-21f4-444c-8262-b3990396db73', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:08:13.021076', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4f278104-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.195386343, 'message_signature': 'f6aec2e6d0a459639f2710116fa831dd7226496ecda9f22b9e8147951521a1e7'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:08:13.021076', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4f279806-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.195386343, 'message_signature': '2b6f3b2e311f625cf41beb3344588a77ca629576e9fa7cf0c98390a77570d9c0'}]}, 'timestamp': '2025-12-05 10:08:13.034353', '_unique_id': 'e7ec383dd8774bcd9410ba00ea75ed2f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.035 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.037 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.037 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.037 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.038 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': '3d822bfe-4087-48ef-81bb-37e2491fcad6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:08:13.037566', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4f2829c4-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.127230418, 'message_signature': '4a9131b4abf30f6a7f3568b309d7c26063529b379dd820856c80b9449127098d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:08:13.037566', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4f284080-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.127230418, 'message_signature': 'fa0b273df3aa581d8ef976142e122668d333a71d29629a2d811a7ed3a761e964'}]}, 'timestamp': '2025-12-05 10:08:13.038623', '_unique_id': '0c638e36b7ec4866a0df30c5b6ba4a14'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR 
oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.039 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.041 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.041 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.041 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '01d139af-2a07-4f33-980c-d9c74de0ad68', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:08:13.041297', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4f28bc72-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.127230418, 'message_signature': '67199f3b3a7b9fd657e33cfbe100107b023142b1204e896f92f3954472c139df'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:08:13.041297', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4f28cdb6-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.127230418, 'message_signature': 'b696076941869c948600fe20d7c347d6ce29bc9c143d209a21a963d5c9368bf8'}]}, 'timestamp': '2025-12-05 10:08:13.042239', '_unique_id': '28c77c67fcdb4748b7983a37d464b771'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.043 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.045 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.045 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '86101f76-286a-4158-b70f-3b3f500ce068', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:08:13.045573', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '4f296550-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.185933274, 'message_signature': '9944e61930d5f513fb928a579839a169a77bb3f8662229495596ab1becac3ccf'}]}, 'timestamp': '2025-12-05 10:08:13.046154', '_unique_id': 'ae24a6a72c5b45f594cdee5e69d076bf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.047 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.048 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.048 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '419ba52c-decf-4a85-8391-9a740c877d85', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:08:13.048833', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '4f29e174-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.185933274, 'message_signature': 'c7254c4eaa573c064d25a951dce3ccc89acc452e365ef6646c33eb1852189932'}]}, 'timestamp': '2025-12-05 10:08:13.049423', '_unique_id': '43930ceeb28648a59e00733fccbe17ba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:08:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.050 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.051 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.052 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd941e84b-b533-4854-809d-9a903b2e8302', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:08:13.052135', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '4f2a75a8-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.182725416, 'message_signature': '61fa3fb23e639bc12e2aac256896b92fccda7bc9a4d4c4b46441668a297e2d7d'}]}, 'timestamp': '2025-12-05 10:08:13.053095', '_unique_id': 'b28cd5334416470e837579e854710b7e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:08:13.054 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.054 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.055 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.055 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '76911086-6c5a-4e90-9224-c9aba7d4fdc6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:08:13.055229', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '4f2ad926-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.185933274, 'message_signature': '83f5ce57e1c1462c560a004c7ec15a95d23a4df8f0cc2c50c35828b934be834d'}]}, 'timestamp': '2025-12-05 10:08:13.055564', '_unique_id': '38f4a2c3f6114acaa7c5865d4e6d13a0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.056 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.057 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.057 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1140188b-fd74-44c2-b0dc-2792e9f1267d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:08:13.057106', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '4f2b21c4-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.185933274, 'message_signature': '6a9c70dd917819b18cbcdfac1465c0b09c8518c67ee997e0110a5a1f250b388a'}]}, 'timestamp': '2025-12-05 10:08:13.057489', '_unique_id': 'a93c7383f2a641d8a4911a3940ff76b8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.058 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.059 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.059 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0fc4e293-85b3-4f53-9f47-46cd78608edb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:08:13.059796', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '4f2b8cfe-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.185933274, 'message_signature': 'ae9c5ad6b472cb6ea926c87f23c13fe63ab6c1788e87143a255f6b9feb5a7b34'}]}, 'timestamp': '2025-12-05 10:08:13.060390', '_unique_id': 'aefd4a99739e48659365e8c4b774cd5b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.061 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.062 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.062 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.062 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.062 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6e11791a-5b0f-4fc7-8624-1297ae660b60', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:08:13.062479', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4f2bf284-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.127230418, 'message_signature': '0fc90eb785e1a4fd1848cff02ea07af4e07fff94c2344d0560a6044bf875c4f0'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:08:13.062479', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4f2bfc70-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.127230418, 'message_signature': '28e2a3c4ee053972fb3da12cc46d47d0b8e59e80238bfaad4f899f1f55756697'}]}, 'timestamp': '2025-12-05 10:08:13.062996', '_unique_id': '5f1e6bb9fec542929f68f797a83215c5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.063 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.064 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.064 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.064 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd4c645f5-55d0-4a9a-a0a0-61bb159eff90', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:08:13.064469', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4f2c4040-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.195386343, 'message_signature': 'da774702f25ed6dda033e81741fd96f8b11e18df99b8a6d21924866272519540'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:08:13.064469', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4f2c4a40-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.195386343, 'message_signature': '51d9d996cb676823a4badd08c82dd1a0efbdf72e3c6a8a922d9de627464f7360'}]}, 'timestamp': '2025-12-05 10:08:13.064988', '_unique_id': 'a380107777e544a8a1063d7ba96624ba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py",
line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.065 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.066 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.066 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.066 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8bc0bc12-9733-49ce-aac6-20dd80b02421', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:08:13.066426', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4f2c8cd0-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.127230418, 'message_signature': '1ecc9531a20518d7342e1966132773eda1a6934e8414868b35685a0bbcf278ab'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:08:13.066426', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4f2c96bc-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.127230418, 'message_signature': '30fc80dcb3c4c3ed36317d14e09b4a04285bde989513225c6bc0321636e754cb'}]}, 'timestamp': '2025-12-05 10:08:13.066945', '_unique_id': 'd5262637823549ddb0358d13d259a809'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.067 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.068 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.068 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.068 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '85f36cc9-5b7d-48f7-9b18-43af8036503d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:08:13.068492', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '4f2cddc0-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.185933274, 'message_signature': 'c97ce9dd3c5a312d2c5bcdcf543160830ad98d8c48c13ad04681b0f86e753f88'}]}, 'timestamp': '2025-12-05 10:08:13.068783', '_unique_id': 'e895769609e049779741db2c17432877'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.069 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.070 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.070 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.070 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '32f4392c-b23a-4ff7-8ba7-f53fd285d340', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:08:13.070124', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4f2d1dc6-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.195386343, 'message_signature': '05a562a33c9265143ee91574dc555a8f28fea1e22f552ef89201840700b8d88c'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:08:13.070124', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4f2d2a64-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.195386343, 'message_signature': '435f4ac0383feacb8fa9f2c16861d0c2f2e3a22718d15b87d09a6e58c4f5bd65'}]}, 'timestamp': '2025-12-05 10:08:13.070732', '_unique_id': '7a54707945204691b2d14170f23223e7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.071 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.072 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.072 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a4a19413-7f32-450d-acfc-cb9d3c4cde1f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:08:13.072095', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '4f2d6a2e-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.185933274, 'message_signature': '1abd9b62586ba0d222e0b1fbf0e8206659e536eaa05661e62374a13a0f46f293'}]}, 'timestamp': '2025-12-05 10:08:13.072414', '_unique_id': 'e15a9baba84a4797b82d36afdb5cdd30'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:08:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.073 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a72ae179-740b-41ce-aaa7-5033dd334981', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:08:13.073847', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '4f2dae94-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12408.185933274, 'message_signature': 'd03478fb32e472c84884b3900a6c2dfdd952a7658ff4c8eaabc64e11485ee7dd'}]}, 'timestamp': '2025-12-05 10:08:13.074130', '_unique_id': 'fe44b9afff674e8db42b190e0692dd35'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:08:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:08:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:08:13.074 12 ERROR oslo_messaging.notify.messaging Dec 5 05:08:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:08:13 localhost nova_compute[280228]: 2025-12-05 10:08:13.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:08:14 localhost nova_compute[280228]: 2025-12-05 10:08:14.178 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:14 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:14.587 2 INFO neutron.agent.securitygroups_rpc [None req-d79b5d17-8f31-461a-b302-40e91cde7849 a052c73754704caaa399378c7e50192a f8da57e2736240a0ac7055e85adea6da - - default default] Security group member updated ['4a643f0b-9f81-4463-adc9-4f8f421f9506']#033[00m Dec 5 05:08:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:08:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:08:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:08:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:08:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:08:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
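Both ceilometer notification failures above bottom out in the same socket-level ConnectionRefusedError: nothing was accepting TCP connections on the AMQP endpoint when the agent tried to publish its samples. A minimal connectivity probe that goes through the same kombu calls the traceback shows (ensure_connection -> establish_connection -> sock.connect); the transport URL below is a placeholder, not a value taken from this log:

    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Placeholder: substitute the transport_url from the agent's oslo.messaging config.
    BROKER_URL = "amqp://user:password@rabbit-host:5672//"

    def check_broker(url: str) -> bool:
        conn = Connection(url, connect_timeout=5)
        try:
            # Same path as oslo.messaging's impl_rabbit: ensure_connection()
            # drives transport.establish_connection() and the TCP connect.
            conn.ensure_connection(max_retries=1)
            return True
        except OperationalError as exc:
            # "[Errno 111] Connection refused" here reproduces the failure above.
            print(f"broker unreachable: {exc}")
            return False
        finally:
            conn.release()

    if __name__ == "__main__":
        print("broker reachable:", check_broker(BROKER_URL))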
Dec 5 05:08:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:08:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:08:16 localhost nova_compute[280228]: 2025-12-05 10:08:16.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:08:16 localhost nova_compute[280228]: 2025-12-05 10:08:16.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:08:16 localhost nova_compute[280228]: 2025-12-05 10:08:16.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:08:16 localhost nova_compute[280228]: 2025-12-05 10:08:16.616 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:08:16 localhost nova_compute[280228]: 2025-12-05 10:08:16.616 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:08:16 localhost nova_compute[280228]: 2025-12-05 10:08:16.617 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:08:16 localhost nova_compute[280228]: 2025-12-05 10:08:16.617 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:08:17 localhost nova_compute[280228]: 2025-12-05 10:08:17.090 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v235: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:08:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e127 do_prune osdmap full prune enabled Dec 5 05:08:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e128 e128: 6 total, 6 up, 6 in Dec 5 05:08:17 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e128: 6 total, 6 up, 6 in Dec 5 05:08:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:08:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
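Every record in this capture follows the same syslog shape, a "MMM d HH:MM:SS host unit[pid]:" prefix followed by the service's own message, which makes it easy to slice the stream by service when tracing an incident across ceilometer, nova, neutron, and ceph. A small parsing sketch under that assumption (the field names are illustrative, not part of any log-format spec):

    import re

    # Assumed shape: "Dec 5 05:08:15 localhost ceph-mgr[286454]: message"
    SYSLOG_RE = re.compile(
        r"^(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
        r"(?P<host>\S+) "
        r"(?P<unit>[^\[:]+)(?:\[(?P<pid>\d+)\])?: "
        r"(?P<msg>.*)$"
    )

    def parse_line(line: str):
        m = SYSLOG_RE.match(line)
        return m.groupdict() if m else None

    rec = parse_line("Dec 5 05:08:15 localhost ceph-mgr[286454]: "
                     "[volumes INFO mgr_util] scanning for idle connections..")
    print(rec["unit"], "->", rec["msg"])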
Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.188 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.204 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.205 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.206 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:08:18 localhost podman[313560]: 2025-12-05 10:08:18.21226469 +0000 UTC m=+0.096106910 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0) Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.275 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.276 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.276 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.276 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.276 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:08:18 localhost podman[313561]: 2025-12-05 10:08:18.281986652 +0000 UTC m=+0.159958013 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:08:18 localhost podman[313560]: 2025-12-05 10:08:18.291341459 +0000 UTC m=+0.175183729 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:08:18 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:08:18 localhost podman[313561]: 2025-12-05 10:08:18.314703723 +0000 UTC m=+0.192675084 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 05:08:18 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
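The resource-audit entries above show the pattern nova's resource tracker repeats on every periodic pass: serialize on the "compute_resources" lock through oslo.concurrency's lockutils, then shell out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" via processutils to read storage capacity. A compressed sketch of those two oslo.concurrency calls; the wrapper function and the field it returns are illustrative, not nova's actual code:

    import json
    from oslo_concurrency import lockutils, processutils

    @lockutils.synchronized('compute_resources')
    def audit_storage():
        # Mirrors the subprocess invocation logged above.
        out, _err = processutils.execute(
            'ceph', 'df', '--format=json',
            '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
        # 'total_avail_bytes' sits under 'stats' in ceph df's JSON output.
        return json.loads(out)['stats']['total_avail_bytes']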
Dec 5 05:08:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e128 do_prune osdmap full prune enabled Dec 5 05:08:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e129 e129: 6 total, 6 up, 6 in Dec 5 05:08:18 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e129: 6 total, 6 up, 6 in Dec 5 05:08:18 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:18.513 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:18 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:18.620 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:08:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:08:18 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3255465608' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.685 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.750 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.751 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.977 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.979 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11213MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.980 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:08:18 localhost nova_compute[280228]: 2025-12-05 10:08:18.980 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.068 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.069 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.069 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.113 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.182 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:08:19 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:19.504 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:08:19 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/28325255' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.587 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.593 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.622 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.626 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.627 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:08:19 localhost podman[239519]: time="2025-12-05T10:08:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:08:19 localhost podman[239519]: @ - - [05/Dec/2025:10:08:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 161587 "" "Go-http-client/1.1" Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.929 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.931 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:08:19 localhost nova_compute[280228]: 2025-12-05 10:08:19.932 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:08:19 localhost podman[239519]: @ - - [05/Dec/2025:10:08:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20685 "" "Go-http-client/1.1" Dec 5 05:08:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 5 05:08:20 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2819140810' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 5 05:08:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 5 05:08:20 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2819140810' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 5 05:08:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:08:21 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:21.071 2 INFO neutron.agent.securitygroups_rpc [None req-89312cbd-ae39-4130-8ecf-dec06673918d a052c73754704caaa399378c7e50192a f8da57e2736240a0ac7055e85adea6da - - default default] Security group member updated ['4a643f0b-9f81-4463-adc9-4f8f421f9506']#033[00m Dec 5 05:08:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v239: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 767 B/s wr, 5 op/s Dec 5 05:08:21 localhost nova_compute[280228]: 2025-12-05 10:08:21.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:08:21 localhost nova_compute[280228]: 2025-12-05 10:08:21.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:08:22 localhost nova_compute[280228]: 2025-12-05 10:08:22.095 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:22 localhost nova_compute[280228]: 2025-12-05 10:08:22.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:08:22 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:22.914 2 INFO neutron.agent.securitygroups_rpc [None req-42abf360-ac08-4368-8817-586fcc6d5d65 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']#033[00m Dec 5 05:08:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 1.1 KiB/s wr, 21 op/s Dec 5 05:08:23 localhost nova_compute[280228]: 2025-12-05 10:08:23.508 280232 DEBUG oslo_service.periodic_task 
[None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:08:24 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:24.095 2 INFO neutron.agent.securitygroups_rpc [None req-cb152c80-0262-4769-8dfc-c51753d9b263 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']#033[00m Dec 5 05:08:24 localhost nova_compute[280228]: 2025-12-05 10:08:24.187 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 1.1 KiB/s wr, 21 op/s Dec 5 05:08:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:08:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e129 do_prune osdmap full prune enabled Dec 5 05:08:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e130 e130: 6 total, 6 up, 6 in Dec 5 05:08:25 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e130: 6 total, 6 up, 6 in Dec 5 05:08:27 localhost nova_compute[280228]: 2025-12-05 10:08:27.099 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:08:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:08:27 localhost openstack_network_exporter[241668]: ERROR 10:08:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:08:27 localhost openstack_network_exporter[241668]: ERROR 10:08:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:08:27 localhost openstack_network_exporter[241668]: ERROR 10:08:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:08:27 localhost openstack_network_exporter[241668]: ERROR 10:08:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:08:27 localhost openstack_network_exporter[241668]: Dec 5 05:08:27 localhost openstack_network_exporter[241668]: ERROR 10:08:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:08:27 localhost openstack_network_exporter[241668]: Dec 5 05:08:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v243: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.6 KiB/s wr, 58 op/s Dec 5 05:08:27 localhost systemd[1]: tmp-crun.dfCfbL.mount: Deactivated successfully. 
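The inventory record logged at 10:08:19 above is how placement derives schedulable capacity from the raw hypervisor view: for each resource class, capacity = (total - reserved) * allocation_ratio, so the 8 physical VCPUs stretch to 128 schedulable units at the 16.0 ratio while memory and disk stay close to physical. A worked check against the logged numbers:

    # Values copied from the resource tracker's inventory entry above.
    inventory = {
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 41,    'reserved': 1,   'allocation_ratio': 1.0},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity:g} schedulable units")
    # -> VCPU: 128, MEMORY_MB: 15226, DISK_GB: 40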
Dec 5 05:08:27 localhost podman[313649]: 2025-12-05 10:08:27.22589575 +0000 UTC m=+0.105587931 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true, config_id=multipathd) Dec 5 05:08:27 localhost systemd[1]: tmp-crun.TGqgdQ.mount: Deactivated successfully. Dec 5 05:08:27 localhost podman[313650]: 2025-12-05 10:08:27.277507528 +0000 UTC m=+0.152965899 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, release=1755695350, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 5 05:08:27 localhost podman[313649]: 2025-12-05 10:08:27.293973262 +0000 UTC m=+0.173665443 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 05:08:27 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. 
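The recurring "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pairs are systemd transient units firing each container's one-shot healthcheck; the health_status=healthy field in the matching podman event carries the verdict. The same check can be run by hand; a sketch using a container name from the log, treating podman's exit status (0 healthy, non-zero otherwise) as the result:

    import subprocess

    def healthcheck(container: str) -> bool:
        # Equivalent to the transient unit: /usr/bin/podman healthcheck run <id>
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True)
        return result.returncode == 0

    print("multipathd healthy:", healthcheck("multipathd"))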
Dec 5 05:08:27 localhost podman[313650]: 2025-12-05 10:08:27.320737291 +0000 UTC m=+0.196195692 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.) Dec 5 05:08:27 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
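
The health_status / exec_died pairs above are the normal heartbeat of a podman healthcheck: a transient systemd unit named after the container ID runs the check, podman executes the configured test inside the container (here the /openstack/healthcheck script named in config_data), records the result, and the unit deactivates, which is why each cycle ends in a "<container-id>.service: Deactivated successfully." line. A minimal way to poke the same machinery by hand, assuming standard podman and the container names shown in this log:

    # Run the check once by hand; exit 0 means healthy.
    podman healthcheck run multipathd && echo "multipathd: healthy"

    # Last recorded status and failure streak live in the inspect data
    # (.State.Healthcheck instead of .State.Health on older podman).
    podman inspect \
        --format '{{.State.Health.Status}} ({{.State.Health.FailingStreak}} failed)' \
        openstack_network_exporter
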
Dec 5 05:08:28 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:28.295 261902 INFO neutron.agent.linux.ip_lib [None req-7dd69314-cca1-4d26-a0fd-bb5620a95cff - - - - - -] Device tapa8cf50c6-dc cannot be used as it has no MAC address#033[00m Dec 5 05:08:28 localhost nova_compute[280228]: 2025-12-05 10:08:28.363 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:28 localhost kernel: device tapa8cf50c6-dc entered promiscuous mode Dec 5 05:08:28 localhost nova_compute[280228]: 2025-12-05 10:08:28.370 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:28 localhost NetworkManager[5960]: [1764929308.3718] manager: (tapa8cf50c6-dc): new Generic device (/org/freedesktop/NetworkManager/Devices/37) Dec 5 05:08:28 localhost systemd-udevd[313723]: Network interface NamePolicy= disabled on kernel command line. Dec 5 05:08:28 localhost nova_compute[280228]: 2025-12-05 10:08:28.380 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:28 localhost dnsmasq[311885]: exiting on receipt of SIGTERM Dec 5 05:08:28 localhost podman[313710]: 2025-12-05 10:08:28.396968419 +0000 UTC m=+0.098159433 container kill 79946442c11d03f56f5ca6e1ebdd82ba04515958384402786740c4613537c48f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1556a36f-4e50-4389-b215-7fd51744a1e6, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 5 05:08:28 localhost systemd[1]: libpod-79946442c11d03f56f5ca6e1ebdd82ba04515958384402786740c4613537c48f.scope: Deactivated successfully. 
Dec 5 05:08:28 localhost journal[228791]: ethtool ioctl error on tapa8cf50c6-dc: No such device Dec 5 05:08:28 localhost nova_compute[280228]: 2025-12-05 10:08:28.406 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:28 localhost journal[228791]: ethtool ioctl error on tapa8cf50c6-dc: No such device Dec 5 05:08:28 localhost journal[228791]: ethtool ioctl error on tapa8cf50c6-dc: No such device Dec 5 05:08:28 localhost journal[228791]: ethtool ioctl error on tapa8cf50c6-dc: No such device Dec 5 05:08:28 localhost journal[228791]: ethtool ioctl error on tapa8cf50c6-dc: No such device Dec 5 05:08:28 localhost journal[228791]: ethtool ioctl error on tapa8cf50c6-dc: No such device Dec 5 05:08:28 localhost journal[228791]: ethtool ioctl error on tapa8cf50c6-dc: No such device Dec 5 05:08:28 localhost journal[228791]: ethtool ioctl error on tapa8cf50c6-dc: No such device Dec 5 05:08:28 localhost nova_compute[280228]: 2025-12-05 10:08:28.454 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:28 localhost podman[313729]: 2025-12-05 10:08:28.461666658 +0000 UTC m=+0.052445055 container died 79946442c11d03f56f5ca6e1ebdd82ba04515958384402786740c4613537c48f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1556a36f-4e50-4389-b215-7fd51744a1e6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 5 05:08:28 localhost nova_compute[280228]: 2025-12-05 10:08:28.497 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:28 localhost podman[313729]: 2025-12-05 10:08:28.552626211 +0000 UTC m=+0.143404568 container cleanup 79946442c11d03f56f5ca6e1ebdd82ba04515958384402786740c4613537c48f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1556a36f-4e50-4389-b215-7fd51744a1e6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 05:08:28 localhost systemd[1]: libpod-conmon-79946442c11d03f56f5ca6e1ebdd82ba04515958384402786740c4613537c48f.scope: Deactivated successfully. 
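
The burst of "ethtool ioctl error on tapa8cf50c6-dc: No such device" messages is typically a harmless race: a stats poller queries the tap device before it is fully registered or after teardown has already removed it (the kernel logged it entering promiscuous mode only moments earlier). A quick way to check whether such a port still exists, assuming standard iproute2/OVS tooling and the usual br-int integration bridge:

    # Does the kernel still know the tap device? (name taken from the log)
    ip -br link show dev tapa8cf50c6-dc 2>/dev/null || echo "tapa8cf50c6-dc is gone"

    # Is it still plugged into the OVN integration bridge?
    ovs-vsctl list-ports br-int | grep '^tapa8cf50c6-dc' || true
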
Dec 5 05:08:28 localhost podman[313738]: 2025-12-05 10:08:28.580623787 +0000 UTC m=+0.159843811 container remove 79946442c11d03f56f5ca6e1ebdd82ba04515958384402786740c4613537c48f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1556a36f-4e50-4389-b215-7fd51744a1e6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 05:08:28 localhost ovn_controller[153000]: 2025-12-05T10:08:28Z|00203|binding|INFO|Releasing lport 8f89be47-4f84-45a7-9e9f-25dddaefcac5 from this chassis (sb_readonly=0) Dec 5 05:08:28 localhost kernel: device tap8f89be47-4f left promiscuous mode Dec 5 05:08:28 localhost nova_compute[280228]: 2025-12-05 10:08:28.595 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:28 localhost ovn_controller[153000]: 2025-12-05T10:08:28Z|00204|binding|INFO|Setting lport 8f89be47-4f84-45a7-9e9f-25dddaefcac5 down in Southbound Dec 5 05:08:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:28.603 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.242/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-1556a36f-4e50-4389-b215-7fd51744a1e6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1556a36f-4e50-4389-b215-7fd51744a1e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd675cefa63e14882bc0ebe68b22ac36a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a87d6b02-c0f4-46f5-b440-dbd829b2c81f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8f89be47-4f84-45a7-9e9f-25dddaefcac5) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:08:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:28.606 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 8f89be47-4f84-45a7-9e9f-25dddaefcac5 in datapath 1556a36f-4e50-4389-b215-7fd51744a1e6 unbound from our chassis#033[00m Dec 5 05:08:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:28.611 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1556a36f-4e50-4389-b215-7fd51744a1e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:08:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:28.612 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad78ac1-3d99-4c66-a1b5-581d67966166]: (4, False) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:08:28 localhost nova_compute[280228]: 2025-12-05 10:08:28.614 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:28 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:28.817 261902 INFO neutron.agent.dhcp.agent [None req-019c13a4-488c-44e2-a08a-f45dc0dfe6eb - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:28 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:28.825 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:29 localhost nova_compute[280228]: 2025-12-05 10:08:29.152 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:29 localhost nova_compute[280228]: 2025-12-05 10:08:29.189 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 51 op/s Dec 5 05:08:29 localhost systemd[1]: var-lib-containers-storage-overlay-a98da5e9431ad66f21bd67c9140372fb22820915c81d27abb4d0c60666a148dc-merged.mount: Deactivated successfully. Dec 5 05:08:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79946442c11d03f56f5ca6e1ebdd82ba04515958384402786740c4613537c48f-userdata-shm.mount: Deactivated successfully. Dec 5 05:08:29 localhost systemd[1]: run-netns-qdhcp\x2d1556a36f\x2d4e50\x2d4389\x2db215\x2d7fd51744a1e6.mount: Deactivated successfully. Dec 5 05:08:29 localhost podman[313817]: Dec 5 05:08:29 localhost podman[313817]: 2025-12-05 10:08:29.349577136 +0000 UTC m=+0.076994145 container create 7d4cfffeedcf7f0a57540f3dc0009ad722f633a100e70de7872814f2e1cf4dd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cb50e4bf-614d-47a9-b379-72762047e21a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 5 05:08:29 localhost systemd[1]: Started libpod-conmon-7d4cfffeedcf7f0a57540f3dc0009ad722f633a100e70de7872814f2e1cf4dd7.scope. Dec 5 05:08:29 localhost systemd[1]: Started libcrun container. 
Dec 5 05:08:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e124c41f0ad928cc51e60595ffa331d50f7c0f2490046b61b229239b153c749/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:08:29 localhost podman[313817]: 2025-12-05 10:08:29.317955159 +0000 UTC m=+0.045372218 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:08:29 localhost podman[313817]: 2025-12-05 10:08:29.456993612 +0000 UTC m=+0.184410651 container init 7d4cfffeedcf7f0a57540f3dc0009ad722f633a100e70de7872814f2e1cf4dd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cb50e4bf-614d-47a9-b379-72762047e21a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 5 05:08:29 localhost podman[313817]: 2025-12-05 10:08:29.465579104 +0000 UTC m=+0.192996133 container start 7d4cfffeedcf7f0a57540f3dc0009ad722f633a100e70de7872814f2e1cf4dd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cb50e4bf-614d-47a9-b379-72762047e21a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:08:29 localhost dnsmasq[313836]: started, version 2.85 cachesize 150 Dec 5 05:08:29 localhost dnsmasq[313836]: DNS service limited to local subnets Dec 5 05:08:29 localhost dnsmasq[313836]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:08:29 localhost dnsmasq[313836]: warning: no upstream servers configured Dec 5 05:08:29 localhost dnsmasq[313836]: read /var/lib/neutron/dhcp/cb50e4bf-614d-47a9-b379-72762047e21a/addn_hosts - 0 addresses Dec 5 05:08:29 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:29.542 261902 INFO neutron.agent.dhcp.agent [None req-c887473b-f099-494f-b9ff-6300185b4e9c - - - - - -] DHCP configuration for ports {'557e7e5d-18ef-44d9-9c33-85a2d7488c96'} is completed#033[00m Dec 5 05:08:29 localhost dnsmasq[313836]: exiting on receipt of SIGTERM Dec 5 05:08:29 localhost systemd[1]: libpod-7d4cfffeedcf7f0a57540f3dc0009ad722f633a100e70de7872814f2e1cf4dd7.scope: Deactivated successfully. 
Dec 5 05:08:29 localhost podman[313852]: 2025-12-05 10:08:29.683409797 +0000 UTC m=+0.048010529 container kill 7d4cfffeedcf7f0a57540f3dc0009ad722f633a100e70de7872814f2e1cf4dd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cb50e4bf-614d-47a9-b379-72762047e21a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:08:29 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:29.710 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:29 localhost podman[313866]: 2025-12-05 10:08:29.747159787 +0000 UTC m=+0.049873997 container died 7d4cfffeedcf7f0a57540f3dc0009ad722f633a100e70de7872814f2e1cf4dd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cb50e4bf-614d-47a9-b379-72762047e21a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:08:29 localhost podman[313866]: 2025-12-05 10:08:29.780877858 +0000 UTC m=+0.083592078 container cleanup 7d4cfffeedcf7f0a57540f3dc0009ad722f633a100e70de7872814f2e1cf4dd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cb50e4bf-614d-47a9-b379-72762047e21a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:08:29 localhost systemd[1]: libpod-conmon-7d4cfffeedcf7f0a57540f3dc0009ad722f633a100e70de7872814f2e1cf4dd7.scope: Deactivated successfully. 
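
The create, init, start, SIGTERM, died, cleanup, remove sequence above is one full lifecycle of a per-network DHCP side-car: the neutron-dhcp-agent launches a neutron-dnsmasq-qdhcp-<network-id> container whose dnsmasq reads its files from /var/lib/neutron/dhcp/<network-id>/ inside a qdhcp-<network-id> namespace, and tears it down again when the network has no ports left. A sketch for enumerating these by hand, using the network ID from the log (directory contents beyond the files dnsmasq logs are an assumption):

    # Per-network DHCP namespaces created by the agent.
    ip netns list | grep qdhcp

    # dnsmasq's input files for one network; the log shows
    # addn_hosts, host and opts being read.
    ls /var/lib/neutron/dhcp/cb50e4bf-614d-47a9-b379-72762047e21a/

    # The matching side-car container, if currently running.
    podman ps --filter name=neutron-dnsmasq-qdhcp- --format '{{.Names}}'
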
Dec 5 05:08:29 localhost podman[313867]: 2025-12-05 10:08:29.822167112 +0000 UTC m=+0.121210440 container remove 7d4cfffeedcf7f0a57540f3dc0009ad722f633a100e70de7872814f2e1cf4dd7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cb50e4bf-614d-47a9-b379-72762047e21a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 5 05:08:29 localhost kernel: device tapa8cf50c6-dc left promiscuous mode Dec 5 05:08:29 localhost nova_compute[280228]: 2025-12-05 10:08:29.832 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:29 localhost nova_compute[280228]: 2025-12-05 10:08:29.842 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:29 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:29.857 261902 INFO neutron.agent.dhcp.agent [None req-1c136a36-6143-4fca-9246-0033dbb32bfa - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:29 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:29.858 261902 INFO neutron.agent.dhcp.agent [None req-1c136a36-6143-4fca-9246-0033dbb32bfa - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:30 localhost systemd[1]: var-lib-containers-storage-overlay-3e124c41f0ad928cc51e60595ffa331d50f7c0f2490046b61b229239b153c749-merged.mount: Deactivated successfully. Dec 5 05:08:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d4cfffeedcf7f0a57540f3dc0009ad722f633a100e70de7872814f2e1cf4dd7-userdata-shm.mount: Deactivated successfully. Dec 5 05:08:30 localhost systemd[1]: run-netns-qdhcp\x2dcb50e4bf\x2d614d\x2d47a9\x2db379\x2d72762047e21a.mount: Deactivated successfully. Dec 5 05:08:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:08:30 localhost ovn_controller[153000]: 2025-12-05T10:08:30Z|00205|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:08:30 localhost nova_compute[280228]: 2025-12-05 10:08:30.702 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 2.7 KiB/s wr, 64 op/s Dec 5 05:08:32 localhost nova_compute[280228]: 2025-12-05 10:08:32.117 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 5 05:08:32 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/643600439' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 5 05:08:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 5 05:08:32 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/643600439' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 5 05:08:32 localhost nova_compute[280228]: 2025-12-05 10:08:32.938 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 2.5 KiB/s wr, 52 op/s Dec 5 05:08:34 localhost nova_compute[280228]: 2025-12-05 10:08:34.234 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v247: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 2.5 KiB/s wr, 52 op/s Dec 5 05:08:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:08:37 localhost nova_compute[280228]: 2025-12-05 10:08:37.119 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 2.4 KiB/s wr, 59 op/s Dec 5 05:08:38 localhost ovn_controller[153000]: 2025-12-05T10:08:38Z|00206|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:08:38 localhost nova_compute[280228]: 2025-12-05 10:08:38.439 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v249: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s Dec 5 05:08:39 localhost nova_compute[280228]: 2025-12-05 10:08:39.237 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:40 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:40.230 2 INFO neutron.agent.securitygroups_rpc [None req-d408bbe4-2cf4-4918-b5cf-de5f65a1893c 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']#033[00m Dec 5 05:08:40 localhost ovn_controller[153000]: 2025-12-05T10:08:40Z|00207|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:08:40 localhost nova_compute[280228]: 2025-12-05 10:08:40.581 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:08:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s Dec 5 05:08:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:08:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:08:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:08:42 localhost nova_compute[280228]: 2025-12-05 10:08:42.122 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:42 localhost podman[313894]: 2025-12-05 10:08:42.25298748 +0000 UTC m=+0.132001268 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:08:42 localhost podman[313894]: 2025-12-05 10:08:42.262540312 +0000 UTC m=+0.141554110 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 05:08:42 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
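
podman_exporter's config_data above describes a Prometheus exporter published on host port 9882 that talks to podman over the socket named in CONTAINER_HOST. Two quick liveness checks that follow directly from that config, nothing beyond the log assumed:

    # The exporter should answer on the host network (port from config_data).
    curl -sf http://localhost:9882/metrics | head -n 5

    # It scrapes the podman API over this socket.
    test -S /run/podman/podman.sock && echo "podman socket present"
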
Dec 5 05:08:42 localhost podman[313896]: 2025-12-05 10:08:42.305370813 +0000 UTC m=+0.180802152 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 5 05:08:42 localhost podman[313896]: 2025-12-05 10:08:42.319714051 +0000 UTC m=+0.195145460 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:08:42 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:08:42 localhost podman[313895]: 2025-12-05 10:08:42.221031093 +0000 UTC m=+0.100843925 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Dec 5 05:08:42 localhost podman[313895]: 2025-12-05 10:08:42.40596296 +0000 UTC m=+0.285775842 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 5 05:08:42 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:08:43 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:43.118 2 INFO neutron.agent.securitygroups_rpc [None req-c6e6df27-e7a4-4cda-a4a7-63c4cb2afad8 a052c73754704caaa399378c7e50192a f8da57e2736240a0ac7055e85adea6da - - default default] Security group member updated ['4a643f0b-9f81-4463-adc9-4f8f421f9506']#033[00m Dec 5 05:08:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 341 B/s wr, 14 op/s Dec 5 05:08:43 localhost ovn_controller[153000]: 2025-12-05T10:08:43Z|00208|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:08:43 localhost nova_compute[280228]: 2025-12-05 10:08:43.349 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:44 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:44.017 2 INFO neutron.agent.securitygroups_rpc [None req-7022ece7-3195-4730-b179-1dbc71a25f59 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']#033[00m Dec 5 05:08:44 localhost nova_compute[280228]: 2025-12-05 10:08:44.241 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:08:45 Dec 5 05:08:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:08:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:08:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['vms', 'manila_metadata', 'images', 'backups', '.mgr', 'manila_data', 'volumes'] Dec 5 05:08:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:08:45 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:45.138 2 INFO neutron.agent.securitygroups_rpc [None req-cae0c3b7-4067-4aed-9cef-3c85cca03ef5 a052c73754704caaa399378c7e50192a f8da57e2736240a0ac7055e85adea6da - - default default] Security group member updated ['4a643f0b-9f81-4463-adc9-4f8f421f9506']#033[00m Dec 5 05:08:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:08:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:08:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:08:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:08:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
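
The mon audit entries a few lines up show client.openstack polling capacity with "df" and "osd pool get-quota" mon commands, while ceph-mgr's pgmap DBG lines summarize placement-group state. The same questions can be asked from any ceph client; a sketch with the standard CLI (the pool name volumes is taken from the log):

    # The two queries visible in the mon audit log.
    ceph df --format json-pretty | head -n 20
    ceph osd pool get-quota volumes

    # A one-line counterpart to the pgmap "177 pgs: 177 active+clean" lines.
    ceph pg stat
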
Dec 5 05:08:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:08:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32) Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:08:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Dec 5 05:08:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:08:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:08:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:08:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:08:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:08:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:08:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:08:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:08:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:08:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, 
start_after= Dec 5 05:08:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:08:46 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:46.301 2 INFO neutron.agent.securitygroups_rpc [None req-237dfc96-41d1-4f6c-9c3e-9577e60244ef 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']#033[00m Dec 5 05:08:46 localhost dnsmasq[313325]: read /var/lib/neutron/dhcp/6273e462-83ab-41be-b934-cf2bff8ed29f/addn_hosts - 0 addresses Dec 5 05:08:46 localhost podman[313966]: 2025-12-05 10:08:46.912389377 +0000 UTC m=+0.063880305 container kill 3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6273e462-83ab-41be-b934-cf2bff8ed29f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 5 05:08:46 localhost dnsmasq-dhcp[313325]: read /var/lib/neutron/dhcp/6273e462-83ab-41be-b934-cf2bff8ed29f/host Dec 5 05:08:46 localhost dnsmasq-dhcp[313325]: read /var/lib/neutron/dhcp/6273e462-83ab-41be-b934-cf2bff8ed29f/opts Dec 5 05:08:47 localhost nova_compute[280228]: 2025-12-05 10:08:47.155 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:47 localhost kernel: device tape73f56f1-81 left promiscuous mode Dec 5 05:08:47 localhost nova_compute[280228]: 2025-12-05 10:08:47.170 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:47 localhost ovn_controller[153000]: 2025-12-05T10:08:47Z|00209|binding|INFO|Releasing lport e73f56f1-819e-4bd8-b0f0-c76d6f6671cb from this chassis (sb_readonly=0) Dec 5 05:08:47 localhost ovn_controller[153000]: 2025-12-05T10:08:47Z|00210|binding|INFO|Setting lport e73f56f1-819e-4bd8-b0f0-c76d6f6671cb down in Southbound Dec 5 05:08:47 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:47.184 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-6273e462-83ab-41be-b934-cf2bff8ed29f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6273e462-83ab-41be-b934-cf2bff8ed29f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e5c1d3be695d4b14816e6ea97315cd6f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], 
encap=[], mirror_rules=[], datapath=e58439ee-eccd-47f0-8ace-17ae10dd8289, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e73f56f1-819e-4bd8-b0f0-c76d6f6671cb) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:08:47 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:47.185 158820 INFO neutron.agent.ovn.metadata.agent [-] Port e73f56f1-819e-4bd8-b0f0-c76d6f6671cb in datapath 6273e462-83ab-41be-b934-cf2bff8ed29f unbound from our chassis#033[00m Dec 5 05:08:47 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:47.188 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6273e462-83ab-41be-b934-cf2bff8ed29f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:08:47 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:47.189 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[9e074119-001c-4156-ba5b-e6e4ae34cf10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:08:47 localhost nova_compute[280228]: 2025-12-05 10:08:47.193 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s Dec 5 05:08:47 localhost dnsmasq[313559]: exiting on receipt of SIGTERM Dec 5 05:08:47 localhost podman[314006]: 2025-12-05 10:08:47.760927031 +0000 UTC m=+0.049923488 container kill 28594d5a163d688695de865c11537f2bc4e5012a774fc54bbc6087e3fb832944 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-781f4cf1-2e11-488d-ab63-fac6b9c67b6c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:08:47 localhost systemd[1]: tmp-crun.XNSEj2.mount: Deactivated successfully. Dec 5 05:08:47 localhost systemd[1]: libpod-28594d5a163d688695de865c11537f2bc4e5012a774fc54bbc6087e3fb832944.scope: Deactivated successfully. 
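
ovn_controller's "Releasing lport ... / Setting lport ... down in Southbound" pairs, and the PortBindingUpdatedEvent the metadata agent matches on, are two views of the same southbound row: the Port_Binding's chassis column being cleared and up flipping to false. To inspect that state directly, assuming ovn-sbctl can reach the southbound DB from this node (the lport UUID comes from the log):

    # The fields the log lines are reporting on: chassis cleared, up -> false.
    ovn-sbctl --columns=logical_port,chassis,up find Port_Binding \
        logical_port=e73f56f1-819e-4bd8-b0f0-c76d6f6671cb

    # Chassis-level summary of what this node still claims.
    ovn-sbctl show | grep -A 3 np0005546419.localdomain
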
Dec 5 05:08:47 localhost podman[314020]: 2025-12-05 10:08:47.84298179 +0000 UTC m=+0.061803241 container died 28594d5a163d688695de865c11537f2bc4e5012a774fc54bbc6087e3fb832944 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-781f4cf1-2e11-488d-ab63-fac6b9c67b6c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:08:47 localhost podman[314020]: 2025-12-05 10:08:47.875244068 +0000 UTC m=+0.094065489 container cleanup 28594d5a163d688695de865c11537f2bc4e5012a774fc54bbc6087e3fb832944 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-781f4cf1-2e11-488d-ab63-fac6b9c67b6c, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 5 05:08:47 localhost systemd[1]: libpod-conmon-28594d5a163d688695de865c11537f2bc4e5012a774fc54bbc6087e3fb832944.scope: Deactivated successfully. Dec 5 05:08:47 localhost systemd[1]: var-lib-containers-storage-overlay-5bcdf80542fcf1c81fd0e50b65070739eaeaca537ded62682a3de03cba1906e3-merged.mount: Deactivated successfully. Dec 5 05:08:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28594d5a163d688695de865c11537f2bc4e5012a774fc54bbc6087e3fb832944-userdata-shm.mount: Deactivated successfully. 
Dec 5 05:08:47 localhost podman[314021]: 2025-12-05 10:08:47.929148146 +0000 UTC m=+0.142132607 container remove 28594d5a163d688695de865c11537f2bc4e5012a774fc54bbc6087e3fb832944 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-781f4cf1-2e11-488d-ab63-fac6b9c67b6c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:08:47 localhost ovn_controller[153000]: 2025-12-05T10:08:47Z|00211|binding|INFO|Releasing lport 068e8d44-efdb-4aca-87ca-3b5e6009fd59 from this chassis (sb_readonly=0) Dec 5 05:08:47 localhost kernel: device tap068e8d44-ef left promiscuous mode Dec 5 05:08:47 localhost nova_compute[280228]: 2025-12-05 10:08:47.942 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:47 localhost ovn_controller[153000]: 2025-12-05T10:08:47Z|00212|binding|INFO|Setting lport 068e8d44-efdb-4aca-87ca-3b5e6009fd59 down in Southbound Dec 5 05:08:47 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:47.951 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-781f4cf1-2e11-488d-ab63-fac6b9c67b6c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-781f4cf1-2e11-488d-ab63-fac6b9c67b6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8da57e2736240a0ac7055e85adea6da', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7632c897-a912-4427-a8fb-325ad3ea4a25, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=068e8d44-efdb-4aca-87ca-3b5e6009fd59) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:08:47 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:47.953 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 068e8d44-efdb-4aca-87ca-3b5e6009fd59 in datapath 781f4cf1-2e11-488d-ab63-fac6b9c67b6c unbound from our chassis#033[00m Dec 5 05:08:47 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:47.956 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 781f4cf1-2e11-488d-ab63-fac6b9c67b6c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:08:47 localhost ovn_metadata_agent[158815]: 2025-12-05 10:08:47.957 158926 DEBUG oslo.privsep.daemon [-] privsep: 
reply[d4d5bfbe-ce58-4a65-9b6d-1175717efa56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:08:47 localhost nova_compute[280228]: 2025-12-05 10:08:47.962 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:48 localhost systemd[1]: run-netns-qdhcp\x2d781f4cf1\x2d2e11\x2d488d\x2dab63\x2dfac6b9c67b6c.mount: Deactivated successfully. Dec 5 05:08:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:48.249 261902 INFO neutron.agent.dhcp.agent [None req-7d151300-4c19-4e80-a331-93fcf1306c8d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:48.289 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:49 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:49.071 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:08:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:08:49 localhost podman[314049]: 2025-12-05 10:08:49.202944789 +0000 UTC m=+0.083409293 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:08:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v254: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:08:49 localhost nova_compute[280228]: 2025-12-05 10:08:49.244 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:49 localhost podman[314049]: 2025-12-05 10:08:49.248687047 +0000 UTC m=+0.129151571 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 5 05:08:49 localhost podman[314050]: 2025-12-05 10:08:49.259174148 +0000 UTC m=+0.135534116 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:08:49 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
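
The `Started /usr/bin/podman healthcheck run <id>` / `container health_status ... health_status=healthy` / `<id>.service: Deactivated successfully` triplets above are podman healthchecks driven by transient systemd units; the state they record can be read back with `podman inspect`. A minimal sketch of the same round trip, assuming a local podman and the `.State.Health.Status` template path used by current podman releases (older releases exposed it as `.State.Healthcheck.Status`):

```python
import subprocess

def health_status(container_id: str) -> str:
    """Run a container's healthcheck, then return the recorded status."""
    # Equivalent of the logged "/usr/bin/podman healthcheck run <id>";
    # exit status 0 means healthy, 1 unhealthy, so don't raise on failure.
    subprocess.run(["podman", "healthcheck", "run", container_id], check=False)
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", container_id],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()  # "healthy", "unhealthy" or "starting"

# Container ID of ovn_controller, taken from the health_status entry above.
print(health_status("6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857"))
```

A zero exit from `healthcheck run` is what lets the transient unit deactivate "successfully" as logged.
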
Dec 5 05:08:49 localhost podman[314050]: 2025-12-05 10:08:49.273683022 +0000 UTC m=+0.150043000 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:08:49 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:08:49 localhost ovn_controller[153000]: 2025-12-05T10:08:49Z|00213|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:08:49 localhost nova_compute[280228]: 2025-12-05 10:08:49.740 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:49 localhost podman[239519]: time="2025-12-05T10:08:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:08:49 localhost podman[239519]: @ - - [05/Dec/2025:10:08:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157935 "" "Go-http-client/1.1" Dec 5 05:08:49 localhost podman[239519]: @ - - [05/Dec/2025:10:08:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19726 "" "Go-http-client/1.1" Dec 5 05:08:50 localhost neutron_sriov_agent[254996]: 2025-12-05 10:08:50.097 2 INFO neutron.agent.securitygroups_rpc [None req-c03c3fa4-9507-44bb-bcaa-801354fb2705 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']#033[00m Dec 5 05:08:50 localhost ovn_controller[153000]: 2025-12-05T10:08:50Z|00214|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:08:50 localhost nova_compute[280228]: 2025-12-05 10:08:50.321 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:08:51 localhost ceph-mgr[286454]: 
log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:08:52 localhost nova_compute[280228]: 2025-12-05 10:08:52.159 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:52 localhost dnsmasq[313325]: exiting on receipt of SIGTERM Dec 5 05:08:52 localhost podman[314114]: 2025-12-05 10:08:52.803934281 +0000 UTC m=+0.060484300 container kill 3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6273e462-83ab-41be-b934-cf2bff8ed29f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 5 05:08:52 localhost systemd[1]: libpod-3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117.scope: Deactivated successfully. Dec 5 05:08:52 localhost podman[314127]: 2025-12-05 10:08:52.875066797 +0000 UTC m=+0.053754095 container died 3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6273e462-83ab-41be-b934-cf2bff8ed29f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 05:08:52 localhost podman[314127]: 2025-12-05 10:08:52.910016246 +0000 UTC m=+0.088703514 container cleanup 3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6273e462-83ab-41be-b934-cf2bff8ed29f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:08:52 localhost systemd[1]: libpod-conmon-3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117.scope: Deactivated successfully. 
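
The `GET /v4.9.3/libpod/containers/json?...` access-log entries from podman[239519] a few entries up are clients of the podman system service speaking HTTP over its API socket. A stdlib-only sketch of the same request; the socket path `/run/podman/podman.sock` is the rootful default and an assumption here (rootless podman listens under `$XDG_RUNTIME_DIR/podman/podman.sock`), while the endpoint and query string are copied verbatim from the log:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that dials an AF_UNIX socket instead of TCP."""

    def __init__(self, path):
        super().__init__("localhost")  # host value is unused over AF_UNIX
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")  # rootful default (assumption)
# Endpoint and parameters copied from the access-log line above.
conn.request("GET", "/v4.9.3/libpod/containers/json"
                    "?all=true&external=false&last=0&namespace=false&size=false&sync=false")
resp = conn.getresponse()
containers = json.loads(resp.read())
print(resp.status, len(containers), "containers")
```
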
Dec 5 05:08:52 localhost podman[314128]: 2025-12-05 10:08:52.965385 +0000 UTC m=+0.137594470 container remove 3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6273e462-83ab-41be-b934-cf2bff8ed29f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 5 05:08:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:08:53 localhost systemd[1]: var-lib-containers-storage-overlay-ba37949dcf31e57446022d7b167f1ef6fa1445f99fa854fd0b815c9220bd60a0-merged.mount: Deactivated successfully. Dec 5 05:08:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a2d69b15f6414361b63b28bd071ca7a0d8bb0b46df82955518044c2ac2db117-userdata-shm.mount: Deactivated successfully. Dec 5 05:08:53 localhost systemd[1]: run-netns-qdhcp\x2d6273e462\x2d83ab\x2d41be\x2db934\x2dcf2bff8ed29f.mount: Deactivated successfully. Dec 5 05:08:53 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:53.880 261902 INFO neutron.agent.dhcp.agent [None req-9f2cfc2b-b79a-4aba-8ffc-68f26f8e89a1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:53 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:53.881 261902 INFO neutron.agent.dhcp.agent [None req-9f2cfc2b-b79a-4aba-8ffc-68f26f8e89a1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:54 localhost nova_compute[280228]: 2025-12-05 10:08:54.245 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:54 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:08:54.701 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:08:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:08:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:08:57 localhost nova_compute[280228]: 2025-12-05 10:08:57.162 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:08:57 localhost openstack_network_exporter[241668]: ERROR 10:08:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:08:57 localhost openstack_network_exporter[241668]: ERROR 10:08:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:08:57 localhost openstack_network_exporter[241668]: ERROR 10:08:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:08:57 localhost openstack_network_exporter[241668]: ERROR 10:08:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:08:57 localhost openstack_network_exporter[241668]: Dec 5 05:08:57 localhost 
openstack_network_exporter[241668]: ERROR 10:08:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:08:57 localhost openstack_network_exporter[241668]: Dec 5 05:08:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:08:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:08:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:08:58 localhost systemd[1]: tmp-crun.8oSCGa.mount: Deactivated successfully. Dec 5 05:08:58 localhost podman[314159]: 2025-12-05 10:08:58.208485199 +0000 UTC m=+0.088734555 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 5 05:08:58 localhost podman[314158]: 2025-12-05 10:08:58.253899448 +0000 UTC m=+0.139402015 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 5 05:08:58 localhost podman[314159]: 2025-12-05 10:08:58.278766199 +0000 UTC m=+0.159015565 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, 
io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 5 05:08:58 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
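
The repeated `openstack_network_exporter` ERRORs at 05:08:57 ("no control socket files found for the ovs db server" / "ovn-northd") mean the exporter's appctl-style calls cannot locate the `*.ctl` control sockets; ovn-northd does not run on a compute node, so that pair of errors is expected there. A quick host-side check, a sketch assuming the conventional runtime directories that the exporter's `config_data` volumes map in:

```python
import glob

# Conventional control-socket locations on the host; the exporter's config_data
# maps /var/run/openvswitch -> /run/openvswitch and /var/lib/openvswitch/ovn -> /run/ovn.
PATTERNS = {
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    "ovn-northd":   "/var/lib/openvswitch/ovn/ovn-northd.*.ctl",
}

for daemon, pattern in PATTERNS.items():
    hits = glob.glob(pattern)
    # An empty result here is exactly the exporter's
    # "no control socket files found" condition.
    print(f"{daemon}: {hits if hits else 'no control socket found'}")
```
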
Dec 5 05:08:58 localhost podman[314158]: 2025-12-05 10:08:58.343493168 +0000 UTC m=+0.228995745 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Dec 5 05:08:58 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. 
Dec 5 05:08:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:08:59 localhost nova_compute[280228]: 2025-12-05 10:08:59.283 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:00 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:00.606 2 INFO neutron.agent.securitygroups_rpc [None req-bbd7d412-ea7d-41f8-b523-09fdeefd5b0e 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']#033[00m Dec 5 05:09:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:09:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:09:01 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:01.310 261902 INFO neutron.agent.linux.ip_lib [None req-bf74772c-975e-4c0b-8ad5-3a2a4055401c - - - - - -] Device tap8156c2e5-a2 cannot be used as it has no MAC address#033[00m Dec 5 05:09:01 localhost nova_compute[280228]: 2025-12-05 10:09:01.330 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:01 localhost kernel: device tap8156c2e5-a2 entered promiscuous mode Dec 5 05:09:01 localhost ovn_controller[153000]: 2025-12-05T10:09:01Z|00215|binding|INFO|Claiming lport 8156c2e5-a2ed-4d68-bed8-b38c385e9dfd for this chassis. Dec 5 05:09:01 localhost ovn_controller[153000]: 2025-12-05T10:09:01Z|00216|binding|INFO|8156c2e5-a2ed-4d68-bed8-b38c385e9dfd: Claiming unknown Dec 5 05:09:01 localhost nova_compute[280228]: 2025-12-05 10:09:01.340 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:01 localhost NetworkManager[5960]: [1764929341.3429] manager: (tap8156c2e5-a2): new Generic device (/org/freedesktop/NetworkManager/Devices/38) Dec 5 05:09:01 localhost systemd-udevd[314205]: Network interface NamePolicy= disabled on kernel command line. 
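
"Device tap8156c2e5-a2 cannot be used as it has no MAC address" is the DHCP agent's ip_lib guard: a just-created tap device may briefly expose no usable MAC until it is fully wired up (the `ethtool ioctl error ... No such device` lines that follow are the same race seen from the stats collector). The check boils down to reading sysfs; a sketch with a hypothetical helper name:

```python
from pathlib import Path
from typing import Optional

def device_mac(dev: str) -> Optional[str]:
    """Return a net device's MAC, or None if it has no usable one yet.

    Roughly the condition neutron.agent.linux.ip_lib logs as
    "Device ... cannot be used as it has no MAC address".
    """
    try:
        mac = Path(f"/sys/class/net/{dev}/address").read_text().strip()
    except OSError:
        return None  # device already gone (cf. the ethtool ioctl errors)
    return mac if mac and mac != "00:00:00:00:00:00" else None

print(device_mac("tap8156c2e5-a2"))
```
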
Dec 5 05:09:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:01.349 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-d45c9473-9c2c-4ccf-b293-35b702b04534', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d45c9473-9c2c-4ccf-b293-35b702b04534', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9911350e2d5148098ee9d947cc452035', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be692bab-9874-45ad-963d-04d5592ea36c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8156c2e5-a2ed-4d68-bed8-b38c385e9dfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:09:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:01.350 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 8156c2e5-a2ed-4d68-bed8-b38c385e9dfd in datapath d45c9473-9c2c-4ccf-b293-35b702b04534 bound to our chassis#033[00m Dec 5 05:09:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:01.353 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port a5672dd9-6db4-4e68-9d73-8f9549094d33 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 5 05:09:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:01.354 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d45c9473-9c2c-4ccf-b293-35b702b04534, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:09:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:01.356 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[7cef4dd5-0655-43d9-88da-306e55eb1945]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:09:01 localhost journal[228791]: ethtool ioctl error on tap8156c2e5-a2: No such device Dec 5 05:09:01 localhost ovn_controller[153000]: 2025-12-05T10:09:01Z|00217|binding|INFO|Setting lport 8156c2e5-a2ed-4d68-bed8-b38c385e9dfd ovn-installed in OVS Dec 5 05:09:01 localhost ovn_controller[153000]: 2025-12-05T10:09:01Z|00218|binding|INFO|Setting lport 8156c2e5-a2ed-4d68-bed8-b38c385e9dfd up in Southbound Dec 5 05:09:01 localhost nova_compute[280228]: 2025-12-05 10:09:01.383 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:01 localhost journal[228791]: ethtool ioctl error on tap8156c2e5-a2: No such device Dec 5 05:09:01 localhost journal[228791]: ethtool ioctl error on tap8156c2e5-a2: No such device Dec 5 05:09:01 localhost journal[228791]: ethtool ioctl error on tap8156c2e5-a2: No such device Dec 5 05:09:01 
localhost journal[228791]: ethtool ioctl error on tap8156c2e5-a2: No such device Dec 5 05:09:01 localhost journal[228791]: ethtool ioctl error on tap8156c2e5-a2: No such device Dec 5 05:09:01 localhost journal[228791]: ethtool ioctl error on tap8156c2e5-a2: No such device Dec 5 05:09:01 localhost journal[228791]: ethtool ioctl error on tap8156c2e5-a2: No such device Dec 5 05:09:01 localhost nova_compute[280228]: 2025-12-05 10:09:01.419 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:01 localhost nova_compute[280228]: 2025-12-05 10:09:01.443 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:02 localhost nova_compute[280228]: 2025-12-05 10:09:02.200 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:02 localhost podman[314344]: Dec 5 05:09:02 localhost podman[314344]: 2025-12-05 10:09:02.297808607 +0000 UTC m=+0.077670136 container create f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d45c9473-9c2c-4ccf-b293-35b702b04534, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 5 05:09:02 localhost podman[314344]: 2025-12-05 10:09:02.25509379 +0000 UTC m=+0.034955349 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:09:02 localhost systemd[1]: Started libpod-conmon-f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76.scope. Dec 5 05:09:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:09:02 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:09:02 localhost systemd[1]: Started libcrun container. 
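
The `Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', ...), priority=20` debug entries show ovsdbapp dispatching OVSDB notifications to registered row events, which is how the metadata agent learns a port was bound to or unbound from its chassis. A minimal sketch of that pattern against ovsdbapp's public event API; the connected southbound `idl` is assumed to exist, and the class body is illustrative rather than neutron's actual implementation:

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    """Sketch: react to Port_Binding rows whose 'up' state changed."""

    def __init__(self):
        # The same constructor data the log repr prints:
        # events=('update',), table='Port_Binding', conditions=None.
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
        self.event_name = "PortBindingUpdatedEvent"

    def match_fn(self, event, row, old=None):
        # Only fire when 'up' actually flipped, as in the matched row above
        # (old=Port_Binding(up=[True], ...) vs the new row's up=[False]).
        return hasattr(old, "up") and old.up != row.up

    def run(self, event, row, old):
        print(f"lport {row.logical_port} up={row.up} on datapath {row.datapath}")

# Registration against an already-connected southbound idl (assumed):
# sb_idl.notify_handler.watch_event(PortBindingUpdatedEvent())
```
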
Dec 5 05:09:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:09:02 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:09:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:09:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a90e00473d50109359ec191b6f08683b39432de4078efebb1635b0196576783b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:09:02 localhost podman[314344]: 2025-12-05 10:09:02.394192895 +0000 UTC m=+0.174054424 container init f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d45c9473-9c2c-4ccf-b293-35b702b04534, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 5 05:09:02 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:09:02 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 936f3484-99e9-4daf-b3e8-b17dc9d730a4 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:09:02 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 936f3484-99e9-4daf-b3e8-b17dc9d730a4 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:09:02 localhost ceph-mgr[286454]: [progress INFO root] Completed event 936f3484-99e9-4daf-b3e8-b17dc9d730a4 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:09:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:09:02 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:09:02 localhost podman[314344]: 2025-12-05 10:09:02.407971367 +0000 UTC m=+0.187832886 container start f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d45c9473-9c2c-4ccf-b293-35b702b04534, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 5 05:09:02 localhost dnsmasq[314362]: started, version 2.85 cachesize 150 Dec 5 05:09:02 localhost dnsmasq[314362]: DNS service limited to local subnets Dec 5 05:09:02 localhost dnsmasq[314362]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:09:02 localhost dnsmasq[314362]: 
warning: no upstream servers configured Dec 5 05:09:02 localhost dnsmasq-dhcp[314362]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 5 05:09:02 localhost dnsmasq[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/addn_hosts - 0 addresses Dec 5 05:09:02 localhost dnsmasq-dhcp[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/host Dec 5 05:09:02 localhost dnsmasq-dhcp[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/opts Dec 5 05:09:02 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:02.614 261902 INFO neutron.agent.dhcp.agent [None req-7d3a35fc-b40d-49e4-a4d6-cc86e16514d9 - - - - - -] DHCP configuration for ports {'42eeca5a-1fd6-424e-88a8-bfe6f0b80c79'} is completed#033[00m Dec 5 05:09:02 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:09:02 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:09:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:09:03 localhost systemd[1]: tmp-crun.MfFTMU.mount: Deactivated successfully. Dec 5 05:09:03 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:03.699 2 INFO neutron.agent.securitygroups_rpc [None req-ffa88cd4-0de4-4dfc-9628-3fa8bcfe8ab2 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:09:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:03.915 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:09:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:03.916 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:09:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:03.917 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:09:04 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:04.082 2 INFO neutron.agent.securitygroups_rpc [None req-5cd34cc1-c456-444f-81be-b85c66f6c284 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']#033[00m Dec 5 05:09:04 localhost nova_compute[280228]: 2025-12-05 10:09:04.286 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:04 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:04.689 2 INFO neutron.agent.securitygroups_rpc [None req-282fa99c-63a9-4315-814e-64b88518f264 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:09:04 localhost 
neutron_sriov_agent[254996]: 2025-12-05 10:09:04.733 2 INFO neutron.agent.securitygroups_rpc [None req-5cd34cc1-c456-444f-81be-b85c66f6c284 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']#033[00m Dec 5 05:09:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:09:05 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:09:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:09:05 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:09:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:09:05 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:05.787 2 INFO neutron.agent.securitygroups_rpc [None req-0567dbfd-40f6-475b-b2e7-71873c340d0d 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']#033[00m Dec 5 05:09:06 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:09:06 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:06.847 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:06Z, description=, device_id=17d375d6-8b85-43f2-b60a-14b77ff1b02e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=edc99e3b-cfc5-40aa-ab6b-fcacff240a1f, ip_allocation=immediate, mac_address=fa:16:3e:45:76:79, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:08:54Z, description=, dns_domain=, id=d45c9473-9c2c-4ccf-b293-35b702b04534, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-460751711, port_security_enabled=True, project_id=9911350e2d5148098ee9d947cc452035, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30190, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1623, status=ACTIVE, subnets=['493e06e1-4d0e-42f1-a211-5f8b761d79c9'], tags=[], tenant_id=9911350e2d5148098ee9d947cc452035, updated_at=2025-12-05T10:08:58Z, vlan_transparent=None, network_id=d45c9473-9c2c-4ccf-b293-35b702b04534, port_security_enabled=False, project_id=9911350e2d5148098ee9d947cc452035, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1691, status=DOWN, tags=[], tenant_id=9911350e2d5148098ee9d947cc452035, updated_at=2025-12-05T10:09:06Z on network d45c9473-9c2c-4ccf-b293-35b702b04534#033[00m Dec 5 05:09:07 localhost podman[314399]: 2025-12-05 10:09:07.105307455 +0000 UTC m=+0.063419051 container kill f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d45c9473-9c2c-4ccf-b293-35b702b04534, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 5 05:09:07 localhost dnsmasq[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/addn_hosts - 1 addresses Dec 5 05:09:07 localhost dnsmasq-dhcp[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/host Dec 5 05:09:07 localhost dnsmasq-dhcp[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/opts Dec 5 05:09:07 localhost nova_compute[280228]: 2025-12-05 10:09:07.203 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:09:07 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:07.402 261902 INFO neutron.agent.dhcp.agent [None req-c0577c2b-ed54-480a-8bb5-c25cf127b24f - - - - - -] DHCP configuration for ports {'edc99e3b-cfc5-40aa-ab6b-fcacff240a1f'} is completed#033[00m Dec 5 05:09:07 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:07.790 2 INFO neutron.agent.securitygroups_rpc [None req-0b1ad4f5-35a0-4b3a-82f6-351d119f03fa 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']#033[00m Dec 5 05:09:08 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:08.170 2 INFO neutron.agent.securitygroups_rpc [None req-602f23e4-2e9f-45fa-b722-3e4dfab19d2f 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:09:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e130 do_prune osdmap full prune enabled Dec 5 05:09:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e131 e131: 6 total, 6 up, 6 in Dec 5 05:09:08 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e131: 6 total, 6 up, 6 in Dec 5 05:09:08 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:08.447 2 INFO neutron.agent.securitygroups_rpc [None req-2d23d8bb-26bf-4cf0-b76a-8223dd2d3e9e b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['4a9e1e3d-9deb-4500-8cd3-c46454c40952']#033[00m Dec 5 05:09:09 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:09.190 2 INFO neutron.agent.securitygroups_rpc [None req-b4988939-0efe-4cdb-87b1-b975b62ce81e b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['4a9e1e3d-9deb-4500-8cd3-c46454c40952']#033[00m Dec 5 05:09:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Dec 5 05:09:09 localhost nova_compute[280228]: 2025-12-05 10:09:09.289 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:10 localhost ceph-mon[292820]: 
mon.np0005546419@0(leader).osd e131 do_prune osdmap full prune enabled Dec 5 05:09:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e132 e132: 6 total, 6 up, 6 in Dec 5 05:09:10 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e132: 6 total, 6 up, 6 in Dec 5 05:09:10 localhost nova_compute[280228]: 2025-12-05 10:09:10.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:09:10 localhost nova_compute[280228]: 2025-12-05 10:09:10.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 5 05:09:10 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:10.565 2 INFO neutron.agent.securitygroups_rpc [None req-f720baac-31b8-404b-9e10-284a5cc83368 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:09:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:09:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 2.6 KiB/s wr, 43 op/s Dec 5 05:09:11 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:11.399 2 INFO neutron.agent.securitygroups_rpc [None req-07455aeb-b972-44df-83ba-a97d019f246c b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']#033[00m Dec 5 05:09:11 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:11.424 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:06Z, description=, device_id=17d375d6-8b85-43f2-b60a-14b77ff1b02e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=edc99e3b-cfc5-40aa-ab6b-fcacff240a1f, ip_allocation=immediate, mac_address=fa:16:3e:45:76:79, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:08:54Z, description=, dns_domain=, id=d45c9473-9c2c-4ccf-b293-35b702b04534, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-460751711, port_security_enabled=True, project_id=9911350e2d5148098ee9d947cc452035, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30190, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1623, status=ACTIVE, subnets=['493e06e1-4d0e-42f1-a211-5f8b761d79c9'], tags=[], tenant_id=9911350e2d5148098ee9d947cc452035, updated_at=2025-12-05T10:08:58Z, vlan_transparent=None, network_id=d45c9473-9c2c-4ccf-b293-35b702b04534, port_security_enabled=False, project_id=9911350e2d5148098ee9d947cc452035, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, 
revision_number=1, security_groups=[], standard_attr_id=1691, status=DOWN, tags=[], tenant_id=9911350e2d5148098ee9d947cc452035, updated_at=2025-12-05T10:09:06Z on network d45c9473-9c2c-4ccf-b293-35b702b04534#033[00m Dec 5 05:09:11 localhost dnsmasq[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/addn_hosts - 1 addresses Dec 5 05:09:11 localhost dnsmasq-dhcp[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/host Dec 5 05:09:11 localhost podman[314438]: 2025-12-05 10:09:11.67218583 +0000 UTC m=+0.071453147 container kill f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d45c9473-9c2c-4ccf-b293-35b702b04534, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:09:11 localhost dnsmasq-dhcp[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/opts Dec 5 05:09:11 localhost systemd[1]: tmp-crun.R4MFP6.mount: Deactivated successfully. Dec 5 05:09:11 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:11.745 2 INFO neutron.agent.securitygroups_rpc [None req-70979621-f888-4de2-82c4-1edb603fa26e b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']#033[00m Dec 5 05:09:11 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:11.933 261902 INFO neutron.agent.linux.ip_lib [None req-b282fb2f-f8b1-4bcd-becc-bb5f07d619dc - - - - - -] Device tap61a9f09b-e0 cannot be used as it has no MAC address#033[00m Dec 5 05:09:11 localhost nova_compute[280228]: 2025-12-05 10:09:11.965 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:11 localhost kernel: device tap61a9f09b-e0 entered promiscuous mode Dec 5 05:09:11 localhost NetworkManager[5960]: [1764929351.9743] manager: (tap61a9f09b-e0): new Generic device (/org/freedesktop/NetworkManager/Devices/39) Dec 5 05:09:11 localhost nova_compute[280228]: 2025-12-05 10:09:11.978 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:11 localhost ovn_controller[153000]: 2025-12-05T10:09:11Z|00219|binding|INFO|Claiming lport 61a9f09b-e06b-4f36-94a8-78f2927a92da for this chassis. Dec 5 05:09:11 localhost ovn_controller[153000]: 2025-12-05T10:09:11Z|00220|binding|INFO|61a9f09b-e06b-4f36-94a8-78f2927a92da: Claiming unknown Dec 5 05:09:11 localhost systemd-udevd[314468]: Network interface NamePolicy= disabled on kernel command line. 
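
The `container kill` events followed immediately by dnsmasq re-reading addn_hosts/host/opts (and the earlier `exiting on receipt of SIGTERM` at 05:08:52) line up with dnsmasq's documented signal handling: SIGHUP re-reads hosts and DHCP-hosts files, SIGTERM shuts the daemon down; the DHCP agent delivers both through the container runtime. A sketch, taking the PID from the log entries above:

```python
import os
import signal

DNSMASQ_PID = 314362  # PID of the qdhcp-d45c9473... dnsmasq instance, from the log

# SIGHUP: re-read /etc/hosts, --addn-hosts and --dhcp-hostsfile data;
# this is what produces the "read .../addn_hosts - 1 addresses" lines.
os.kill(DNSMASQ_PID, signal.SIGHUP)

# SIGTERM: clean shutdown, logged as "exiting on receipt of SIGTERM".
# os.kill(DNSMASQ_PID, signal.SIGTERM)
```
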
Dec 5 05:09:12 localhost journal[228791]: ethtool ioctl error on tap61a9f09b-e0: No such device Dec 5 05:09:12 localhost ovn_controller[153000]: 2025-12-05T10:09:12Z|00221|binding|INFO|Setting lport 61a9f09b-e06b-4f36-94a8-78f2927a92da ovn-installed in OVS Dec 5 05:09:12 localhost nova_compute[280228]: 2025-12-05 10:09:12.013 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:12 localhost journal[228791]: ethtool ioctl error on tap61a9f09b-e0: No such device Dec 5 05:09:12 localhost journal[228791]: ethtool ioctl error on tap61a9f09b-e0: No such device Dec 5 05:09:12 localhost journal[228791]: ethtool ioctl error on tap61a9f09b-e0: No such device Dec 5 05:09:12 localhost journal[228791]: ethtool ioctl error on tap61a9f09b-e0: No such device Dec 5 05:09:12 localhost journal[228791]: ethtool ioctl error on tap61a9f09b-e0: No such device Dec 5 05:09:12 localhost journal[228791]: ethtool ioctl error on tap61a9f09b-e0: No such device Dec 5 05:09:12 localhost journal[228791]: ethtool ioctl error on tap61a9f09b-e0: No such device Dec 5 05:09:12 localhost nova_compute[280228]: 2025-12-05 10:09:12.063 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:12 localhost nova_compute[280228]: 2025-12-05 10:09:12.092 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:12 localhost ovn_controller[153000]: 2025-12-05T10:09:12Z|00222|binding|INFO|Setting lport 61a9f09b-e06b-4f36-94a8-78f2927a92da up in Southbound Dec 5 05:09:12 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:12.119 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-b32052a9-ab3a-4b7b-a7e7-65b5751ed816', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b32052a9-ab3a-4b7b-a7e7-65b5751ed816', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b042ca58df6348e1a29311c5a517d4d4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b172b7-ec76-4399-b3ea-1135a9577535, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=61a9f09b-e06b-4f36-94a8-78f2927a92da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:09:12 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:12.120 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 61a9f09b-e06b-4f36-94a8-78f2927a92da in datapath b32052a9-ab3a-4b7b-a7e7-65b5751ed816 bound to our chassis#033[00m Dec 5 05:09:12 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:12.122 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for 
network b32052a9-ab3a-4b7b-a7e7-65b5751ed816 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:09:12 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:12.123 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0c853c-99a2-4892-81db-488ef4a8ee80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:09:12 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:12.149 261902 INFO neutron.agent.dhcp.agent [None req-179c075e-9072-4547-9cde-57e329a4a652 - - - - - -] DHCP configuration for ports {'edc99e3b-cfc5-40aa-ab6b-fcacff240a1f'} is completed#033[00m Dec 5 05:09:12 localhost nova_compute[280228]: 2025-12-05 10:09:12.206 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e132 do_prune osdmap full prune enabled Dec 5 05:09:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e133 e133: 6 total, 6 up, 6 in Dec 5 05:09:12 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e133: 6 total, 6 up, 6 in Dec 5 05:09:12 localhost podman[314539]: Dec 5 05:09:12 localhost podman[314539]: 2025-12-05 10:09:12.978171286 +0000 UTC m=+0.092215912 container create 806f096ad4eb2802d19199cb30551dfe97a36c9bdfe559762c787b57eaf37020 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b32052a9-ab3a-4b7b-a7e7-65b5751ed816, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:09:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:09:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:09:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:09:13 localhost systemd[1]: Started libpod-conmon-806f096ad4eb2802d19199cb30551dfe97a36c9bdfe559762c787b57eaf37020.scope. Dec 5 05:09:13 localhost podman[314539]: 2025-12-05 10:09:12.934739797 +0000 UTC m=+0.048784493 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:09:13 localhost systemd[1]: Started libcrun container. 
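
Each `neutron-dnsmasq-qdhcp-<network-id>` container above wraps one dnsmasq instance whose state lives under `/var/lib/neutron/dhcp/<network-id>/`; "DHCP, static leases only ... lease time 1d" plus the `read .../host|opts|addn_hosts` lines correspond to a `static` dhcp-range combined with hostsfile/optsfile/addn-hosts options. A hedged reconstruction of the relevant argv for the d45c9473 network seen earlier (flag subset only; the exact tag naming inside `--dhcp-range` varies across neutron releases and is an assumption here):

```python
# Network and subnet IDs taken from the reload_allocations entries above;
# everything else reconstructs the flag pattern, not the agent's exact argv.
NETWORK_ID = "d45c9473-9c2c-4ccf-b293-35b702b04534"
SUBNET_ID = "493e06e1-4d0e-42f1-a211-5f8b761d79c9"
STATE = f"/var/lib/neutron/dhcp/{NETWORK_ID}"

dnsmasq_argv = [
    "dnsmasq", "--no-hosts", "--no-resolv",
    f"--dhcp-hostsfile={STATE}/host",    # per-port static leases ("host")
    f"--addn-hosts={STATE}/addn_hosts",  # DNS names ("addn_hosts - 1 addresses")
    f"--dhcp-optsfile={STATE}/opts",     # per-subnet/per-port options ("opts")
    # "static" + 86400s matches "static leases only on 10.100.0.0, lease time 1d".
    f"--dhcp-range=set:subnet-{SUBNET_ID},10.100.0.0,static,86400s",
]
print(" ".join(dnsmasq_argv))
```

The `warning: no upstream servers configured` at startup is consistent with `--no-resolv` and no `--server` entries: the instance serves only its own subnet's names.
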
Dec 5 05:09:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26e0bfdeed23f416e507af3db01a176da8801a38027d0e18d6f51daf3b37ad3b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:09:13 localhost podman[314539]: 2025-12-05 10:09:13.062200376 +0000 UTC m=+0.176245012 container init 806f096ad4eb2802d19199cb30551dfe97a36c9bdfe559762c787b57eaf37020 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b32052a9-ab3a-4b7b-a7e7-65b5751ed816, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 5 05:09:13 localhost podman[314539]: 2025-12-05 10:09:13.074130061 +0000 UTC m=+0.188174697 container start 806f096ad4eb2802d19199cb30551dfe97a36c9bdfe559762c787b57eaf37020 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b32052a9-ab3a-4b7b-a7e7-65b5751ed816, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:09:13 localhost dnsmasq[314581]: started, version 2.85 cachesize 150
Dec 5 05:09:13 localhost dnsmasq[314581]: DNS service limited to local subnets
Dec 5 05:09:13 localhost dnsmasq[314581]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:09:13 localhost dnsmasq[314581]: warning: no upstream servers configured
Dec 5 05:09:13 localhost dnsmasq-dhcp[314581]: DHCP, static leases only on 10.101.0.0, lease time 1d
Dec 5 05:09:13 localhost dnsmasq[314581]: read /var/lib/neutron/dhcp/b32052a9-ab3a-4b7b-a7e7-65b5751ed816/addn_hosts - 0 addresses
Dec 5 05:09:13 localhost dnsmasq-dhcp[314581]: read /var/lib/neutron/dhcp/b32052a9-ab3a-4b7b-a7e7-65b5751ed816/host
Dec 5 05:09:13 localhost dnsmasq-dhcp[314581]: read /var/lib/neutron/dhcp/b32052a9-ab3a-4b7b-a7e7-65b5751ed816/opts
Dec 5 05:09:13 localhost podman[314555]: 2025-12-05 10:09:13.121393227 +0000 UTC m=+0.089334274 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 5 05:09:13 localhost podman[314555]: 2025-12-05 10:09:13.128599088 +0000 UTC m=+0.096540195 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3)
Dec 5 05:09:13 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
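The dnsmasq messages above show neutron's per-network layout: one directory per network under /var/lib/neutron/dhcp/<network_id>/ holding the host, addn_hosts, and opts files dnsmasq reports reading. A minimal sketch for dumping them (paths taken from the log; no neutron API involved):

    # Illustrative only: print the per-network dnsmasq files the log shows
    # dnsmasq reading after start.
    from pathlib import Path

    net = "b32052a9-ab3a-4b7b-a7e7-65b5751ed816"
    base = Path("/var/lib/neutron/dhcp") / net
    for name in ("host", "addn_hosts", "opts"):
        f = base / name
        print(f"--- {f} ---")
        print(f.read_text() if f.exists() else "(missing)")

When the agent later logs "read ... addn_hosts - N addresses", N tracks the entries in these files after a port change.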
Dec 5 05:09:13 localhost podman[314553]: 2025-12-05 10:09:13.179632118 +0000 UTC m=+0.151853375 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 05:09:13 localhost podman[314553]: 2025-12-05 10:09:13.210308667 +0000 UTC m=+0.182529934 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:09:13 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
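Each health_status/exec_died pair above comes from a transient systemd unit running `/usr/bin/podman healthcheck run <container>` (the "Started /usr/bin/podman healthcheck run ..." lines earlier): the check process exits, podman records the result, and the unit deactivates. A minimal sketch of the same invocation, assuming podman is available and the container defines a healthcheck:

    # Illustrative only: run a container's configured healthcheck, as the
    # transient units in this log do. Exit status 0 means the check passed.
    import subprocess

    def healthcheck(container: str) -> bool:
        r = subprocess.run(["podman", "healthcheck", "run", container])
        return r.returncode == 0

    print(healthcheck("podman_exporter"))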
Dec 5 05:09:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 5.5 KiB/s wr, 90 op/s
Dec 5 05:09:13 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:13.248 261902 INFO neutron.agent.dhcp.agent [None req-de7a8fe0-ad26-4650-9ec6-b1bf68c8d97d - - - - - -] DHCP configuration for ports {'a92d7fd4-70ae-4dd4-8c03-88b3bf0a9545'} is completed#033[00m
Dec 5 05:09:13 localhost podman[314554]: 2025-12-05 10:09:13.259292554 +0000 UTC m=+0.231848992 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:09:13 localhost podman[314554]: 2025-12-05 10:09:13.264805053 +0000 UTC m=+0.237361421 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:09:13 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:09:13 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:13.404 2 INFO neutron.agent.securitygroups_rpc [None req-c4f876c6-2e34-4bd7-ac97-65e12853c2c6 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']#033[00m
Dec 5 05:09:13 localhost nova_compute[280228]: 2025-12-05 10:09:13.578 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:09:13 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:13.808 2 INFO neutron.agent.securitygroups_rpc [None req-5baf0550-700d-4270-8f26-698d0c93f8dd 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']#033[00m
Dec 5 05:09:13 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:13.863 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:13Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=851fad6d-99ca-432b-9505-ec8b3be219de, ip_allocation=immediate, mac_address=fa:16:3e:78:71:41, name=tempest-FloatingIPTestJSON-1002428822, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:08:54Z, description=, dns_domain=, id=d45c9473-9c2c-4ccf-b293-35b702b04534, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-460751711, port_security_enabled=True, project_id=9911350e2d5148098ee9d947cc452035, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30190, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1623, status=ACTIVE, subnets=['493e06e1-4d0e-42f1-a211-5f8b761d79c9'], tags=[], tenant_id=9911350e2d5148098ee9d947cc452035, updated_at=2025-12-05T10:08:58Z, vlan_transparent=None, network_id=d45c9473-9c2c-4ccf-b293-35b702b04534, port_security_enabled=True, project_id=9911350e2d5148098ee9d947cc452035, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['69a903c5-0a73-4330-a64c-53b150f2fa02'], standard_attr_id=1715, status=DOWN, tags=[], tenant_id=9911350e2d5148098ee9d947cc452035, updated_at=2025-12-05T10:09:13Z on network d45c9473-9c2c-4ccf-b293-35b702b04534#033[00m
Dec 5 05:09:14 localhost podman[314634]: 2025-12-05 10:09:14.114577795 +0000 UTC m=+0.070985093 container kill f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d45c9473-9c2c-4ccf-b293-35b702b04534, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 5 05:09:14 localhost dnsmasq[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/addn_hosts - 2 addresses
Dec 5 05:09:14 localhost dnsmasq-dhcp[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/host
Dec 5 05:09:14 localhost dnsmasq-dhcp[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/opts
Dec 5 05:09:14 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:14.135 2 INFO neutron.agent.securitygroups_rpc [None req-dcb4f2a3-25e0-41ca-809b-3ef76d8b07ac b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']#033[00m
Dec 5 05:09:14 localhost nova_compute[280228]: 2025-12-05 10:09:14.292 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:14 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:14.441 261902 INFO neutron.agent.dhcp.agent [None req-13175a26-d212-4997-8e7d-32c9bb1d9e47 - - - - - -] DHCP configuration for ports {'851fad6d-99ca-432b-9505-ec8b3be219de'} is completed#033[00m
Dec 5 05:09:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e133 do_prune osdmap full prune enabled
Dec 5 05:09:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e134 e134: 6 total, 6 up, 6 in
Dec 5 05:09:14 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e134: 6 total, 6 up, 6 in
Dec 5 05:09:15 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:15.052 2 INFO neutron.agent.securitygroups_rpc [None req-e013afb9-bd7a-41a0-8c1d-6822c775054e 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']#033[00m
Dec 5 05:09:15 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:15.086 2 INFO neutron.agent.securitygroups_rpc [None req-b2db1434-9482-4b8c-b5b8-dd323cecd7fe 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m
Dec 5 05:09:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:09:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:09:15 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:15.162 2 INFO neutron.agent.securitygroups_rpc [None req-2f7535df-8614-4bd1-9847-9f7ca31afb50 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']#033[00m
Dec 5 05:09:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:09:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:09:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:09:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:09:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 5.5 KiB/s wr, 90 op/s
Dec 5 05:09:15 localhost dnsmasq[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/addn_hosts - 1 addresses
Dec 5 05:09:15 localhost dnsmasq-dhcp[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/host
Dec 5 05:09:15 localhost dnsmasq-dhcp[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/opts
Dec 5 05:09:15 localhost podman[314672]: 2025-12-05 10:09:15.292327079 +0000 UTC m=+0.056110627 container kill f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d45c9473-9c2c-4ccf-b293-35b702b04534, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 5 05:09:15 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:15.559 261902 INFO neutron.agent.linux.ip_lib [None req-5cc98d92-70f2-4dca-82ec-6d191809eaa4 - - - - - -] Device tapfa0f3a2a-a0 cannot be used as it has no MAC address#033[00m
Dec 5 05:09:15 localhost nova_compute[280228]: 2025-12-05 10:09:15.586 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:15 localhost kernel: device tapfa0f3a2a-a0 entered promiscuous mode
Dec 5 05:09:15 localhost NetworkManager[5960]: [1764929355.5942] manager: (tapfa0f3a2a-a0): new Generic device (/org/freedesktop/NetworkManager/Devices/40)
Dec 5 05:09:15 localhost ovn_controller[153000]: 2025-12-05T10:09:15Z|00223|binding|INFO|Claiming lport fa0f3a2a-a04e-42cf-bb33-9ee828a83808 for this chassis.
Dec 5 05:09:15 localhost ovn_controller[153000]: 2025-12-05T10:09:15Z|00224|binding|INFO|fa0f3a2a-a04e-42cf-bb33-9ee828a83808: Claiming unknown
Dec 5 05:09:15 localhost nova_compute[280228]: 2025-12-05 10:09:15.596 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:15 localhost systemd-udevd[314703]: Network interface NamePolicy= disabled on kernel command line.
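The "Device tapfa0f3a2a-a0 cannot be used as it has no MAC address" message and the bursts of "ethtool ioctl error ... No such device" around it are transient races: consumers poll the tap device in the window between OVS creating it and the kernel fully initialising it. A minimal sketch of the same check against sysfs (the helper name is illustrative, not neutron code):

    # Illustrative only: report whether a tap device exists and has a MAC yet,
    # which is the race the messages above are logging.
    from pathlib import Path

    def tap_state(dev: str) -> str:
        addr = Path("/sys/class/net") / dev / "address"
        if not addr.exists():
            return "no such device"
        mac = addr.read_text().strip()
        return mac if mac and mac != "00:00:00:00:00:00" else "no MAC yet"

    print(tap_state("tapfa0f3a2a-a0"))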
Dec 5 05:09:15 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:15.608 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-f56f9804-e9c1-4c76-a615-7cc8c95c5866', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f56f9804-e9c1-4c76-a615-7cc8c95c5866', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4da81250-bc43-41b7-a309-4168568a6167, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fa0f3a2a-a04e-42cf-bb33-9ee828a83808) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:09:15 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:15.610 158820 INFO neutron.agent.ovn.metadata.agent [-] Port fa0f3a2a-a04e-42cf-bb33-9ee828a83808 in datapath f56f9804-e9c1-4c76-a615-7cc8c95c5866 bound to our chassis#033[00m
Dec 5 05:09:15 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:15.612 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f56f9804-e9c1-4c76-a615-7cc8c95c5866 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 5 05:09:15 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:15.613 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[c265c007-f492-41b6-864d-2035d461ecea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:09:15 localhost journal[228791]: ethtool ioctl error on tapfa0f3a2a-a0: No such device
Dec 5 05:09:15 localhost journal[228791]: ethtool ioctl error on tapfa0f3a2a-a0: No such device
Dec 5 05:09:15 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:15.633 2 INFO neutron.agent.securitygroups_rpc [None req-77179fa3-d33d-4e1c-8e3f-5a8b2508ceb3 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']#033[00m
Dec 5 05:09:15 localhost ovn_controller[153000]: 2025-12-05T10:09:15Z|00225|binding|INFO|Setting lport fa0f3a2a-a04e-42cf-bb33-9ee828a83808 ovn-installed in OVS
Dec 5 05:09:15 localhost ovn_controller[153000]: 2025-12-05T10:09:15Z|00226|binding|INFO|Setting lport fa0f3a2a-a04e-42cf-bb33-9ee828a83808 up in Southbound
Dec 5 05:09:15 localhost nova_compute[280228]: 2025-12-05 10:09:15.636 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:15 localhost journal[228791]: ethtool ioctl error on tapfa0f3a2a-a0: No such device
Dec 5 05:09:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:09:15 localhost journal[228791]: ethtool ioctl error on tapfa0f3a2a-a0: No such device
Dec 5 05:09:15 localhost journal[228791]: ethtool ioctl error on tapfa0f3a2a-a0: No such device
Dec 5 05:09:15 localhost journal[228791]: ethtool ioctl error on tapfa0f3a2a-a0: No such device
Dec 5 05:09:15 localhost journal[228791]: ethtool ioctl error on tapfa0f3a2a-a0: No such device
Dec 5 05:09:15 localhost journal[228791]: ethtool ioctl error on tapfa0f3a2a-a0: No such device
Dec 5 05:09:15 localhost nova_compute[280228]: 2025-12-05 10:09:15.677 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:15 localhost nova_compute[280228]: 2025-12-05 10:09:15.714 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:16 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:16.335 2 INFO neutron.agent.securitygroups_rpc [None req-6f1bd688-2005-49d6-a3a1-1fee2f80d98c 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m
Dec 5 05:09:16 localhost nova_compute[280228]: 2025-12-05 10:09:16.504 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:09:16 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:16.506 2 INFO neutron.agent.securitygroups_rpc [None req-ead10424-0bad-48ec-bd99-83fac4a5ddf3 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']#033[00m
Dec 5 05:09:16 localhost nova_compute[280228]: 2025-12-05 10:09:16.531 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:09:16 localhost nova_compute[280228]: 2025-12-05 10:09:16.556 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:09:16 localhost nova_compute[280228]: 2025-12-05 10:09:16.556 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:09:16 localhost nova_compute[280228]: 2025-12-05 10:09:16.557 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:09:16 localhost nova_compute[280228]: 2025-12-05 10:09:16.557 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 5 05:09:16 localhost nova_compute[280228]: 2025-12-05 10:09:16.557 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 05:09:16 localhost dnsmasq[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/addn_hosts - 0 addresses
Dec 5 05:09:16 localhost dnsmasq-dhcp[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/host
Dec 5 05:09:16 localhost podman[314769]: 2025-12-05 10:09:16.615703916 +0000 UTC m=+0.058394556 container kill f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d45c9473-9c2c-4ccf-b293-35b702b04534, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:09:16 localhost dnsmasq-dhcp[314362]: read /var/lib/neutron/dhcp/d45c9473-9c2c-4ccf-b293-35b702b04534/opts
Dec 5 05:09:16 localhost ovn_controller[153000]: 2025-12-05T10:09:16Z|00227|binding|INFO|Releasing lport 8156c2e5-a2ed-4d68-bed8-b38c385e9dfd from this chassis (sb_readonly=0)
Dec 5 05:09:16 localhost kernel: device tap8156c2e5-a2 left promiscuous mode
Dec 5 05:09:16 localhost ovn_controller[153000]: 2025-12-05T10:09:16Z|00228|binding|INFO|Setting lport 8156c2e5-a2ed-4d68-bed8-b38c385e9dfd down in Southbound
Dec 5 05:09:16 localhost nova_compute[280228]: 2025-12-05 10:09:16.808 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:16 localhost podman[314830]:
Dec 5 05:09:16 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:16.814 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-d45c9473-9c2c-4ccf-b293-35b702b04534', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d45c9473-9c2c-4ccf-b293-35b702b04534', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9911350e2d5148098ee9d947cc452035', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be692bab-9874-45ad-963d-04d5592ea36c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8156c2e5-a2ed-4d68-bed8-b38c385e9dfd) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:09:16 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:16.815 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 8156c2e5-a2ed-4d68-bed8-b38c385e9dfd in datapath d45c9473-9c2c-4ccf-b293-35b702b04534 unbound from our chassis#033[00m
Dec 5 05:09:16 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:16.817 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d45c9473-9c2c-4ccf-b293-35b702b04534, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 5 05:09:16 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:16.818 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[1d51c608-2932-4dfc-8c61-6d48c4a71597]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:09:16 localhost podman[314830]: 2025-12-05 10:09:16.818751587 +0000 UTC m=+0.070472486 container create 0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f56f9804-e9c1-4c76-a615-7cc8c95c5866, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 5 05:09:16 localhost nova_compute[280228]: 2025-12-05 10:09:16.846 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:16 localhost systemd[1]: Started libpod-conmon-0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266.scope.
Dec 5 05:09:16 localhost systemd[1]: Started libcrun container.
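The resource audit above shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` to size the RBD-backed disk pool; the ceph-mon "handle_command mon_command({"prefix": "df", ...})" audit lines are the other side of the same call. A minimal sketch reproducing it, assuming the ceph CLI, the openstack client keyring, and /etc/ceph/ceph.conf are present as they are on this host:

    # Illustrative only: the exact command nova-compute logs above, with the
    # JSON output parsed the same way (cluster-wide totals from "stats").
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])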
Dec 5 05:09:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6f0c18deed076cb374cbb7b5b6439fb5304187a158e4a2b62f8fb52dad74bab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:09:16 localhost podman[314830]: 2025-12-05 10:09:16.785338055 +0000 UTC m=+0.037058944 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:09:16 localhost podman[314830]: 2025-12-05 10:09:16.891680818 +0000 UTC m=+0.143401717 container init 0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f56f9804-e9c1-4c76-a615-7cc8c95c5866, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:09:16 localhost podman[314830]: 2025-12-05 10:09:16.906375987 +0000 UTC m=+0.158096886 container start 0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f56f9804-e9c1-4c76-a615-7cc8c95c5866, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 5 05:09:16 localhost dnsmasq[314850]: started, version 2.85 cachesize 150
Dec 5 05:09:16 localhost dnsmasq[314850]: DNS service limited to local subnets
Dec 5 05:09:16 localhost dnsmasq[314850]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:09:16 localhost dnsmasq[314850]: warning: no upstream servers configured
Dec 5 05:09:16 localhost dnsmasq-dhcp[314850]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:09:16 localhost dnsmasq[314850]: read /var/lib/neutron/dhcp/f56f9804-e9c1-4c76-a615-7cc8c95c5866/addn_hosts - 0 addresses
Dec 5 05:09:16 localhost dnsmasq-dhcp[314850]: read /var/lib/neutron/dhcp/f56f9804-e9c1-4c76-a615-7cc8c95c5866/host
Dec 5 05:09:16 localhost dnsmasq-dhcp[314850]: read /var/lib/neutron/dhcp/f56f9804-e9c1-4c76-a615-7cc8c95c5866/opts
Dec 5 05:09:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:09:16 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1030265780' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:09:17 localhost nova_compute[280228]: 2025-12-05 10:09:17.000 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 05:09:17 localhost nova_compute[280228]: 2025-12-05 10:09:17.072 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 05:09:17 localhost nova_compute[280228]: 2025-12-05 10:09:17.073 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 05:09:17 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:17.109 261902 INFO neutron.agent.dhcp.agent [None req-5de3347c-57b8-4296-806a-edfc8e70a830 - - - - - -] DHCP configuration for ports {'d98c4c18-9403-49b8-b49d-8c9395466c73'} is completed#033[00m
Dec 5 05:09:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 152 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 3.1 MiB/s rd, 300 KiB/s wr, 185 op/s
Dec 5 05:09:17 localhost nova_compute[280228]: 2025-12-05 10:09:17.257 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:17 localhost nova_compute[280228]: 2025-12-05 10:09:17.330 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 5 05:09:17 localhost nova_compute[280228]: 2025-12-05 10:09:17.331 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11195MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 5 05:09:17 localhost nova_compute[280228]: 2025-12-05 10:09:17.332 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:09:17 localhost nova_compute[280228]: 2025-12-05 10:09:17.332 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:09:17 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:17.552 2 INFO neutron.agent.securitygroups_rpc [None req-80d3fbb2-b4b3-4e6d-9a94-ccedba8b6568 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']#033[00m
Dec 5 05:09:17 localhost nova_compute[280228]: 2025-12-05 10:09:17.665 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 5 05:09:17 localhost nova_compute[280228]: 2025-12-05 10:09:17.665 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 5 05:09:17 localhost nova_compute[280228]: 2025-12-05 10:09:17.665 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 5 05:09:18 localhost nova_compute[280228]: 2025-12-05 10:09:18.195 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing inventories for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 5 05:09:18 localhost nova_compute[280228]: 2025-12-05 10:09:18.258 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating ProviderTree inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 5 05:09:18 localhost nova_compute[280228]: 2025-12-05 10:09:18.259 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 5 05:09:18 localhost nova_compute[280228]: 2025-12-05 10:09:18.274 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing aggregate associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 5 05:09:18 localhost nova_compute[280228]: 2025-12-05 10:09:18.296 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing trait associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, traits: COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 5 05:09:18 localhost nova_compute[280228]: 2025-12-05 10:09:18.327 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 05:09:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:09:18 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/4231959628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:09:18 localhost nova_compute[280228]: 2025-12-05 10:09:18.824 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 05:09:18 localhost nova_compute[280228]: 2025-12-05 10:09:18.831 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 5 05:09:18 localhost nova_compute[280228]: 2025-12-05 10:09:18.851 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 5 05:09:18 localhost nova_compute[280228]: 2025-12-05 10:09:18.854 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 5 05:09:18 localhost nova_compute[280228]: 2025-12-05 10:09:18.854 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:09:18 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:18.974 2 INFO neutron.agent.securitygroups_rpc [None req-9a9508f4-ec27-42b4-bc60-575687ed92dd b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']#033[00m
Dec 5 05:09:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 152 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 651 KiB/s rd, 254 KiB/s wr, 115 op/s
Dec 5 05:09:19 localhost nova_compute[280228]: 2025-12-05 10:09:19.297 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:19 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:19.472 2 INFO neutron.agent.securitygroups_rpc [None req-13594d5e-1730-4625-a1f2-65b4d178b28d 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']#033[00m
Dec 5 05:09:19 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:19.566 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:18Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7c3dc46d-7f13-4987-9b51-4425d1d4a2e9, ip_allocation=immediate, mac_address=fa:16:3e:55:3f:b0, name=tempest-PortsTestJSON-4477727, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:10Z, description=, dns_domain=, id=f56f9804-e9c1-4c76-a615-7cc8c95c5866, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1927919301, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38198, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1706, status=ACTIVE, subnets=['689dc007-7230-473c-a6f4-8f8582ba3841'], tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:09:14Z, vlan_transparent=None, network_id=f56f9804-e9c1-4c76-a615-7cc8c95c5866, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['6de03d90-430f-407b-8d0d-b1ca66c7d4e8'], standard_attr_id=1728, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:09:18Z on network f56f9804-e9c1-4c76-a615-7cc8c95c5866#033[00m
Dec 5 05:09:19 localhost dnsmasq[314850]: read /var/lib/neutron/dhcp/f56f9804-e9c1-4c76-a615-7cc8c95c5866/addn_hosts - 1 addresses
Dec 5 05:09:19 localhost dnsmasq-dhcp[314850]: read /var/lib/neutron/dhcp/f56f9804-e9c1-4c76-a615-7cc8c95c5866/host
Dec 5 05:09:19 localhost podman[314892]: 2025-12-05 10:09:19.805897744 +0000 UTC m=+0.081052650 container kill 0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f56f9804-e9c1-4c76-a615-7cc8c95c5866, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 5 05:09:19 localhost dnsmasq-dhcp[314850]: read /var/lib/neutron/dhcp/f56f9804-e9c1-4c76-a615-7cc8c95c5866/opts
Dec 5 05:09:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:09:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 05:09:19 localhost podman[239519]: time="2025-12-05T10:09:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:09:19 localhost dnsmasq[314362]: exiting on receipt of SIGTERM
Dec 5 05:09:19 localhost systemd[1]: tmp-crun.Q3GKL0.mount: Deactivated successfully.
Dec 5 05:09:19 localhost systemd[1]: libpod-f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76.scope: Deactivated successfully.
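The inventory blocks the tracker logs above combine a total, a reserved amount, and an allocation ratio per resource class; placement derives schedulable capacity as (total - reserved) * allocation_ratio. A minimal sketch with the values from this log (VCPU is heavily oversubscribed at 16.0, memory and disk are not):

    # Illustrative only: placement capacity math for the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0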
Dec 5 05:09:19 localhost podman[314922]: 2025-12-05 10:09:19.904547231 +0000 UTC m=+0.072087665 container kill f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d45c9473-9c2c-4ccf-b293-35b702b04534, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 5 05:09:19 localhost podman[314920]: 2025-12-05 10:09:19.91395395 +0000 UTC m=+0.089953102 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:09:19 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:19.922 2 INFO neutron.agent.securitygroups_rpc [None req-6dab5cbb-8c73-47ff-8076-4d4b7501b55c b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['0ad40d69-a471-4ad9-8f7d-a729239c8e8d']#033[00m Dec 5 05:09:19 localhost podman[314920]: 2025-12-05 10:09:19.929723802 +0000 UTC m=+0.105722944 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:09:19 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:09:19 localhost podman[314962]: 2025-12-05 10:09:19.971028625 +0000 UTC m=+0.057885552 container died f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d45c9473-9c2c-4ccf-b293-35b702b04534, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:09:20 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:20.072 261902 INFO neutron.agent.dhcp.agent [None req-70c7a067-c703-44b1-bdee-7c3dee85d635 - - - - - -] DHCP configuration for ports {'7c3dc46d-7f13-4987-9b51-4425d1d4a2e9'} is completed#033[00m Dec 5 05:09:20 localhost podman[314919]: 2025-12-05 10:09:20.073050455 +0000 UTC m=+0.248669086 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 5 05:09:20 localhost podman[314919]: 2025-12-05 10:09:20.109851272 +0000 UTC m=+0.285469913 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:09:20 localhost podman[314962]: 2025-12-05 10:09:20.111971716 +0000 UTC m=+0.198828623 container cleanup f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d45c9473-9c2c-4ccf-b293-35b702b04534, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 05:09:20 localhost systemd[1]: libpod-conmon-f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76.scope: Deactivated successfully. Dec 5 05:09:20 localhost podman[314976]: 2025-12-05 10:09:20.12876997 +0000 UTC m=+0.199276426 container remove f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d45c9473-9c2c-4ccf-b293-35b702b04534, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 5 05:09:20 localhost podman[239519]: @ - - [05/Dec/2025:10:09:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 161582 "" "Go-http-client/1.1" Dec 5 05:09:20 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:20.213 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:09:20 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
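[Annotation] The `GET /v4.9.3/libpod/containers/json?...` access line above is a Go HTTP client talking to podman's libpod REST API over its local socket. A sketch of the same call from Python, assuming the usual rootful socket path /run/podman/podman.sock (the path is an assumption; the log does not show it):

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # Plain HTTP over a Unix socket, which is how the podman API is served.
    def __init__(self, sock_path: str):
        super().__init__("localhost")
        self.sock_path = sock_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.sock_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
# Same libpod endpoint the Go client hits in the log above.
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, len(json.loads(resp.read())), "containers")
```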
Dec 5 05:09:20 localhost podman[239519]: @ - - [05/Dec/2025:10:09:20 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20209 "" "Go-http-client/1.1" Dec 5 05:09:20 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:20.353 2 INFO neutron.agent.securitygroups_rpc [None req-29f332b2-6fbc-4386-b09c-47bdef692327 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']#033[00m Dec 5 05:09:20 localhost nova_compute[280228]: 2025-12-05 10:09:20.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:09:20 localhost nova_compute[280228]: 2025-12-05 10:09:20.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:09:20 localhost nova_compute[280228]: 2025-12-05 10:09:20.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:09:20 localhost dnsmasq[314850]: read /var/lib/neutron/dhcp/f56f9804-e9c1-4c76-a615-7cc8c95c5866/addn_hosts - 0 addresses Dec 5 05:09:20 localhost dnsmasq-dhcp[314850]: read /var/lib/neutron/dhcp/f56f9804-e9c1-4c76-a615-7cc8c95c5866/host Dec 5 05:09:20 localhost podman[315032]: 2025-12-05 10:09:20.578477655 +0000 UTC m=+0.054533309 container kill 0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f56f9804-e9c1-4c76-a615-7cc8c95c5866, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:09:20 localhost dnsmasq-dhcp[314850]: read /var/lib/neutron/dhcp/f56f9804-e9c1-4c76-a615-7cc8c95c5866/opts Dec 5 05:09:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:09:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e134 do_prune osdmap full prune enabled Dec 5 05:09:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e135 e135: 6 total, 6 up, 6 in Dec 5 05:09:20 localhost nova_compute[280228]: 2025-12-05 10:09:20.661 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:09:20 localhost nova_compute[280228]: 2025-12-05 10:09:20.662 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:09:20 localhost 
nova_compute[280228]: 2025-12-05 10:09:20.662 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:09:20 localhost nova_compute[280228]: 2025-12-05 10:09:20.662 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:09:20 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e135: 6 total, 6 up, 6 in Dec 5 05:09:20 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:20.733 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:09:20 localhost systemd[1]: var-lib-containers-storage-overlay-a90e00473d50109359ec191b6f08683b39432de4078efebb1635b0196576783b-merged.mount: Deactivated successfully. Dec 5 05:09:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f21ddddfe6410c059ca038339a4da76f57cdeb9f084f63618de0950dbb287b76-userdata-shm.mount: Deactivated successfully. Dec 5 05:09:20 localhost systemd[1]: run-netns-qdhcp\x2dd45c9473\x2d9c2c\x2d4ccf\x2db293\x2d35b702b04534.mount: Deactivated successfully. Dec 5 05:09:20 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:20.789 2 INFO neutron.agent.securitygroups_rpc [None req-12e581bc-a85c-4e19-8176-998990ad9ff0 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:09:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 150 MiB data, 799 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 2.6 MiB/s wr, 117 op/s Dec 5 05:09:21 localhost dnsmasq[314850]: exiting on receipt of SIGTERM Dec 5 05:09:21 localhost podman[315067]: 2025-12-05 10:09:21.406159761 +0000 UTC m=+0.057970703 container kill 0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f56f9804-e9c1-4c76-a615-7cc8c95c5866, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 05:09:21 localhost systemd[1]: libpod-0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266.scope: Deactivated successfully. 
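[Annotation] The "container kill" events paired with dnsmasq's "read .../addn_hosts" and "exiting on receipt of SIGTERM" lines are signal deliveries into the per-network dnsmasq container: a HUP makes dnsmasq re-read its addn_hosts/opts files, a TERM ends it when the network is torn down. A sketch under that reading (in this podified deployment a wrapper script issues the podman command; this helper is hypothetical):

```python
import subprocess

def signal_dhcp_container(network_id: str, reload_only: bool = True) -> None:
    # Container names follow the neutron-dnsmasq-qdhcp-<network_id>
    # pattern visible throughout the log above.
    name = f"neutron-dnsmasq-qdhcp-{network_id}"
    sig = "HUP" if reload_only else "TERM"
    subprocess.run(["podman", "kill", "--signal", sig, name], check=True)

signal_dhcp_container("f56f9804-e9c1-4c76-a615-7cc8c95c5866")
```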
Dec 5 05:09:21 localhost podman[315079]: 2025-12-05 10:09:21.485616412 +0000 UTC m=+0.062416951 container died 0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f56f9804-e9c1-4c76-a615-7cc8c95c5866, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 05:09:21 localhost systemd[1]: tmp-crun.J5cmou.mount: Deactivated successfully. Dec 5 05:09:21 localhost podman[315079]: 2025-12-05 10:09:21.521808479 +0000 UTC m=+0.098608988 container cleanup 0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f56f9804-e9c1-4c76-a615-7cc8c95c5866, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:09:21 localhost systemd[1]: libpod-conmon-0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266.scope: Deactivated successfully. Dec 5 05:09:21 localhost podman[315081]: 2025-12-05 10:09:21.566237628 +0000 UTC m=+0.137607881 container remove 0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f56f9804-e9c1-4c76-a615-7cc8c95c5866, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 5 05:09:21 localhost ovn_controller[153000]: 2025-12-05T10:09:21Z|00229|binding|INFO|Releasing lport fa0f3a2a-a04e-42cf-bb33-9ee828a83808 from this chassis (sb_readonly=0) Dec 5 05:09:21 localhost ovn_controller[153000]: 2025-12-05T10:09:21Z|00230|binding|INFO|Setting lport fa0f3a2a-a04e-42cf-bb33-9ee828a83808 down in Southbound Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.578 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:21 localhost kernel: device tapfa0f3a2a-a0 left promiscuous mode Dec 5 05:09:21 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:21.587 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-f56f9804-e9c1-4c76-a615-7cc8c95c5866', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-f56f9804-e9c1-4c76-a615-7cc8c95c5866', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4da81250-bc43-41b7-a309-4168568a6167, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fa0f3a2a-a04e-42cf-bb33-9ee828a83808) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:09:21 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:21.589 158820 INFO neutron.agent.ovn.metadata.agent [-] Port fa0f3a2a-a04e-42cf-bb33-9ee828a83808 in datapath f56f9804-e9c1-4c76-a615-7cc8c95c5866 unbound from our chassis#033[00m Dec 5 05:09:21 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:21.592 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f56f9804-e9c1-4c76-a615-7cc8c95c5866, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:09:21 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:21.593 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[01d53249-b58b-4c41-9388-e3ce8acd5dd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.597 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:21 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:21.641 261902 INFO neutron.agent.dhcp.agent [None req-99d35b37-7854-469f-9ec6-e5aa27f1eb85 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:09:21 localhost ovn_controller[153000]: 2025-12-05T10:09:21Z|00231|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.769 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:21 localhost systemd[1]: var-lib-containers-storage-overlay-b6f0c18deed076cb374cbb7b5b6439fb5304187a158e4a2b62f8fb52dad74bab-merged.mount: Deactivated successfully. Dec 5 05:09:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a8e1dd3eae2cd47bc2d9d8b3f989c7a75771da519f77deff4b1f02eac725266-userdata-shm.mount: Deactivated successfully. Dec 5 05:09:21 localhost systemd[1]: run-netns-qdhcp\x2df56f9804\x2de9c1\x2d4c76\x2da615\x2d7cc8c95c5866.mount: Deactivated successfully. 
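[Annotation] The "Matched UPDATE: PortBindingUpdatedEvent(...)" debug lines above show the OVN metadata agent reacting to Port_Binding row changes through ovsdbapp's event machinery; the constructor arguments (events=('update',), table='Port_Binding', conditions=None) are printed verbatim in the log. A simplified analogue of such an event class (the real one in neutron.agent.ovn.metadata.agent also checks options['requested-chassis']; omitted here):

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortUnboundEvent(row_event.RowEvent):
    def __init__(self):
        # Same (events, table, conditions) shape as the event the
        # agent logs above.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
        self.event_name = 'PortUnboundEvent'

    def run(self, event, row, old):
        # The log shows up=[False] on the updated row when the port
        # goes down on this chassis.
        if row.up == [False]:
            print(f"Port {row.logical_port} unbound from our chassis")
```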
Dec 5 05:09:21 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:21.804 2 INFO neutron.agent.securitygroups_rpc [None req-7b7cfca1-161f-4e8b-b5b6-927b633a76ad b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['1b34fa2e-94a2-4613-9c80-5b5c7ff4a67a']#033[00m Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.879 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.895 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.896 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.896 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.897 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.897 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.898 280232 DEBUG nova.compute.manager [None 
req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.898 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.898 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.917 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 5 05:09:21 localhost nova_compute[280228]: 2025-12-05 10:09:21.917 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:09:22 localhost dnsmasq[314581]: exiting on receipt of SIGTERM Dec 5 05:09:22 localhost podman[315126]: 2025-12-05 10:09:22.170614424 +0000 UTC m=+0.056427197 container kill 806f096ad4eb2802d19199cb30551dfe97a36c9bdfe559762c787b57eaf37020 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b32052a9-ab3a-4b7b-a7e7-65b5751ed816, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Dec 5 05:09:22 localhost systemd[1]: libpod-806f096ad4eb2802d19199cb30551dfe97a36c9bdfe559762c787b57eaf37020.scope: Deactivated successfully. 
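[Annotation] The run of "Running periodic task ComputeManager._..." lines above, including the "CONF.reclaim_instance_interval <= 0, skipping..." early exit, comes from oslo.service's periodic task runner. A toy sketch of how such tasks are registered (spacing and option default here are illustrative, not nova's values):

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF
CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

class ToyManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)
    def _reclaim_queued_deletes(self, context):
        # Mirrors the "<= 0, skipping..." branch recorded in the log.
        if CONF.reclaim_instance_interval <= 0:
            return

# The runner logs each task name as it fires, as seen above.
ToyManager().run_periodic_tasks(None)
```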
Dec 5 05:09:22 localhost podman[315140]: 2025-12-05 10:09:22.239707897 +0000 UTC m=+0.054542649 container died 806f096ad4eb2802d19199cb30551dfe97a36c9bdfe559762c787b57eaf37020 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b32052a9-ab3a-4b7b-a7e7-65b5751ed816, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 5 05:09:22 localhost nova_compute[280228]: 2025-12-05 10:09:22.260 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:22 localhost podman[315140]: 2025-12-05 10:09:22.27508496 +0000 UTC m=+0.089919642 container cleanup 806f096ad4eb2802d19199cb30551dfe97a36c9bdfe559762c787b57eaf37020 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b32052a9-ab3a-4b7b-a7e7-65b5751ed816, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:09:22 localhost systemd[1]: libpod-conmon-806f096ad4eb2802d19199cb30551dfe97a36c9bdfe559762c787b57eaf37020.scope: Deactivated successfully. Dec 5 05:09:22 localhost podman[315142]: 2025-12-05 10:09:22.320340854 +0000 UTC m=+0.127904454 container remove 806f096ad4eb2802d19199cb30551dfe97a36c9bdfe559762c787b57eaf37020 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b32052a9-ab3a-4b7b-a7e7-65b5751ed816, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:09:22 localhost ovn_controller[153000]: 2025-12-05T10:09:22Z|00232|binding|INFO|Releasing lport 61a9f09b-e06b-4f36-94a8-78f2927a92da from this chassis (sb_readonly=0) Dec 5 05:09:22 localhost ovn_controller[153000]: 2025-12-05T10:09:22Z|00233|binding|INFO|Setting lport 61a9f09b-e06b-4f36-94a8-78f2927a92da down in Southbound Dec 5 05:09:22 localhost nova_compute[280228]: 2025-12-05 10:09:22.332 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:22 localhost kernel: device tap61a9f09b-e0 left promiscuous mode Dec 5 05:09:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:22.340 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 
'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-b32052a9-ab3a-4b7b-a7e7-65b5751ed816', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b32052a9-ab3a-4b7b-a7e7-65b5751ed816', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b042ca58df6348e1a29311c5a517d4d4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32b172b7-ec76-4399-b3ea-1135a9577535, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=61a9f09b-e06b-4f36-94a8-78f2927a92da) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:09:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:22.342 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 61a9f09b-e06b-4f36-94a8-78f2927a92da in datapath b32052a9-ab3a-4b7b-a7e7-65b5751ed816 unbound from our chassis#033[00m Dec 5 05:09:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:22.345 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b32052a9-ab3a-4b7b-a7e7-65b5751ed816, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:09:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:22.346 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[ea63729a-527f-4c49-8798-75e1f78c9ef0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:09:22 localhost nova_compute[280228]: 2025-12-05 10:09:22.352 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:22 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:22.534 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:09:22 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:22.544 261902 INFO neutron.agent.dhcp.agent [None req-421cd5d0-4a9b-49bd-ba48-6f11dbf5b284 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:09:22 localhost systemd[1]: var-lib-containers-storage-overlay-26e0bfdeed23f416e507af3db01a176da8801a38027d0e18d6f51daf3b37ad3b-merged.mount: Deactivated successfully. Dec 5 05:09:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-806f096ad4eb2802d19199cb30551dfe97a36c9bdfe559762c787b57eaf37020-userdata-shm.mount: Deactivated successfully. Dec 5 05:09:22 localhost systemd[1]: run-netns-qdhcp\x2db32052a9\x2dab3a\x2d4b7b\x2da7e7\x2d65b5751ed816.mount: Deactivated successfully. 
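[Annotation] The run-netns-qdhcp\x2d....mount units deactivated above are systemd's escaped form of the DHCP namespace bind-mount paths: "/" becomes "-" and literal "-" becomes "\x2d". A tiny decoder for reading such unit names back into paths (hypothetical helper, assuming standard systemd unit-name escaping):

```python
import re

def unescape_unit(name: str) -> str:
    # Strip the unit suffix, turn "-" back into "/", then decode the
    # \xNN escapes systemd uses for characters like "-".
    name = name.removesuffix(".mount")
    path = "/" + name.replace("-", "/")
    return re.sub(r'\\x([0-9a-fA-F]{2})',
                  lambda m: chr(int(m.group(1), 16)), path)

print(unescape_unit(
    r"run-netns-qdhcp\x2df56f9804\x2de9c1\x2d4c76\x2da615\x2d7cc8c95c5866.mount"))
# -> /run/netns/qdhcp-f56f9804-e9c1-4c76-a615-7cc8c95c5866
```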
Dec 5 05:09:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:22.834 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:09:22 localhost nova_compute[280228]: 2025-12-05 10:09:22.835 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:22.836 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:09:22 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:22.850 2 INFO neutron.agent.securitygroups_rpc [None req-07a1f17e-674a-4fbf-8a05-24dee40ce8c0 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:09:22 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:22.926 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:09:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 799 MiB used, 41 GiB / 42 GiB avail; 89 KiB/s rd, 2.5 MiB/s wr, 129 op/s Dec 5 05:09:23 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:23.537 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:09:23 localhost nova_compute[280228]: 2025-12-05 10:09:23.925 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:09:23 localhost ovn_controller[153000]: 2025-12-05T10:09:23Z|00234|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:09:24 localhost nova_compute[280228]: 2025-12-05 10:09:24.003 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:24 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:24.098 2 INFO neutron.agent.securitygroups_rpc [None req-5a43dc0f-43ca-475b-a61a-0ce576c54084 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']#033[00m Dec 5 05:09:24 localhost nova_compute[280228]: 2025-12-05 10:09:24.299 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:24 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:24.420 2 INFO neutron.agent.securitygroups_rpc [None req-777dc251-951f-44b9-a95f-17b8e4f65f88 b84a2c64c68d46e79b15d8ef03d73b26 
012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['b457efc9-4d11-469a-b4d4-526496f62886']#033[00m Dec 5 05:09:24 localhost nova_compute[280228]: 2025-12-05 10:09:24.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:09:24 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:24.794 2 INFO neutron.agent.securitygroups_rpc [None req-dba7a388-5a0b-4bb8-97c5-15372240e74f 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:09:25 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:25.068 2 INFO neutron.agent.securitygroups_rpc [None req-6c302ac3-77ee-4afa-9762-b4a5f48fbd2f 6cc3a7e9b1614bb8bd6dd7f5659d79b5 9911350e2d5148098ee9d947cc452035 - - default default] Security group member updated ['69a903c5-0a73-4330-a64c-53b150f2fa02']#033[00m Dec 5 05:09:25 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:25.126 2 INFO neutron.agent.securitygroups_rpc [None req-3c0d38e2-3a8d-4cf5-a3f0-f0f035f094ce b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['b457efc9-4d11-469a-b4d4-526496f62886']#033[00m Dec 5 05:09:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 145 MiB data, 799 MiB used, 41 GiB / 42 GiB avail; 76 KiB/s rd, 2.1 MiB/s wr, 110 op/s Dec 5 05:09:25 localhost nova_compute[280228]: 2025-12-05 10:09:25.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:09:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:09:25 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:25.660 2 INFO neutron.agent.securitygroups_rpc [None req-c2343547-ec5f-4fdd-903e-6d652b2fffe4 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:09:27 localhost openstack_network_exporter[241668]: ERROR 10:09:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:09:27 localhost openstack_network_exporter[241668]: ERROR 10:09:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:09:27 localhost openstack_network_exporter[241668]: ERROR 10:09:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:09:27 localhost openstack_network_exporter[241668]: ERROR 10:09:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:09:27 localhost openstack_network_exporter[241668]: Dec 5 05:09:27 localhost openstack_network_exporter[241668]: ERROR 10:09:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:09:27 localhost openstack_network_exporter[241668]: Dec 5 05:09:27 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:27.207 2 INFO 
neutron.agent.securitygroups_rpc [None req-693f3af2-582b-4d0d-bc83-33366519a3f6 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['d823dd66-f2af-41ad-b25d-23924a6bc812']#033[00m Dec 5 05:09:27 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:27.238 2 INFO neutron.agent.securitygroups_rpc [None req-03ae0f1a-18c3-4217-838d-d302f124e4eb 5eec71af41824815ba824bf807f8179b 70f3c241260c4833846cef3d99a05e88 - - default default] Security group member updated ['f7939a6f-eb31-4565-924a-cc0204206297']#033[00m Dec 5 05:09:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 1.9 MiB/s wr, 54 op/s Dec 5 05:09:27 localhost nova_compute[280228]: 2025-12-05 10:09:27.264 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:27 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:27.872 2 INFO neutron.agent.securitygroups_rpc [None req-bce7fb8d-315f-4229-8c78-5151cefbd755 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['d823dd66-f2af-41ad-b25d-23924a6bc812']#033[00m Dec 5 05:09:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:09:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:09:29 localhost podman[315171]: 2025-12-05 10:09:29.193402049 +0000 UTC m=+0.082452033 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 5 05:09:29 localhost podman[315171]: 2025-12-05 10:09:29.208728518 +0000 UTC m=+0.097778492 container 
exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3) Dec 5 05:09:29 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:09:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 1.9 MiB/s wr, 54 op/s Dec 5 05:09:29 localhost podman[315172]: 2025-12-05 10:09:29.300228096 +0000 UTC m=+0.186806744 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_id=edpm, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350) Dec 5 05:09:29 localhost nova_compute[280228]: 2025-12-05 10:09:29.301 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:09:29 localhost podman[315172]: 2025-12-05 10:09:29.312172022 +0000 UTC m=+0.198750650 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public) Dec 5 05:09:29 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:09:29 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:29.442 2 INFO neutron.agent.securitygroups_rpc [None req-eae8b647-71bf-47cb-9ee2-d55d01c759ad 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:09:30 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:30.454 2 INFO neutron.agent.securitygroups_rpc [None req-05412d32-f512-457f-b3b7-2412701ff157 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['65574394-889b-4f60-b5a6-861ef606986b']#033[00m Dec 5 05:09:30 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:30.461 2 INFO neutron.agent.securitygroups_rpc [None req-3f0eca8f-f70c-4289-bab2-b0e4a184fb52 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:09:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:09:30 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:30.910 2 INFO neutron.agent.securitygroups_rpc [None req-a7e7c5af-f839-4053-b22d-d720a4983b2b 5eec71af41824815ba824bf807f8179b 70f3c241260c4833846cef3d99a05e88 - - default default] Security group member updated ['f7939a6f-eb31-4565-924a-cc0204206297']#033[00m Dec 5 05:09:31 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:31.014 2 INFO neutron.agent.securitygroups_rpc [None req-f4184f38-8487-436e-bbf0-7c582f628481 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['65574394-889b-4f60-b5a6-861ef606986b']#033[00m Dec 5 05:09:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s Dec 5 05:09:31 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:31.403 2 INFO neutron.agent.securitygroups_rpc [None req-304c582d-a7b0-44d0-b355-f13cb8cd14b5 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['65574394-889b-4f60-b5a6-861ef606986b']#033[00m Dec 5 05:09:31 localhost neutron_sriov_agent[254996]: 
2025-12-05 10:09:31.530 2 INFO neutron.agent.securitygroups_rpc [None req-4fc067df-7f69-4cc4-bbb3-12ab0516a717 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:09:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e135 do_prune osdmap full prune enabled
Dec 5 05:09:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e136 e136: 6 total, 6 up, 6 in
Dec 5 05:09:31 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e136: 6 total, 6 up, 6 in
Dec 5 05:09:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:31.838 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 5 05:09:31 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:31.916 2 INFO neutron.agent.securitygroups_rpc [None req-6b25967d-8b19-48ec-be6b-a391f88e56ec b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['65574394-889b-4f60-b5a6-861ef606986b']
Dec 5 05:09:32 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:32.190 2 INFO neutron.agent.securitygroups_rpc [None req-d7e347b7-2902-4a8d-8b05-84e3cf6b8a04 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:09:32 localhost nova_compute[280228]: 2025-12-05 10:09:32.289 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:32 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:32.354 2 INFO neutron.agent.securitygroups_rpc [None req-bc302c88-25bf-415a-9ece-bc84897b3bcd b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['65574394-889b-4f60-b5a6-861ef606986b']
Dec 5 05:09:32 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:32.739 2 INFO neutron.agent.securitygroups_rpc [None req-ce798883-5bd6-4fdd-b0b2-80ccf684cae7 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['65574394-889b-4f60-b5a6-861ef606986b']
Dec 5 05:09:32 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:32.934 2 INFO neutron.agent.securitygroups_rpc [None req-78849ced-a6e2-4c6b-9614-71334bc7a8a0 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:09:33 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:33.049 2 INFO neutron.agent.securitygroups_rpc [None req-7235ade3-2e79-4200-8541-afc31859d671 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 5 05:09:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 5 05:09:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e136 do_prune osdmap full prune enabled
Dec 5 05:09:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e137 e137: 6 total, 6 up, 6 in
Dec 5 05:09:33 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e137: 6 total, 6 up, 6 in
Dec 5 05:09:33 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:33.836 2 INFO neutron.agent.securitygroups_rpc [None req-3cf9815e-5a3f-4b4e-977e-5bb4bc2def96 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:09:33 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:33.893 2 INFO neutron.agent.securitygroups_rpc [None req-ba1a05d7-37ec-4f0e-9155-41d8f8a97604 b84a2c64c68d46e79b15d8ef03d73b26 012f487dbd0649d78abde870bf15da89 - - default default] Security group rule updated ['356b635b-6a2c-4448-9058-ed20bc39caf9']
Dec 5 05:09:34 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:34.242 2 INFO neutron.agent.securitygroups_rpc [None req-f40f6030-be3c-4d04-a30b-4ef8ffdee0d1 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:09:34 localhost nova_compute[280228]: 2025-12-05 10:09:34.306 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:34 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:34.326 2 INFO neutron.agent.securitygroups_rpc [None req-785d1cea-a7c3-4660-ad42-92893080a7f7 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 5 05:09:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e137 do_prune osdmap full prune enabled
Dec 5 05:09:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e138 e138: 6 total, 6 up, 6 in
Dec 5 05:09:34 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e138: 6 total, 6 up, 6 in
Dec 5 05:09:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.5 KiB/s wr, 19 op/s
Dec 5 05:09:35 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:35.349 2 INFO neutron.agent.securitygroups_rpc [None req-e470bad7-133a-4174-b852-1f35387ae7ea 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:09:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:09:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e138 do_prune osdmap full prune enabled
Dec 5 05:09:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e139 e139: 6 total, 6 up, 6 in
Dec 5 05:09:36 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e139: 6 total, 6 up, 6 in
Dec 5 05:09:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 6.3 KiB/s wr, 80 op/s
Dec 5 05:09:37 localhost nova_compute[280228]: 2025-12-05 10:09:37.292 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:38 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:38.922 261902 INFO neutron.agent.linux.ip_lib [None req-240da17b-a806-4314-ba2d-4a8e218fea5d - - - - - -] Device tap63289619-38 cannot be used as it has no MAC address
Dec 5 05:09:38 localhost nova_compute[280228]: 2025-12-05 10:09:38.944 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:38 localhost kernel: device tap63289619-38 entered promiscuous mode
Dec 5 05:09:38 localhost ovn_controller[153000]: 2025-12-05T10:09:38Z|00235|binding|INFO|Claiming lport 63289619-3804-4940-b2c6-7d9b519bc720 for this chassis.
Dec 5 05:09:38 localhost ovn_controller[153000]: 2025-12-05T10:09:38Z|00236|binding|INFO|63289619-3804-4940-b2c6-7d9b519bc720: Claiming unknown
Dec 5 05:09:38 localhost nova_compute[280228]: 2025-12-05 10:09:38.953 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:38 localhost NetworkManager[5960]: [1764929378.9536] manager: (tap63289619-38): new Generic device (/org/freedesktop/NetworkManager/Devices/41)
Dec 5 05:09:38 localhost systemd-udevd[315221]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:09:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:38.963 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-202b91b7-8951-4676-ac11-d844ae9d9927', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-202b91b7-8951-4676-ac11-d844ae9d9927', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2a18433-2c66-4fbf-a647-567b3b059428, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=63289619-3804-4940-b2c6-7d9b519bc720) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:09:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:38.966 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 63289619-3804-4940-b2c6-7d9b519bc720 in datapath 202b91b7-8951-4676-ac11-d844ae9d9927 bound to our chassis
Dec 5 05:09:38 localhost nova_compute[280228]: 2025-12-05 10:09:38.969 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:38.971 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port a3a51940-5432-4c3e-a612-11a90addb16a IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 5 05:09:38 localhost ovn_controller[153000]: 2025-12-05T10:09:38Z|00237|binding|INFO|Setting lport 63289619-3804-4940-b2c6-7d9b519bc720 ovn-installed in OVS
Dec 5 05:09:38 localhost ovn_controller[153000]: 2025-12-05T10:09:38Z|00238|binding|INFO|Setting lport 63289619-3804-4940-b2c6-7d9b519bc720 up in Southbound
Dec 5 05:09:38 localhost nova_compute[280228]: 2025-12-05 10:09:38.973 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:38.972 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 202b91b7-8951-4676-ac11-d844ae9d9927, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:09:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:38.974 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[6691310e-df45-4227-a9dc-b2acdc18eb79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:09:38 localhost nova_compute[280228]: 2025-12-05 10:09:38.986 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:38 localhost journal[228791]: ethtool ioctl error on tap63289619-38: No such device
Dec 5 05:09:38 localhost journal[228791]: ethtool ioctl error on tap63289619-38: No such device
Dec 5 05:09:38 localhost journal[228791]: ethtool ioctl error on tap63289619-38: No such device
Dec 5 05:09:39 localhost journal[228791]: ethtool ioctl error on tap63289619-38: No such device
Dec 5 05:09:39 localhost journal[228791]: ethtool ioctl error on tap63289619-38: No such device
Dec 5 05:09:39 localhost journal[228791]: ethtool ioctl error on tap63289619-38: No such device
Dec 5 05:09:39 localhost journal[228791]: ethtool ioctl error on tap63289619-38: No such device
Dec 5 05:09:39 localhost journal[228791]: ethtool ioctl error on tap63289619-38: No such device
Dec 5 05:09:39 localhost nova_compute[280228]: 2025-12-05 10:09:39.023 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:39 localhost nova_compute[280228]: 2025-12-05 10:09:39.049 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v288: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 5.8 KiB/s wr, 74 op/s
Dec 5 05:09:39 localhost nova_compute[280228]: 2025-12-05 10:09:39.307 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:39 localhost podman[315292]:
Dec 5 05:09:39 localhost podman[315292]: 2025-12-05 10:09:39.943122687 +0000 UTC m=+0.066761682 container create 4872c5041a60b24b12d9fb4d5bc98accc455fed52f336ed717f2ad54d3de9c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:09:39 localhost systemd[1]: Started libpod-conmon-4872c5041a60b24b12d9fb4d5bc98accc455fed52f336ed717f2ad54d3de9c0f.scope.
Dec 5 05:09:40 localhost podman[315292]: 2025-12-05 10:09:39.90757079 +0000 UTC m=+0.031209765 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:09:40 localhost systemd[1]: Started libcrun container.
Dec 5 05:09:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d494a4898f1f3a6a9e1e7fafb12eac6472c62462526f7eb7fa9e278fe019b864/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:09:40 localhost podman[315292]: 2025-12-05 10:09:40.030059226 +0000 UTC m=+0.153698201 container init 4872c5041a60b24b12d9fb4d5bc98accc455fed52f336ed717f2ad54d3de9c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 5 05:09:40 localhost podman[315292]: 2025-12-05 10:09:40.039379112 +0000 UTC m=+0.163018087 container start 4872c5041a60b24b12d9fb4d5bc98accc455fed52f336ed717f2ad54d3de9c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:09:40 localhost dnsmasq[315310]: started, version 2.85 cachesize 150
Dec 5 05:09:40 localhost dnsmasq[315310]: DNS service limited to local subnets
Dec 5 05:09:40 localhost dnsmasq[315310]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:09:40 localhost dnsmasq[315310]: warning: no upstream servers configured
Dec 5 05:09:40 localhost dnsmasq-dhcp[315310]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:09:40 localhost dnsmasq[315310]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/addn_hosts - 0 addresses
Dec 5 05:09:40 localhost dnsmasq-dhcp[315310]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/host
Dec 5 05:09:40 localhost dnsmasq-dhcp[315310]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/opts
Dec 5 05:09:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:09:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e139 do_prune osdmap full prune enabled
Dec 5 05:09:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e140 e140: 6 total, 6 up, 6 in
Dec 5 05:09:40 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e140: 6 total, 6 up, 6 in
Dec 5 05:09:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 8.0 KiB/s wr, 113 op/s
Dec 5 05:09:41 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:41.944 261902 INFO neutron.agent.dhcp.agent [None req-8b279612-0069-444b-b415-4f0f9505febb - - - - - -] DHCP configuration for ports {'83bc4f64-fd82-46c7-b04c-f3f7fd3b951d'} is completed
Dec 5 05:09:42 localhost dnsmasq[315310]: exiting on receipt of SIGTERM
Dec 5 05:09:42 localhost podman[315329]: 2025-12-05 10:09:42.158376554 +0000 UTC m=+0.057187770 container kill 4872c5041a60b24b12d9fb4d5bc98accc455fed52f336ed717f2ad54d3de9c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 5 05:09:42 localhost systemd[1]: libpod-4872c5041a60b24b12d9fb4d5bc98accc455fed52f336ed717f2ad54d3de9c0f.scope: Deactivated successfully.
Dec 5 05:09:42 localhost podman[315342]: 2025-12-05 10:09:42.235216155 +0000 UTC m=+0.062624937 container died 4872c5041a60b24b12d9fb4d5bc98accc455fed52f336ed717f2ad54d3de9c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 5 05:09:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4872c5041a60b24b12d9fb4d5bc98accc455fed52f336ed717f2ad54d3de9c0f-userdata-shm.mount: Deactivated successfully.
Dec 5 05:09:42 localhost podman[315342]: 2025-12-05 10:09:42.27003739 +0000 UTC m=+0.097446152 container cleanup 4872c5041a60b24b12d9fb4d5bc98accc455fed52f336ed717f2ad54d3de9c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 05:09:42 localhost systemd[1]: libpod-conmon-4872c5041a60b24b12d9fb4d5bc98accc455fed52f336ed717f2ad54d3de9c0f.scope: Deactivated successfully.
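Annotation: the ovn_metadata_agent record at 10:09:31.838 above shows ovsdbapp committing a DbSetCommand that bumps 'neutron:ovn-metadata-sb-cfg' in Chassis_Private.external_ids. A minimal sketch of issuing the same kind of update through ovsdbapp's idl API; the southbound endpoint below is an assumption (not from the log), and the record UUID is the one logged:

    # Sketch only: mirrors the DbSetCommand(if_exists=True) logged above.
    # Assumes ovsdbapp is installed and an OVN southbound DB is reachable.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB_CONNECTION = 'tcp:127.0.0.1:6642'  # hypothetical endpoint

    idl = connection.OvsdbIdl.from_server(SB_CONNECTION, 'OVN_Southbound')
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # Equivalent of the logged command: set external_ids on a
    # Chassis_Private row, skipping quietly if the row is gone.
    api.db_set(
        'Chassis_Private', '22ecc443-b9ab-4c88-a730-5598bd07d403',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),
        if_exists=True,
    ).execute(check_error=True)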
Dec 5 05:09:42 localhost podman[315344]: 2025-12-05 10:09:42.310464186 +0000 UTC m=+0.130618976 container remove 4872c5041a60b24b12d9fb4d5bc98accc455fed52f336ed717f2ad54d3de9c0f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 5 05:09:42 localhost nova_compute[280228]: 2025-12-05 10:09:42.320 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:42 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:42.794 261902 INFO neutron.agent.linux.ip_lib [None req-8fdab608-d221-41cd-9b6e-c589a2198ede - - - - - -] Device tape95ff63a-aa cannot be used as it has no MAC address
Dec 5 05:09:42 localhost nova_compute[280228]: 2025-12-05 10:09:42.825 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:42 localhost kernel: device tape95ff63a-aa entered promiscuous mode
Dec 5 05:09:42 localhost ovn_controller[153000]: 2025-12-05T10:09:42Z|00239|binding|INFO|Claiming lport e95ff63a-aad0-4fe5-b1cf-d625ac34ec0a for this chassis.
Dec 5 05:09:42 localhost ovn_controller[153000]: 2025-12-05T10:09:42Z|00240|binding|INFO|e95ff63a-aad0-4fe5-b1cf-d625ac34ec0a: Claiming unknown
Dec 5 05:09:42 localhost nova_compute[280228]: 2025-12-05 10:09:42.834 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:42 localhost NetworkManager[5960]: [1764929382.8356] manager: (tape95ff63a-aa): new Generic device (/org/freedesktop/NetworkManager/Devices/42)
Dec 5 05:09:42 localhost systemd-udevd[315382]: Network interface NamePolicy= disabled on kernel command line.
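Annotation: the ip_lib INFO above ("Device tape95ff63a-aa cannot be used as it has no MAC address") is a transient race while OVS finishes wiring the freshly created tap. A quick way to reproduce the same check from the host, assuming iproute2 with JSON output; this is a sketch, not neutron's actual helper:

    # Sketch: poll a new tap device until it reports a MAC address,
    # mirroring the "has no MAC address" check logged by
    # neutron.agent.linux.ip_lib. Device name taken from the log.
    import json
    import subprocess
    import time
    from typing import Optional

    def get_mac(dev: str) -> Optional[str]:
        proc = subprocess.run(['ip', '-json', 'link', 'show', dev],
                              capture_output=True, text=True)
        if proc.returncode != 0:      # device may not exist yet
            return None
        return json.loads(proc.stdout)[0].get('address')

    for _ in range(10):
        mac = get_mac('tape95ff63a-aa')
        if mac:
            print('device ready, MAC', mac)
            break
        time.sleep(0.5)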
Dec 5 05:09:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:42.844 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-8a407ded-58d9-4221-8346-d297c9244eac', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a407ded-58d9-4221-8346-d297c9244eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e4fe9762324442459e16cd8ca78e7d20', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cdd725c-368f-47c1-9779-ad865a815712, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e95ff63a-aad0-4fe5-b1cf-d625ac34ec0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:09:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:42.846 158820 INFO neutron.agent.ovn.metadata.agent [-] Port e95ff63a-aad0-4fe5-b1cf-d625ac34ec0a in datapath 8a407ded-58d9-4221-8346-d297c9244eac bound to our chassis
Dec 5 05:09:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:42.848 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8a407ded-58d9-4221-8346-d297c9244eac or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:09:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:42.849 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[1f012f30-3a54-4a5a-b986-c5063613f07f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:09:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:42.865 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:0b:c8 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-202b91b7-8951-4676-ac11-d844ae9d9927', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-202b91b7-8951-4676-ac11-d844ae9d9927', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2a18433-2c66-4fbf-a647-567b3b059428, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=83bc4f64-fd82-46c7-b04c-f3f7fd3b951d) old=Port_Binding(mac=['fa:16:3e:2a:0b:c8 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-202b91b7-8951-4676-ac11-d844ae9d9927', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-202b91b7-8951-4676-ac11-d844ae9d9927', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:09:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:42.868 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 83bc4f64-fd82-46c7-b04c-f3f7fd3b951d in datapath 202b91b7-8951-4676-ac11-d844ae9d9927 updated
Dec 5 05:09:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:42.871 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port a3a51940-5432-4c3e-a612-11a90addb16a IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 5 05:09:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:42.871 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 202b91b7-8951-4676-ac11-d844ae9d9927, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:09:42 localhost journal[228791]: ethtool ioctl error on tape95ff63a-aa: No such device
Dec 5 05:09:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:42.872 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[44a9d9ad-db53-4069-ad64-3411a5f6131c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:09:42 localhost ovn_controller[153000]: 2025-12-05T10:09:42Z|00241|binding|INFO|Setting lport e95ff63a-aad0-4fe5-b1cf-d625ac34ec0a ovn-installed in OVS
Dec 5 05:09:42 localhost ovn_controller[153000]: 2025-12-05T10:09:42Z|00242|binding|INFO|Setting lport e95ff63a-aad0-4fe5-b1cf-d625ac34ec0a up in Southbound
Dec 5 05:09:42 localhost nova_compute[280228]: 2025-12-05 10:09:42.880 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:42 localhost journal[228791]: ethtool ioctl error on tape95ff63a-aa: No such device
Dec 5 05:09:42 localhost journal[228791]: ethtool ioctl error on tape95ff63a-aa: No such device
Dec 5 05:09:42 localhost journal[228791]: ethtool ioctl error on tape95ff63a-aa: No such device
Dec 5 05:09:42 localhost journal[228791]: ethtool ioctl error on tape95ff63a-aa: No such device
Dec 5 05:09:42 localhost journal[228791]: ethtool ioctl error on tape95ff63a-aa: No such device
Dec 5 05:09:42 localhost journal[228791]: ethtool ioctl error on tape95ff63a-aa: No such device
Dec 5 05:09:42 localhost journal[228791]: ethtool ioctl error on tape95ff63a-aa: No such device
Dec 5 05:09:42 localhost nova_compute[280228]: 2025-12-05 10:09:42.924 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:42 localhost nova_compute[280228]: 2025-12-05 10:09:42.955 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:43 localhost systemd[1]: var-lib-containers-storage-overlay-d494a4898f1f3a6a9e1e7fafb12eac6472c62462526f7eb7fa9e278fe019b864-merged.mount: Deactivated successfully.
Dec 5 05:09:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:09:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 6.5 KiB/s wr, 107 op/s
Dec 5 05:09:43 localhost podman[315420]: 2025-12-05 10:09:43.285085576 +0000 UTC m=+0.090878180 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 5 05:09:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:09:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
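Annotation: the _get_port_ips DEBUG above explains itself once you know that OVN packs the MAC and any fixed IPs into a single Port_Binding "mac" string, e.g. 'fa:16:3e:2a:0b:c8 10.100.0.18 10.100.0.2' in the update just logged; a literal 'unknown' therefore carries no addresses. A small illustrative split (not neutron's exact code):

    # Sketch: pull IPs out of an OVN Port_Binding mac column entry, which
    # packs "<mac> <ip> <ip> ..." into one string. 'unknown' yields nothing,
    # which is why the agent logs "IP addresses were not retrieved".
    from typing import List

    def port_ips(mac_column: List[str]) -> List[str]:
        ips: List[str] = []
        for entry in mac_column:
            fields = entry.split()
            if not fields or fields[0] == 'unknown':
                continue            # no MAC/IP info published yet
            ips.extend(fields[1:])  # everything after the MAC is an address
        return ips

    assert port_ips(['fa:16:3e:2a:0b:c8 10.100.0.18 10.100.0.2']) == ['10.100.0.18', '10.100.0.2']
    assert port_ips(['unknown']) == []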
Dec 5 05:09:43 localhost podman[315420]: 2025-12-05 10:09:43.307652276 +0000 UTC m=+0.113444820 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:09:43 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:09:43 localhost podman[315443]: 2025-12-05 10:09:43.383129455 +0000 UTC m=+0.081125972 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 5 05:09:43 localhost podman[315443]: 2025-12-05 10:09:43.393353758 +0000 UTC m=+0.091350285 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 5 05:09:43 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
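Annotation: each health_status/exec_died pair above is produced by a transient systemd unit running `podman healthcheck run <container>`; exit status 0 is reported as healthy. A sketch of driving the same check by hand (container ID from the log, podman assumed on PATH):

    # Sketch: invoke a container healthcheck the way the transient
    # "Started /usr/bin/podman healthcheck run ..." units above do.
    import subprocess

    CID = '192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6'

    result = subprocess.run(['podman', 'healthcheck', 'run', CID],
                            capture_output=True, text=True)
    # Exit code 0 == healthy; nonzero surfaces as unhealthy in podman events.
    print('healthy' if result.returncode == 0
          else 'unhealthy: ' + (result.stdout or result.stderr))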
Dec 5 05:09:43 localhost podman[315444]: 2025-12-05 10:09:43.441605324 +0000 UTC m=+0.135540237 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 5 05:09:43 localhost podman[315444]: 2025-12-05 10:09:43.445525244 +0000 UTC m=+0.139460177 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 5 05:09:43 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:09:43 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:43.609 2 INFO neutron.agent.securitygroups_rpc [None req-9167f274-b528-4220-b67e-800e25b79970 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 5 05:09:43 localhost podman[315511]:
Dec 5 05:09:43 localhost podman[315511]: 2025-12-05 10:09:43.745095967 +0000 UTC m=+0.062923416 container create 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 5 05:09:43 localhost systemd[1]: Started libpod-conmon-0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843.scope.
Dec 5 05:09:43 localhost systemd[1]: Started libcrun container.
Dec 5 05:09:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d5276eb3381dad37e42f81db0a34eb6d80de189ac087f3677b12c2058dce2f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:09:43 localhost podman[315511]: 2025-12-05 10:09:43.711366495 +0000 UTC m=+0.029193904 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:09:43 localhost podman[315511]: 2025-12-05 10:09:43.812337914 +0000 UTC m=+0.130165453 container init 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 5 05:09:43 localhost podman[315511]: 2025-12-05 10:09:43.821551675 +0000 UTC m=+0.139379134 container start 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 5 05:09:43 localhost dnsmasq[315541]: started, version 2.85 cachesize 150
Dec 5 05:09:43 localhost dnsmasq[315541]: DNS service limited to local subnets
Dec 5 05:09:43 localhost dnsmasq[315541]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:09:43 localhost dnsmasq[315541]: warning: no upstream servers configured
Dec 5 05:09:43 localhost dnsmasq-dhcp[315541]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 5 05:09:43 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 0 addresses
Dec 5 05:09:43 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:09:43 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:09:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:43.896 261902 INFO neutron.agent.dhcp.agent [None req-8fdab608-d221-41cd-9b6e-c589a2198ede - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:42Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c17002ef-6c43-42fa-bc29-a0b42329e4d4, ip_allocation=immediate, mac_address=fa:16:3e:10:20:19, name=tempest-AllowedAddressPairIpV6TestJSON-1638456202, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:37Z, description=, dns_domain=, id=8a407ded-58d9-4221-8346-d297c9244eac, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-812711989, port_security_enabled=True, project_id=e4fe9762324442459e16cd8ca78e7d20, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38052, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1853, status=ACTIVE, subnets=['dc4232c0-b0b9-4249-af2b-cb60847d53c2'], tags=[], tenant_id=e4fe9762324442459e16cd8ca78e7d20, updated_at=2025-12-05T10:09:39Z, vlan_transparent=None, network_id=8a407ded-58d9-4221-8346-d297c9244eac, port_security_enabled=True, project_id=e4fe9762324442459e16cd8ca78e7d20, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['01225a20-d932-440b-b264-e7c785b61e12'], standard_attr_id=1887, status=DOWN, tags=[], tenant_id=e4fe9762324442459e16cd8ca78e7d20, updated_at=2025-12-05T10:09:42Z on network 8a407ded-58d9-4221-8346-d297c9244eac
Dec 5 05:09:44 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 1 addresses
Dec 5 05:09:44 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:09:44 localhost podman[315570]: 2025-12-05 10:09:44.067658083 +0000 UTC m=+0.061072669 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 5 05:09:44 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:09:44 localhost systemd[1]: tmp-crun.Xzg41c.mount: Deactivated successfully.
Dec 5 05:09:44 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:44.313 261902 INFO neutron.agent.dhcp.agent [None req-ce4af2e1-ba8d-4856-9c48-d0ca045d5f41 - - - - - -] DHCP configuration for ports {'6ed62260-5e86-46ca-aa63-d8ef627acde6'} is completed
Dec 5 05:09:44 localhost nova_compute[280228]: 2025-12-05 10:09:44.351 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:44 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:44.470 261902 INFO neutron.agent.dhcp.agent [None req-6f414cc0-1d68-4001-8975-d77564676ebe - - - - - -] DHCP configuration for ports {'c17002ef-6c43-42fa-bc29-a0b42329e4d4'} is completed
Dec 5 05:09:44 localhost podman[315619]:
Dec 5 05:09:44 localhost podman[315619]: 2025-12-05 10:09:44.733969094 +0000 UTC m=+0.087828068 container create cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:09:44 localhost systemd[1]: Started libpod-conmon-cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138.scope.
Dec 5 05:09:44 localhost podman[315619]: 2025-12-05 10:09:44.688454031 +0000 UTC m=+0.042312985 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:09:44 localhost systemd[1]: Started libcrun container.
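Annotation: the pattern above (podman "container kill" on the dnsmasq container followed immediately by dnsmasq re-reading addn_hosts/host/opts) is how the DHCP agent pushes new allocations: it rewrites the files under /var/lib/neutron/dhcp/<network>/ and signals dnsmasq with SIGHUP instead of restarting it; dnsmasq re-reads its hosts and options files on SIGHUP. A sketch under that assumption; the hosts-file entry format here is illustrative, and the PID is taken from the log rather than a pid file:

    # Sketch: add a static lease entry and nudge dnsmasq to re-read it,
    # matching the "container kill" -> "read .../host" sequence above.
    import os
    import signal

    NET_DIR = '/var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac'
    DNSMASQ_PID = 315541  # from the log; normally read from the pid file

    with open(os.path.join(NET_DIR, 'host'), 'a') as f:
        # illustrative entry layout: MAC, hostname, address
        f.write('fa:16:3e:10:20:19,host-2001-db8--10,[2001:db8::10]\n')

    os.kill(DNSMASQ_PID, signal.SIGHUP)  # dnsmasq re-reads hosts/opts on HUP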
Dec 5 05:09:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9e8b1c8dba2c84cd4d1bc0d9e4170f112cf5f93cf75ed34ec573c90944b8bcb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:09:44 localhost podman[315619]: 2025-12-05 10:09:44.807591055 +0000 UTC m=+0.161450019 container init cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 5 05:09:44 localhost podman[315619]: 2025-12-05 10:09:44.816660432 +0000 UTC m=+0.170519426 container start cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:09:44 localhost dnsmasq[315637]: started, version 2.85 cachesize 150
Dec 5 05:09:44 localhost dnsmasq[315637]: DNS service limited to local subnets
Dec 5 05:09:44 localhost dnsmasq[315637]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:09:44 localhost dnsmasq[315637]: warning: no upstream servers configured
Dec 5 05:09:44 localhost dnsmasq-dhcp[315637]: DHCP, static leases only on 10.100.0.16, lease time 1d
Dec 5 05:09:44 localhost dnsmasq-dhcp[315637]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:09:44 localhost dnsmasq[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/addn_hosts - 0 addresses
Dec 5 05:09:44 localhost dnsmasq-dhcp[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/host
Dec 5 05:09:44 localhost dnsmasq-dhcp[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/opts
Dec 5 05:09:44 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:44.923 2 INFO neutron.agent.securitygroups_rpc [None req-5113bc8c-3b86-4908-a95b-dea9f14ab9cb 71a1dc47c8964743be7069896a3eb55e f6db8cbac53645ef9430332056699027 - - default default] Security group member updated ['007023a4-4e47-4b10-af03-71f50f200b06']
Dec 5 05:09:44 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:44.976 261902 INFO neutron.agent.dhcp.agent [None req-649e1401-4444-4d23-bb2e-8c7852cd4729 - - - - - -] DHCP configuration for ports {'63289619-3804-4940-b2c6-7d9b519bc720', '83bc4f64-fd82-46c7-b04c-f3f7fd3b951d'} is completed
Dec 5 05:09:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:09:45
Dec 5 05:09:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 5 05:09:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap
Dec 5 05:09:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['images', 'volumes', 'manila_data', 'manila_metadata', 'backups', 'vms', '.mgr']
Dec 5 05:09:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes
Dec 5 05:09:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:09:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:09:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:09:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:09:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:09:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:09:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 2.0 KiB/s wr, 48 op/s
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32)
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:09:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Dec 5 05:09:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 5 05:09:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:09:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 5 05:09:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:09:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:09:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:09:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:09:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:09:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:09:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:09:45 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:45.629 2 INFO neutron.agent.securitygroups_rpc [None req-6135443b-d502-4e12-9a62-2eade4e6d3ea 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 5 05:09:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:09:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e140 do_prune osdmap full prune enabled
Dec 5 05:09:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e141 e141: 6 total, 6 up, 6 in
Dec 5 05:09:45 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e141: 6 total, 6 up, 6 in
Dec 5 05:09:45 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:45.693 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:44Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cb85e020-af48-4192-aeab-0981176ad947, ip_allocation=immediate, mac_address=fa:16:3e:4b:27:ac, name=tempest-AllowedAddressPairIpV6TestJSON-693669803, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:37Z, description=, dns_domain=, id=8a407ded-58d9-4221-8346-d297c9244eac, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-812711989, port_security_enabled=True, project_id=e4fe9762324442459e16cd8ca78e7d20, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38052, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1853, status=ACTIVE, subnets=['dc4232c0-b0b9-4249-af2b-cb60847d53c2'], tags=[], tenant_id=e4fe9762324442459e16cd8ca78e7d20, updated_at=2025-12-05T10:09:39Z, vlan_transparent=None, network_id=8a407ded-58d9-4221-8346-d297c9244eac, port_security_enabled=True, project_id=e4fe9762324442459e16cd8ca78e7d20, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['01225a20-d932-440b-b264-e7c785b61e12'], standard_attr_id=1897, status=DOWN, tags=[], tenant_id=e4fe9762324442459e16cd8ca78e7d20, updated_at=2025-12-05T10:09:45Z on network 8a407ded-58d9-4221-8346-d297c9244eac
Dec 5 05:09:45 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:45.831 2 INFO neutron.agent.securitygroups_rpc [None req-fe53869a-662d-4fc8-99be-d69dbe8fa9ee 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:09:45 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 2 addresses
Dec 5 05:09:45 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:09:45 localhost podman[315657]: 2025-12-05 10:09:45.892375465 +0000 UTC m=+0.060505532 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 5 05:09:45 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:09:45 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:45.951 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:44Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=7b8154ca-bcdb-4e57-aeea-9abb9b19fda3, ip_allocation=immediate, mac_address=fa:16:3e:4f:aa:c7, name=tempest-PortsTestJSON-897525491, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:35Z, description=, dns_domain=, id=202b91b7-8951-4676-ac11-d844ae9d9927, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-392176401, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55495, qos_policy_id=None, revision_number=3, router:external=False, shared=False, standard_attr_id=1831, status=ACTIVE, subnets=['05c1186c-5c4d-4f8d-9b6b-94550d5d52d8', '5dee4710-43d9-489f-b22a-08b0e65f37bc'], tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:09:38Z, vlan_transparent=None, network_id=202b91b7-8951-4676-ac11-d844ae9d9927, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['6de03d90-430f-407b-8d0d-b1ca66c7d4e8'], standard_attr_id=1896, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:09:45Z on network 202b91b7-8951-4676-ac11-d844ae9d9927
Dec 5 05:09:46 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:46.140 261902 INFO neutron.agent.dhcp.agent [None req-83d7ec27-9171-49bf-92d3-1f75a0cc58e3 - - - - - -] DHCP configuration for ports {'cb85e020-af48-4192-aeab-0981176ad947'} is completed
Dec 5 05:09:46 localhost systemd[1]: tmp-crun.NnaYsU.mount: Deactivated successfully.
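Annotation: the pg_autoscaler records above contain a complete worked example. For pool 'vms', the logged capacity ratio times the cluster's PG budget reproduces the logged "pg target" exactly, assuming the default mon_target_pg_per_osd=100 and size-3 replication on this 6-OSD cluster (both assumptions, consistent with the "6 total, 6 up, 6 in" osdmaps):

    # Worked check of the pg_autoscaler numbers logged above for pool 'vms'.
    capacity_ratio = 0.0033250017448352874   # "using ... of space" from the log
    osds, target_pg_per_osd, pool_size, bias = 6, 100, 3, 1.0

    pg_target = capacity_ratio * bias * osds * target_pg_per_osd / pool_size
    print(pg_target)   # 0.6650003489670575 -- matches the logged "pg target"

    # The raw target is then quantized to a power of two, subject to the
    # pool's minimum, which is how 0.665 ends up logged as
    # "quantized to 32 (current 32)".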
Dec 5 05:09:46 localhost dnsmasq[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/addn_hosts - 2 addresses
Dec 5 05:09:46 localhost dnsmasq-dhcp[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/host
Dec 5 05:09:46 localhost dnsmasq-dhcp[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/opts
Dec 5 05:09:46 localhost podman[315692]: 2025-12-05 10:09:46.245230248 +0000 UTC m=+0.074017494 container kill cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:09:46 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:46.495 261902 INFO neutron.agent.dhcp.agent [None req-aef20415-29a3-49ff-9289-666c9ad688c4 - - - - - -] DHCP configuration for ports {'7b8154ca-bcdb-4e57-aeea-9abb9b19fda3'} is completed
Dec 5 05:09:46 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:46.532 2 INFO neutron.agent.securitygroups_rpc [None req-d97a5b35-e515-4748-b589-0a6808590190 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 5 05:09:46 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 1 addresses
Dec 5 05:09:46 localhost podman[315732]: 2025-12-05 10:09:46.771936048 +0000 UTC m=+0.059564512 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 5 05:09:46 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:09:46 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:09:47 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:47.144 2 INFO neutron.agent.securitygroups_rpc [None req-473cdf56-0e01-4e1c-a4f3-92d7eeef2d53 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:09:47 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:47.168 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:44Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7b8154ca-bcdb-4e57-aeea-9abb9b19fda3, ip_allocation=immediate, mac_address=fa:16:3e:4f:aa:c7, name=tempest-PortsTestJSON-897525491, network_id=202b91b7-8951-4676-ac11-d844ae9d9927, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['6de03d90-430f-407b-8d0d-b1ca66c7d4e8'], standard_attr_id=1896, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:09:46Z on network 202b91b7-8951-4676-ac11-d844ae9d9927
Dec 5 05:09:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 2.1 KiB/s wr, 50 op/s
Dec 5 05:09:47 localhost nova_compute[280228]: 2025-12-05 10:09:47.323 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:47 localhost dnsmasq[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/addn_hosts - 1 addresses
Dec 5 05:09:47 localhost dnsmasq-dhcp[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/host
Dec 5 05:09:47 localhost dnsmasq-dhcp[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/opts
Dec 5 05:09:47 localhost podman[315767]: 2025-12-05 10:09:47.551963987 +0000 UTC m=+0.045317648 container kill cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:09:47 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:47.754 261902 INFO neutron.agent.dhcp.agent [None req-12133298-5b98-4704-9abe-11ec07e0f07e - - - - - -] DHCP configuration for ports {'7b8154ca-bcdb-4e57-aeea-9abb9b19fda3'} is completed
Dec 5 05:09:47 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:09:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:09:47 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:47 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:47 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:47 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:47 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:47 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:47.800+0000 7f996f03a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:47 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:47.800+0000 7f996f03a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:47 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:47.800+0000 7f996f03a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:47 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:47.800+0000 7f996f03a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:47 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:47.800+0000 7f996f03a640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:47 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp'
Dec 5 05:09:47 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta'
Dec 5 05:09:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:09:47 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "format": "json"}]: dispatch
Dec 5 05:09:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:09:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:09:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:09:47 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:09:47 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:47.898 2 INFO neutron.agent.securitygroups_rpc [None req-9d4ec5a5-1538-4ac4-baa6-e4ad3b85127c 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 5 05:09:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:48.006 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:47Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=23d34c3a-dbd1-4582-992d-b7b147ff875b, ip_allocation=immediate, mac_address=fa:16:3e:ff:63:0e, name=tempest-AllowedAddressPairIpV6TestJSON-1496864829, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:37Z, description=, dns_domain=, id=8a407ded-58d9-4221-8346-d297c9244eac, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-812711989, port_security_enabled=True, project_id=e4fe9762324442459e16cd8ca78e7d20, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38052, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1853, status=ACTIVE, subnets=['dc4232c0-b0b9-4249-af2b-cb60847d53c2'], tags=[], tenant_id=e4fe9762324442459e16cd8ca78e7d20, updated_at=2025-12-05T10:09:39Z, vlan_transparent=None, network_id=8a407ded-58d9-4221-8346-d297c9244eac, port_security_enabled=True, project_id=e4fe9762324442459e16cd8ca78e7d20, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['01225a20-d932-440b-b264-e7c785b61e12'], standard_attr_id=1910, status=DOWN, tags=[], tenant_id=e4fe9762324442459e16cd8ca78e7d20, updated_at=2025-12-05T10:09:47Z on network 8a407ded-58d9-4221-8346-d297c9244eac
Dec 5 05:09:48 localhost podman[315821]: 2025-12-05 10:09:48.189064644 +0000 UTC m=+0.053704074 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 5 05:09:48 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 2 addresses
Dec 5 05:09:48 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:09:48 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:09:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:48.408 261902 INFO neutron.agent.dhcp.agent [None req-2246adb8-01f5-40c3-8b5d-f603b7c325d3 - - - - - -] DHCP configuration for ports {'23d34c3a-dbd1-4582-992d-b7b147ff875b'} is completed
Dec 5 05:09:48 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:48.712 2 INFO neutron.agent.securitygroups_rpc [None req-27722011-69f1-466e-9f91-5c5a0d5b82ea b1808a8aad0b4360880c390bd8362a00 74a17deba84b470ebe240fab8c99b64c - - default default] Security group member updated ['f197cddb-ae8c-4586-ad83-8ebb4f64e04c']
Dec 5 05:09:49 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:49.004 2 INFO neutron.agent.securitygroups_rpc [None req-957875af-a66a-4345-aa18-8b11fd11a0a9 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:09:49 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:49.047 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:44Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=7b8154ca-bcdb-4e57-aeea-9abb9b19fda3, ip_allocation=immediate, mac_address=fa:16:3e:4f:aa:c7, name=tempest-PortsTestJSON-897525491, network_id=202b91b7-8951-4676-ac11-d844ae9d9927, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['6de03d90-430f-407b-8d0d-b1ca66c7d4e8'], standard_attr_id=1896, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:09:48Z on network 202b91b7-8951-4676-ac11-d844ae9d9927
Dec 5 05:09:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 47 op/s
Dec 5 05:09:49 localhost dnsmasq[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/addn_hosts - 2 addresses
Dec 5 05:09:49 localhost dnsmasq-dhcp[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/host
Dec 5 05:09:49 localhost dnsmasq-dhcp[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/opts
Dec 5 05:09:49 localhost podman[315860]: 2025-12-05 10:09:49.295491145 +0000 UTC m=+0.058399597 container kill cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 5 05:09:49 localhost nova_compute[280228]: 2025-12-05 10:09:49.397 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.432013) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929389432059, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2261, "num_deletes": 256, "total_data_size": 2548313, "memory_usage": 2596080, "flush_reason": "Manual Compaction"}
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929389450307, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 2476976, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26119, "largest_seqno": 28379, "table_properties": {"data_size": 2467689, "index_size": 5792, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20212, "raw_average_key_size": 21, "raw_value_size": 2448692, "raw_average_value_size": 2577, "num_data_blocks": 249, "num_entries": 950, "num_filter_entries": 950, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929217, "oldest_key_time": 1764929217, "file_creation_time": 1764929389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 18364 microseconds, and 8123 cpu microseconds.
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.450374) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 2476976 bytes OK
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.450401) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.451971) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.451989) EVENT_LOG_v1 {"time_micros": 1764929389451983, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.452012) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2538866, prev total WAL file size 2538866, number of live WAL files 2.
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.453011) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. '7061786F73003132303439' seq:0, type:0; will stop at (end)
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(2418KB)], [45(15MB)]
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929389453045, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 18984887, "oldest_snapshot_seqno": -1}
Dec 5 05:09:49 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e49: np0005546419.zhsnqq(active, since 8m), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 12493 keys, 17029199 bytes, temperature: kUnknown
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929389543093, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 17029199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16957515, "index_size": 39327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31301, "raw_key_size": 333975, "raw_average_key_size": 26, "raw_value_size": 16744345, "raw_average_value_size": 1340, "num_data_blocks": 1498, "num_entries": 12493, "num_filter_entries": 12493, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764929389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.543526) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 17029199 bytes
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.545134) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.3 rd, 188.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 15.7 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(14.5) write-amplify(6.9) OK, records in: 13030, records dropped: 537 output_compression: NoCompression
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.545161) EVENT_LOG_v1 {"time_micros": 1764929389545148, "job": 26, "event": "compaction_finished", "compaction_time_micros": 90255, "compaction_time_cpu_micros": 45507, "output_level": 6, "num_output_files": 1, "total_output_size": 17029199, "num_input_records": 13030, "num_output_records": 12493, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929389545768, "job": 26, "event": "table_file_deletion", "file_number": 47}
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929389549646, "job": 26, "event": "table_file_deletion", "file_number": 45}
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.452907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.549722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.549730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.549733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.549736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:09:49 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:09:49.549739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:09:49 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:49.604 261902 INFO neutron.agent.dhcp.agent [None req-bec2751f-6154-4341-8845-bb1eca26145d - - - - - -] DHCP configuration for ports {'7b8154ca-bcdb-4e57-aeea-9abb9b19fda3'} is completed
Dec 5 05:09:49 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:49.845 2 INFO neutron.agent.securitygroups_rpc [None req-cce2b7c3-75f1-4ea1-897e-6d61b0525126 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 5 05:09:49 localhost podman[239519]: time="2025-12-05T10:09:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:09:49 localhost podman[239519]: @ - - [05/Dec/2025:10:09:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159857 "" "Go-http-client/1.1"
Dec 5 05:09:49 localhost podman[239519]: @ - - [05/Dec/2025:10:09:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20196 "" "Go-http-client/1.1"
Dec 5 05:09:50 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 1 addresses
Dec 5 05:09:50 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:09:50 localhost podman[315898]: 2025-12-05 10:09:50.10154999 +0000 UTC m=+0.061077059 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:09:50 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:09:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 05:09:50 localhost podman[315910]: 2025-12-05 10:09:50.217567478 +0000 UTC m=+0.096801572 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 05:09:50 localhost podman[315910]: 2025-12-05 10:09:50.230668699 +0000 UTC m=+0.109902833 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 05:09:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:09:50 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 05:09:50 localhost podman[315940]: 2025-12-05 10:09:50.353233587 +0000 UTC m=+0.093670765 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 5 05:09:50 localhost podman[315940]: 2025-12-05 10:09:50.41870427 +0000 UTC m=+0.159141418 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 5 05:09:50 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:09:50 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:50.643 2 INFO neutron.agent.securitygroups_rpc [None req-029f581c-71e2-4ce7-b914-2c137b7b6c2a 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:09:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:09:50 localhost dnsmasq[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/addn_hosts - 0 addresses
Dec 5 05:09:50 localhost podman[315983]: 2025-12-05 10:09:50.916227738 +0000 UTC m=+0.048533616 container kill cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:09:50 localhost dnsmasq-dhcp[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/host
Dec 5 05:09:50 localhost dnsmasq-dhcp[315637]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/opts
Dec 5 05:09:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v296: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 2.5 KiB/s wr, 13 op/s
Dec 5 05:09:51 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:51.411 2 INFO neutron.agent.securitygroups_rpc [None req-af3b553c-8339-4731-9e0d-445f98eb7889 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']
Dec 5 05:09:51 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:51.500 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:50Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a094e4e1-dfeb-4cf6-9d0b-1e1efeb0da79, ip_allocation=immediate, mac_address=fa:16:3e:45:b6:d5, name=tempest-AllowedAddressPairIpV6TestJSON-1641294971, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:37Z, description=, dns_domain=, id=8a407ded-58d9-4221-8346-d297c9244eac, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-812711989, port_security_enabled=True, project_id=e4fe9762324442459e16cd8ca78e7d20, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38052, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1853, status=ACTIVE, subnets=['dc4232c0-b0b9-4249-af2b-cb60847d53c2'], tags=[], tenant_id=e4fe9762324442459e16cd8ca78e7d20, updated_at=2025-12-05T10:09:39Z, vlan_transparent=None, network_id=8a407ded-58d9-4221-8346-d297c9244eac, port_security_enabled=True, project_id=e4fe9762324442459e16cd8ca78e7d20, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['01225a20-d932-440b-b264-e7c785b61e12'], standard_attr_id=1916, status=DOWN, tags=[], tenant_id=e4fe9762324442459e16cd8ca78e7d20, updated_at=2025-12-05T10:09:50Z on network 8a407ded-58d9-4221-8346-d297c9244eac
Dec 5 05:09:51 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 2 addresses
Dec 5 05:09:51 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:09:51 localhost podman[316021]: 2025-12-05 10:09:51.67774322 +0000 UTC m=+0.050356181 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 5 05:09:51 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:09:51 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:51.969 261902 INFO neutron.agent.dhcp.agent [None req-b7659c01-7f62-4139-ab4b-ec69bc3a9ba2 - - - - - -] DHCP configuration for ports {'a094e4e1-dfeb-4cf6-9d0b-1e1efeb0da79'} is completed
Dec 5 05:09:52 localhost dnsmasq[315637]: exiting on receipt of SIGTERM
Dec 5 05:09:52 localhost podman[316059]: 2025-12-05 10:09:52.170560053 +0000 UTC m=+0.045227724 container kill cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 5 05:09:52 localhost systemd[1]: libpod-cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138.scope: Deactivated successfully.
Dec 5 05:09:52 localhost podman[316073]: 2025-12-05 10:09:52.227860667 +0000 UTC m=+0.045256536 container died cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:09:52 localhost podman[316073]: 2025-12-05 10:09:52.272294736 +0000 UTC m=+0.089690585 container cleanup cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 5 05:09:52 localhost systemd[1]: libpod-conmon-cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138.scope: Deactivated successfully.
Dec 5 05:09:52 localhost podman[316075]: 2025-12-05 10:09:52.307882424 +0000 UTC m=+0.120801696 container remove cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 5 05:09:52 localhost nova_compute[280228]: 2025-12-05 10:09:52.357 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "64fc08e4-0b7c-4366-893e-01887e7442f6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:09:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:64fc08e4-0b7c-4366-893e-01887e7442f6, vol_name:cephfs) < ""
Dec 5 05:09:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/64fc08e4-0b7c-4366-893e-01887e7442f6/.meta.tmp'
Dec 5 05:09:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/64fc08e4-0b7c-4366-893e-01887e7442f6/.meta.tmp' to config b'/volumes/_nogroup/64fc08e4-0b7c-4366-893e-01887e7442f6/.meta'
Dec 5 05:09:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:64fc08e4-0b7c-4366-893e-01887e7442f6, vol_name:cephfs) < ""
Dec 5 05:09:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "64fc08e4-0b7c-4366-893e-01887e7442f6", "format": "json"}]: dispatch
Dec 5 05:09:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:64fc08e4-0b7c-4366-893e-01887e7442f6, vol_name:cephfs) < ""
Dec 5 05:09:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:64fc08e4-0b7c-4366-893e-01887e7442f6, vol_name:cephfs) < ""
Dec 5 05:09:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:09:52 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:09:52 localhost systemd[1]: var-lib-containers-storage-overlay-e9e8b1c8dba2c84cd4d1bc0d9e4170f112cf5f93cf75ed34ec573c90944b8bcb-merged.mount: Deactivated successfully.
Dec 5 05:09:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb8d3cbdb3e0d687913e3d7d09b5a3846941c3653f0ac7498a9abd3864387138-userdata-shm.mount: Deactivated successfully.
Dec 5 05:09:53 localhost podman[316154]:
Dec 5 05:09:53 localhost podman[316154]: 2025-12-05 10:09:53.194191804 +0000 UTC m=+0.089481868 container create 910f922b644e1565b3bd33772e13d0f58a8ccea6f74b5b41f58ab401abdbef2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 5 05:09:53 localhost systemd[1]: Started libpod-conmon-910f922b644e1565b3bd33772e13d0f58a8ccea6f74b5b41f58ab401abdbef2e.scope.
Dec 5 05:09:53 localhost podman[316154]: 2025-12-05 10:09:53.154987535 +0000 UTC m=+0.050277649 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:09:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 2.5 KiB/s wr, 0 op/s
Dec 5 05:09:53 localhost systemd[1]: Started libcrun container.
Dec 5 05:09:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a51f192f631f9f26066da2070a21652f68dfb89985f51f0ca22e28a240185897/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:09:53 localhost podman[316154]: 2025-12-05 10:09:53.277009026 +0000 UTC m=+0.172299130 container init 910f922b644e1565b3bd33772e13d0f58a8ccea6f74b5b41f58ab401abdbef2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:09:53 localhost podman[316154]: 2025-12-05 10:09:53.287050534 +0000 UTC m=+0.182340658 container start 910f922b644e1565b3bd33772e13d0f58a8ccea6f74b5b41f58ab401abdbef2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 5 05:09:53 localhost dnsmasq[316172]: started, version 2.85 cachesize 150
Dec 5 05:09:53 localhost dnsmasq[316172]: DNS service limited to local subnets
Dec 5 05:09:53 localhost dnsmasq[316172]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:09:53 localhost dnsmasq[316172]: warning: no upstream servers configured
Dec 5 05:09:53 localhost dnsmasq-dhcp[316172]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:09:53 localhost dnsmasq[316172]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/addn_hosts - 0 addresses
Dec 5 05:09:53 localhost dnsmasq-dhcp[316172]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/host
Dec 5 05:09:53 localhost dnsmasq-dhcp[316172]: read /var/lib/neutron/dhcp/202b91b7-8951-4676-ac11-d844ae9d9927/opts
Dec 5 05:09:53 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:53.514 261902 INFO neutron.agent.dhcp.agent [None req-b85a4ec1-06f1-4d06-8413-196da077534d - - - - - -] DHCP configuration for ports {'63289619-3804-4940-b2c6-7d9b519bc720', '83bc4f64-fd82-46c7-b04c-f3f7fd3b951d'} is completed
Dec 5 05:09:53 localhost dnsmasq[316172]: exiting on receipt of SIGTERM
Dec 5 05:09:53 localhost podman[316190]: 2025-12-05 10:09:53.616191471 +0000 UTC m=+0.051584228 container kill 910f922b644e1565b3bd33772e13d0f58a8ccea6f74b5b41f58ab401abdbef2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 5 05:09:53 localhost systemd[1]: libpod-910f922b644e1565b3bd33772e13d0f58a8ccea6f74b5b41f58ab401abdbef2e.scope: Deactivated successfully.
Dec 5 05:09:53 localhost podman[316206]: 2025-12-05 10:09:53.671460542 +0000 UTC m=+0.038488219 container died 910f922b644e1565b3bd33772e13d0f58a8ccea6f74b5b41f58ab401abdbef2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:09:53 localhost systemd[1]: var-lib-containers-storage-overlay-a51f192f631f9f26066da2070a21652f68dfb89985f51f0ca22e28a240185897-merged.mount: Deactivated successfully.
Dec 5 05:09:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-910f922b644e1565b3bd33772e13d0f58a8ccea6f74b5b41f58ab401abdbef2e-userdata-shm.mount: Deactivated successfully.
Dec 5 05:09:53 localhost podman[316206]: 2025-12-05 10:09:53.70999206 +0000 UTC m=+0.077019727 container remove 910f922b644e1565b3bd33772e13d0f58a8ccea6f74b5b41f58ab401abdbef2e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-202b91b7-8951-4676-ac11-d844ae9d9927, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:09:53 localhost ovn_controller[153000]: 2025-12-05T10:09:53Z|00243|binding|INFO|Releasing lport 63289619-3804-4940-b2c6-7d9b519bc720 from this chassis (sb_readonly=0)
Dec 5 05:09:53 localhost ovn_controller[153000]: 2025-12-05T10:09:53Z|00244|binding|INFO|Setting lport 63289619-3804-4940-b2c6-7d9b519bc720 down in Southbound
Dec 5 05:09:53 localhost kernel: device tap63289619-38 left promiscuous mode
Dec 5 05:09:53 localhost nova_compute[280228]: 2025-12-05 10:09:53.778 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:53 localhost systemd[1]: libpod-conmon-910f922b644e1565b3bd33772e13d0f58a8ccea6f74b5b41f58ab401abdbef2e.scope: Deactivated successfully.
Dec 5 05:09:53 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:53.791 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-202b91b7-8951-4676-ac11-d844ae9d9927', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-202b91b7-8951-4676-ac11-d844ae9d9927', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2a18433-2c66-4fbf-a647-567b3b059428, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=63289619-3804-4940-b2c6-7d9b519bc720) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:09:53 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:53.795 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 63289619-3804-4940-b2c6-7d9b519bc720 in datapath 202b91b7-8951-4676-ac11-d844ae9d9927 unbound from our chassis
Dec 5 05:09:53 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:53.798 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 202b91b7-8951-4676-ac11-d844ae9d9927, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:09:53 localhost nova_compute[280228]: 2025-12-05 10:09:53.798 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:53 localhost ovn_metadata_agent[158815]: 2025-12-05 10:09:53.800 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[3630041c-2fd2-495c-8c77-360fae3f227e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:09:54 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:54.098 261902 INFO neutron.agent.dhcp.agent [None req-98901413-5924-40b3-a9a1-30a569ed4e18 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:09:54 localhost nova_compute[280228]: 2025-12-05 10:09:54.400 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:09:54 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:54.571 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:09:54 localhost systemd[1]: run-netns-qdhcp\x2d202b91b7\x2d8951\x2d4676\x2dac11\x2dd844ae9d9927.mount: Deactivated successfully.
Dec 5 05:09:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 2.5 KiB/s wr, 0 op/s
Dec 5 05:09:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:09:56 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:56.435 2 INFO neutron.agent.securitygroups_rpc [None req-399fedc7-2e19-4b21-a4fb-b5420b7d54a3 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']#033[00m
Dec 5 05:09:56 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:56.525 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:09:56 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "64fc08e4-0b7c-4366-893e-01887e7442f6", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 5 05:09:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:64fc08e4-0b7c-4366-893e-01887e7442f6, vol_name:cephfs) < ""
Dec 5 05:09:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:64fc08e4-0b7c-4366-893e-01887e7442f6, vol_name:cephfs) < ""
Dec 5 05:09:56 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 1 addresses
Dec 5 05:09:56 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:09:56 localhost podman[316250]: 2025-12-05 10:09:56.695103356 +0000 UTC m=+0.064973318 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 5 05:09:56 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:09:57 localhost openstack_network_exporter[241668]: ERROR 10:09:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:09:57 localhost openstack_network_exporter[241668]: ERROR 10:09:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:09:57 localhost openstack_network_exporter[241668]: ERROR 10:09:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:09:57 localhost openstack_network_exporter[241668]: ERROR 10:09:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:09:57 localhost openstack_network_exporter[241668]:
Dec 5 05:09:57 localhost openstack_network_exporter[241668]: ERROR 10:09:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:09:57 localhost openstack_network_exporter[241668]:
Dec 5 05:09:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 4.4 KiB/s wr, 1 op/s
Dec 5 05:09:57 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "64fc08e4-0b7c-4366-893e-01887e7442f6", "format": "json"}]: dispatch
Dec 5 05:09:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:64fc08e4-0b7c-4366-893e-01887e7442f6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:09:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:64fc08e4-0b7c-4366-893e-01887e7442f6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:09:57 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '64fc08e4-0b7c-4366-893e-01887e7442f6' of type subvolume
Dec 5 05:09:57 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:57.347+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '64fc08e4-0b7c-4366-893e-01887e7442f6' of type subvolume
Dec 5 05:09:57 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "64fc08e4-0b7c-4366-893e-01887e7442f6", "force": true, "format": "json"}]: dispatch
Dec 5 05:09:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:64fc08e4-0b7c-4366-893e-01887e7442f6, vol_name:cephfs) < ""
Dec 5 05:09:57 localhost nova_compute[280228]: 2025-12-05 10:09:57.358 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:57 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/64fc08e4-0b7c-4366-893e-01887e7442f6'' moved to trashcan
Dec 5 05:09:57 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:09:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:64fc08e4-0b7c-4366-893e-01887e7442f6, vol_name:cephfs) < ""
Dec 5 05:09:57 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:57.370+0000 7f997083d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:57.370+0000 7f997083d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:57.370+0000 7f997083d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:57.370+0000 7f997083d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:57.370+0000 7f997083d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:57.402+0000 7f997103e640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:57.402+0000 7f997103e640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:57.402+0000 7f997103e640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:57.402+0000 7f997103e640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:09:57.402+0000 7f997103e640 -1 client.0 error registering admin socket command: (17) File exists
Dec 5 05:09:57 localhost ovn_controller[153000]: 2025-12-05T10:09:57Z|00245|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:09:57 localhost nova_compute[280228]: 2025-12-05 10:09:57.458 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:57 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:57.552 2 INFO neutron.agent.securitygroups_rpc [None req-53ce10df-a7a5-49c1-9080-2e2f4825bd95 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']#033[00m
Dec 5 05:09:57 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:57.607 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:57Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9b5d6336-4406-42c4-89a7-61ec5d96ac79, ip_allocation=immediate, mac_address=fa:16:3e:43:86:5b, name=tempest-AllowedAddressPairIpV6TestJSON-1751132643, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:37Z, description=, dns_domain=, id=8a407ded-58d9-4221-8346-d297c9244eac, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-812711989, port_security_enabled=True, project_id=e4fe9762324442459e16cd8ca78e7d20, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38052, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1853, status=ACTIVE, subnets=['dc4232c0-b0b9-4249-af2b-cb60847d53c2'], tags=[], tenant_id=e4fe9762324442459e16cd8ca78e7d20, updated_at=2025-12-05T10:09:39Z, vlan_transparent=None, network_id=8a407ded-58d9-4221-8346-d297c9244eac, port_security_enabled=True, project_id=e4fe9762324442459e16cd8ca78e7d20, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['01225a20-d932-440b-b264-e7c785b61e12'], standard_attr_id=1927, status=DOWN, tags=[], tenant_id=e4fe9762324442459e16cd8ca78e7d20, updated_at=2025-12-05T10:09:57Z on network 8a407ded-58d9-4221-8346-d297c9244eac#033[00m
Dec 5 05:09:57 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 2 addresses
Dec 5 05:09:57 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:09:57 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:09:57 localhost podman[316314]: 2025-12-05 10:09:57.810482722 +0000 UTC m=+0.066382461 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:09:58 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:58.910 261902 INFO neutron.agent.dhcp.agent [None req-4cc8de3a-6b77-4f5c-977b-ea9a52cca343 - - - - - -] DHCP configuration for ports {'9b5d6336-4406-42c4-89a7-61ec5d96ac79'} is completed#033[00m
Dec 5 05:09:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 4.2 KiB/s wr, 1 op/s
Dec 5 05:09:59 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:59.281 2 INFO neutron.agent.securitygroups_rpc [None req-1271ce57-15dc-48ec-9676-311c69f911c2 b1808a8aad0b4360880c390bd8362a00 74a17deba84b470ebe240fab8c99b64c - - default default] Security group member updated ['f197cddb-ae8c-4586-ad83-8ebb4f64e04c']#033[00m
Dec 5 05:09:59 localhost nova_compute[280228]: 2025-12-05 10:09:59.402 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:09:59 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e50: np0005546419.zhsnqq(active, since 8m), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 5 05:09:59 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:59.651 2 INFO neutron.agent.securitygroups_rpc [None req-c144ec7e-ee0e-4dfd-b220-2497e51dbdcc 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']#033[00m
Dec 5 05:09:59 localhost neutron_sriov_agent[254996]: 2025-12-05 10:09:59.665 2 INFO neutron.agent.securitygroups_rpc [None req-aea036ab-5bba-4602-a174-e43a3c6f9829 71a1dc47c8964743be7069896a3eb55e f6db8cbac53645ef9430332056699027 - - default default] Security group member updated ['007023a4-4e47-4b10-af03-71f50f200b06']#033[00m
Dec 5 05:09:59 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:09:59.743 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:09:59Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=31f4f0cf-a9a8-47cf-a2f6-3804b5e37aa8, ip_allocation=immediate, mac_address=fa:16:3e:cd:f1:a5, name=tempest-AllowedAddressPairIpV6TestJSON-1542517629, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:37Z, description=, dns_domain=, id=8a407ded-58d9-4221-8346-d297c9244eac, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-812711989, port_security_enabled=True, project_id=e4fe9762324442459e16cd8ca78e7d20, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38052, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1853, status=ACTIVE, subnets=['dc4232c0-b0b9-4249-af2b-cb60847d53c2'], tags=[], tenant_id=e4fe9762324442459e16cd8ca78e7d20, updated_at=2025-12-05T10:09:39Z, vlan_transparent=None, network_id=8a407ded-58d9-4221-8346-d297c9244eac, port_security_enabled=True, project_id=e4fe9762324442459e16cd8ca78e7d20, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['01225a20-d932-440b-b264-e7c785b61e12'], standard_attr_id=1930, status=DOWN, tags=[], tenant_id=e4fe9762324442459e16cd8ca78e7d20, updated_at=2025-12-05T10:09:59Z on network 8a407ded-58d9-4221-8346-d297c9244eac#033[00m
Dec 5 05:09:59 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 3 addresses
Dec 5 05:09:59 localhost podman[316353]: 2025-12-05 10:09:59.960296408 +0000 UTC m=+0.082699121 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 5 05:09:59 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:09:59 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:09:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:09:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:10:00 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 5 05:10:00 localhost systemd[1]: tmp-crun.syipvR.mount: Deactivated successfully.
Dec 5 05:10:00 localhost podman[316368]: 2025-12-05 10:10:00.096023729 +0000 UTC m=+0.098264676 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Dec 5 05:10:00 localhost podman[316368]: 2025-12-05 10:10:00.11171839 +0000 UTC m=+0.113959297 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git)
Dec 5 05:10:00 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:10:00 localhost podman[316367]: 2025-12-05 10:10:00.064297009 +0000 UTC m=+0.073954862 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 5 05:10:00 localhost podman[316367]: 2025-12-05 10:10:00.194847043 +0000 UTC m=+0.204504976 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 5 05:10:00 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:10:00 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:00.280 261902 INFO neutron.agent.dhcp.agent [None req-6c576469-3123-4442-a5c2-49d1dfc5385f - - - - - -] DHCP configuration for ports {'31f4f0cf-a9a8-47cf-a2f6-3804b5e37aa8'} is completed#033[00m
Dec 5 05:10:00 localhost ceph-mon[292820]: overall HEALTH_OK
Dec 5 05:10:00 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1162ea5e-f099-4d47-a2b8-fe2def6b3849", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:10:00 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1162ea5e-f099-4d47-a2b8-fe2def6b3849, vol_name:cephfs) < ""
Dec 5 05:10:00 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1162ea5e-f099-4d47-a2b8-fe2def6b3849/.meta.tmp'
Dec 5 05:10:00 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1162ea5e-f099-4d47-a2b8-fe2def6b3849/.meta.tmp' to config b'/volumes/_nogroup/1162ea5e-f099-4d47-a2b8-fe2def6b3849/.meta'
Dec 5 05:10:00 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1162ea5e-f099-4d47-a2b8-fe2def6b3849, vol_name:cephfs) < ""
Dec 5 05:10:00 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1162ea5e-f099-4d47-a2b8-fe2def6b3849", "format": "json"}]: dispatch
Dec 5 05:10:00 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1162ea5e-f099-4d47-a2b8-fe2def6b3849, vol_name:cephfs) < ""
Dec 5 05:10:00 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1162ea5e-f099-4d47-a2b8-fe2def6b3849, vol_name:cephfs) < ""
Dec 5 05:10:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:10:00 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:10:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:10:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 10 KiB/s wr, 3 op/s
Dec 5 05:10:01 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:01.927 261902 INFO neutron.agent.linux.ip_lib [None req-ea9bad69-5cd2-4cb4-90ab-a28edc8657e2 - - - - - -] Device tap8e5c4e09-a8 cannot be used as it has no MAC address#033[00m
Dec 5 05:10:01 localhost nova_compute[280228]: 2025-12-05 10:10:01.957 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:01 localhost kernel: device tap8e5c4e09-a8 entered promiscuous mode
Dec 5 05:10:01 localhost NetworkManager[5960]: [1764929401.9666] manager: (tap8e5c4e09-a8): new Generic device (/org/freedesktop/NetworkManager/Devices/43)
Dec 5 05:10:01 localhost ovn_controller[153000]: 2025-12-05T10:10:01Z|00246|binding|INFO|Claiming lport 8e5c4e09-a830-46d8-881b-65b60252eb0c for this chassis.
Dec 5 05:10:01 localhost ovn_controller[153000]: 2025-12-05T10:10:01Z|00247|binding|INFO|8e5c4e09-a830-46d8-881b-65b60252eb0c: Claiming unknown
Dec 5 05:10:01 localhost nova_compute[280228]: 2025-12-05 10:10:01.966 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:01 localhost systemd-udevd[316420]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:10:02 localhost journal[228791]: ethtool ioctl error on tap8e5c4e09-a8: No such device
Dec 5 05:10:02 localhost journal[228791]: ethtool ioctl error on tap8e5c4e09-a8: No such device
Dec 5 05:10:02 localhost ovn_controller[153000]: 2025-12-05T10:10:02Z|00248|binding|INFO|Setting lport 8e5c4e09-a830-46d8-881b-65b60252eb0c ovn-installed in OVS
Dec 5 05:10:02 localhost nova_compute[280228]: 2025-12-05 10:10:02.008 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:02 localhost journal[228791]: ethtool ioctl error on tap8e5c4e09-a8: No such device
Dec 5 05:10:02 localhost journal[228791]: ethtool ioctl error on tap8e5c4e09-a8: No such device
Dec 5 05:10:02 localhost journal[228791]: ethtool ioctl error on tap8e5c4e09-a8: No such device
Dec 5 05:10:02 localhost journal[228791]: ethtool ioctl error on tap8e5c4e09-a8: No such device
Dec 5 05:10:02 localhost journal[228791]: ethtool ioctl error on tap8e5c4e09-a8: No such device
Dec 5 05:10:02 localhost journal[228791]: ethtool ioctl error on tap8e5c4e09-a8: No such device
Dec 5 05:10:02 localhost nova_compute[280228]: 2025-12-05 10:10:02.047 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:02 localhost nova_compute[280228]: 2025-12-05 10:10:02.079 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:02 localhost ovn_controller[153000]: 2025-12-05T10:10:02Z|00249|binding|INFO|Setting lport 8e5c4e09-a830-46d8-881b-65b60252eb0c up in Southbound
Dec 5 05:10:02 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:02.081 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-8c66c75b-13c5-43bd-8a09-f8aee55be1d6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c66c75b-13c5-43bd-8a09-f8aee55be1d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0c952fa-7956-4e3f-99fc-e79eedfd0afc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8e5c4e09-a830-46d8-881b-65b60252eb0c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:10:02 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:02.083 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 8e5c4e09-a830-46d8-881b-65b60252eb0c in datapath 8c66c75b-13c5-43bd-8a09-f8aee55be1d6 bound to our chassis#033[00m
Dec 5 05:10:02 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:02.085 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8c66c75b-13c5-43bd-8a09-f8aee55be1d6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 5 05:10:02 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:02.086 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[66f35d28-3b2a-4db7-88d3-2fb4dc363a05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:10:02 localhost nova_compute[280228]: 2025-12-05 10:10:02.361 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:02 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:02.432 2 INFO neutron.agent.securitygroups_rpc [None req-f945fec7-1421-4326-98f0-4d78b816214c 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']#033[00m
Dec 5 05:10:02 localhost podman[316484]: 2025-12-05 10:10:02.671892207 +0000 UTC m=+0.065994490 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 5 05:10:02 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 2 addresses
Dec 5 05:10:02 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:10:02 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:10:03 localhost podman[316568]:
Dec 5 05:10:03 localhost podman[316568]: 2025-12-05 10:10:03.071372195 +0000 UTC m=+0.094301444 container create 3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c66c75b-13c5-43bd-8a09-f8aee55be1d6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 5 05:10:03 localhost systemd[1]: Started libpod-conmon-3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e.scope.
Dec 5 05:10:03 localhost podman[316568]: 2025-12-05 10:10:03.021430139 +0000 UTC m=+0.044359468 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:10:03 localhost systemd[1]: Started libcrun container.
Dec 5 05:10:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ed70f24fb630c1a63d7962ad41624a5a53bd2a25bedea42fec36291b7944c0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:10:03 localhost podman[316568]: 2025-12-05 10:10:03.149171315 +0000 UTC m=+0.172100584 container init 3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c66c75b-13c5-43bd-8a09-f8aee55be1d6, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:10:03 localhost podman[316568]: 2025-12-05 10:10:03.159373568 +0000 UTC m=+0.182302837 container start 3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c66c75b-13c5-43bd-8a09-f8aee55be1d6, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:10:03 localhost dnsmasq[316600]: started, version 2.85 cachesize 150
Dec 5 05:10:03 localhost dnsmasq[316600]: DNS service limited to local subnets
Dec 5 05:10:03 localhost dnsmasq[316600]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:10:03 localhost dnsmasq[316600]: warning: no upstream servers configured
Dec 5 05:10:03 localhost dnsmasq-dhcp[316600]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:10:03 localhost dnsmasq[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/addn_hosts - 0 addresses
Dec 5 05:10:03 localhost dnsmasq-dhcp[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/host
Dec 5 05:10:03 localhost dnsmasq-dhcp[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/opts
Dec 5 05:10:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 8.4 KiB/s wr, 3 op/s
Dec 5 05:10:03 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:03.389 261902 INFO neutron.agent.dhcp.agent [None req-4da88303-eae2-493a-ab8f-276ea3fa7ccb - - - - - -] DHCP configuration for ports {'f091250d-5892-44cf-985e-a6d8236a0f43'} is completed#033[00m
Dec 5 05:10:03 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:03.407 2 INFO neutron.agent.securitygroups_rpc [None req-cdcb0b2b-f06b-4051-a380-4fb87451cfb7 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']#033[00m
Dec 5 05:10:03 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 1 addresses
Dec 5 05:10:03 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:10:03 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:10:03 localhost podman[316640]: 2025-12-05 10:10:03.606142572 +0000 UTC m=+0.050224287 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:10:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 05:10:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 05:10:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 5 05:10:03 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:10:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:10:03 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:10:03 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 80412ad6-a57b-49f1-9611-b113402e88b5 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:10:03 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 80412ad6-a57b-49f1-9611-b113402e88b5 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:10:03 localhost ceph-mgr[286454]: [progress INFO root] Completed event 80412ad6-a57b-49f1-9611-b113402e88b5 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 5 05:10:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 05:10:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 05:10:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:03.916 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:10:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:03.917 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:10:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:03.918 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:10:04 localhost systemd[1]: tmp-crun.wUKgNM.mount: Deactivated successfully.
Dec 5 05:10:04 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "1162ea5e-f099-4d47-a2b8-fe2def6b3849", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 5 05:10:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:1162ea5e-f099-4d47-a2b8-fe2def6b3849, vol_name:cephfs) < ""
Dec 5 05:10:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:1162ea5e-f099-4d47-a2b8-fe2def6b3849, vol_name:cephfs) < ""
Dec 5 05:10:04 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:04.377 2 INFO neutron.agent.securitygroups_rpc [None req-79e664b4-556e-437e-b1e4-1ebb8a566d0d 604f7df4a8ed42ed9f33785bc35c336b e4fe9762324442459e16cd8ca78e7d20 - - default default] Security group member updated ['01225a20-d932-440b-b264-e7c785b61e12']#033[00m
Dec 5 05:10:04 localhost nova_compute[280228]: 2025-12-05 10:10:04.435 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:04 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:10:04 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:10:04 localhost podman[316695]: 2025-12-05 10:10:04.61500529 +0000 UTC m=+0.057420296 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 5 05:10:04 localhost dnsmasq[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/addn_hosts - 0 addresses
Dec 5 05:10:04 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/host
Dec 5 05:10:04 localhost dnsmasq-dhcp[315541]: read /var/lib/neutron/dhcp/8a407ded-58d9-4221-8346-d297c9244eac/opts
Dec 5 05:10:04 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:04.992 2 INFO neutron.agent.securitygroups_rpc [None req-c4286b11-826c-4b9b-b924-7f63ea6cc9ba 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']#033[00m
Dec 5 05:10:04 localhost ovn_controller[153000]: 2025-12-05T10:10:04Z|00250|binding|INFO|Removing iface tape95ff63a-aa ovn-installed in OVS
Dec 5 05:10:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:04.997 158820 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 03f0e0f9-245b-4bfc-bd0b-c0fc3ee75451 with type ""#033[00m
Dec 5 05:10:04 localhost ovn_controller[153000]: 2025-12-05T10:10:04Z|00251|binding|INFO|Removing lport e95ff63a-aad0-4fe5-b1cf-d625ac34ec0a ovn-installed in OVS
Dec 5 05:10:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:04.999 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-8a407ded-58d9-4221-8346-d297c9244eac', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a407ded-58d9-4221-8346-d297c9244eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e4fe9762324442459e16cd8ca78e7d20', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cdd725c-368f-47c1-9779-ad865a815712, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e95ff63a-aad0-4fe5-b1cf-d625ac34ec0a) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:10:04 localhost nova_compute[280228]: 2025-12-05 10:10:04.999 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:05.002 158820 INFO neutron.agent.ovn.metadata.agent [-] Port e95ff63a-aad0-4fe5-b1cf-d625ac34ec0a in datapath 8a407ded-58d9-4221-8346-d297c9244eac unbound from our chassis#033[00m
Dec 5 05:10:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:05.004 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8a407ded-58d9-4221-8346-d297c9244eac or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 5 05:10:05 localhost nova_compute[280228]: 2025-12-05 10:10:05.005 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:05.005 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[18888f65-74f3-46de-8922-fbcd5fe1c979]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:10:05 localhost dnsmasq[315541]: exiting on receipt of SIGTERM
Dec 5 05:10:05 localhost podman[316730]: 2025-12-05 10:10:05.026788666 +0000 UTC m=+0.062162953 container kill 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 5 05:10:05 localhost systemd[1]: libpod-0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843.scope: Deactivated successfully.
Dec 5 05:10:05 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:05.091 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:10:04Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=69254131-7c99-4dbf-94b7-3b4173c7a2f0, ip_allocation=immediate, mac_address=fa:16:3e:b3:52:0e, name=tempest-PortsTestJSON-402450372, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:58Z, description=, dns_domain=, id=8c66c75b-13c5-43bd-8a09-f8aee55be1d6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-579240134, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=31355, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1928, status=ACTIVE, subnets=['e7784290-9e4f-4e5f-9c06-1e87f939cabb'], tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:10:00Z, vlan_transparent=None, network_id=8c66c75b-13c5-43bd-8a09-f8aee55be1d6, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['6de03d90-430f-407b-8d0d-b1ca66c7d4e8'], standard_attr_id=1950, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:10:04Z on network 8c66c75b-13c5-43bd-8a09-f8aee55be1d6#033[00m
Dec 5 05:10:05 localhost podman[316742]: 2025-12-05 10:10:05.09720102 +0000 UTC m=+0.058623474 container died 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 5 05:10:05 localhost systemd[1]: tmp-crun.Fs94wJ.mount: Deactivated successfully.
Dec 5 05:10:05 localhost systemd[1]: tmp-crun.MUJBJC.mount: Deactivated successfully.
Dec 5 05:10:05 localhost podman[316742]: 2025-12-05 10:10:05.136306646 +0000 UTC m=+0.097729020 container cleanup 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 5 05:10:05 localhost systemd[1]: libpod-conmon-0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843.scope: Deactivated successfully.
Dec 5 05:10:05 localhost podman[316744]: 2025-12-05 10:10:05.178009651 +0000 UTC m=+0.131011687 container remove 0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a407ded-58d9-4221-8346-d297c9244eac, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 5 05:10:05 localhost nova_compute[280228]: 2025-12-05 10:10:05.186 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:05 localhost kernel: device tape95ff63a-aa left promiscuous mode
Dec 5 05:10:05 localhost nova_compute[280228]: 2025-12-05 10:10:05.198 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:05 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:05.221 261902 INFO neutron.agent.dhcp.agent [None req-f0b16c22-6261-4176-b95a-cdd2a0b5bd10 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:10:05 localhost podman[316789]: 2025-12-05 10:10:05.255334396 +0000 UTC m=+0.043675537 container kill 3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c66c75b-13c5-43bd-8a09-f8aee55be1d6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 5 05:10:05 localhost dnsmasq[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/addn_hosts - 1 addresses
Dec 5 05:10:05 localhost dnsmasq-dhcp[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/host
Dec 5 05:10:05 localhost dnsmasq-dhcp[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/opts
Dec 5 05:10:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 8.4 KiB/s wr, 3 op/s
Dec 5 05:10:05 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events
Dec 5 05:10:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 05:10:05 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:10:05 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:05.454 261902 INFO neutron.agent.dhcp.agent [None req-c8ce8ba0-11fb-421e-bb68-42e441155729 - - - - - -] DHCP configuration for ports {'69254131-7c99-4dbf-94b7-3b4173c7a2f0'} is completed#033[00m
Dec 5 05:10:05 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:10:05 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:05.486 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:10:05 localhost ovn_controller[153000]: 2025-12-05T10:10:05Z|00252|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:10:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:10:05 localhost nova_compute[280228]: 2025-12-05 10:10:05.709 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:06 localhost systemd[1]: var-lib-containers-storage-overlay-d8d5276eb3381dad37e42f81db0a34eb6d80de189ac087f3677b12c2058dce2f-merged.mount: Deactivated successfully.
Dec 5 05:10:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e67424846189a9ae919c4a3338a63eca93a210e09a7f4c77185e2918c622843-userdata-shm.mount: Deactivated successfully.
Dec 5 05:10:06 localhost systemd[1]: run-netns-qdhcp\x2d8a407ded\x2d58d9\x2d4221\x2d8346\x2dd297c9244eac.mount: Deactivated successfully.
Dec 5 05:10:06 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:06.317 2 INFO neutron.agent.securitygroups_rpc [None req-aed78872-6dc7-4af4-91c3-35a3e4865f15 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:10:06 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:06.365 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:10:06Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=69355d33-d1be-4d8b-9021-f780fd9a4fe6, ip_allocation=immediate, mac_address=fa:16:3e:da:4c:0e, name=tempest-PortsTestJSON-286504924, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:09:58Z, description=, dns_domain=, id=8c66c75b-13c5-43bd-8a09-f8aee55be1d6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-579240134, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=31355, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1928, status=ACTIVE, subnets=['e7784290-9e4f-4e5f-9c06-1e87f939cabb'], tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:10:00Z, vlan_transparent=None, network_id=8c66c75b-13c5-43bd-8a09-f8aee55be1d6, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['6de03d90-430f-407b-8d0d-b1ca66c7d4e8'], standard_attr_id=1953, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:10:06Z on network 8c66c75b-13c5-43bd-8a09-f8aee55be1d6
Dec 5 05:10:06 localhost dnsmasq[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/addn_hosts - 2 addresses
Dec 5 05:10:06 localhost dnsmasq-dhcp[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/host
Dec 5 05:10:06 localhost dnsmasq-dhcp[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/opts
Dec 5 05:10:06 localhost podman[316825]: 2025-12-05 10:10:06.690789312 +0000 UTC m=+0.061066598 container kill 3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c66c75b-13c5-43bd-8a09-f8aee55be1d6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 05:10:06 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:06.969 261902 INFO neutron.agent.dhcp.agent [None req-81ccdbc8-5266-43dd-b5ce-e7606a6a9737 - - - - - -] DHCP configuration for ports {'69355d33-d1be-4d8b-9021-f780fd9a4fe6'} is completed
Dec 5 05:10:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 9.4 KiB/s wr, 3 op/s
Dec 5 05:10:07 localhost nova_compute[280228]: 2025-12-05 10:10:07.400 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:07 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1162ea5e-f099-4d47-a2b8-fe2def6b3849", "format": "json"}]: dispatch
Dec 5 05:10:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1162ea5e-f099-4d47-a2b8-fe2def6b3849, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:10:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1162ea5e-f099-4d47-a2b8-fe2def6b3849, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:10:07 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1162ea5e-f099-4d47-a2b8-fe2def6b3849' of type subvolume
Dec 5 05:10:07 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:10:07.775+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1162ea5e-f099-4d47-a2b8-fe2def6b3849' of type subvolume
Dec 5 05:10:07 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1162ea5e-f099-4d47-a2b8-fe2def6b3849", "force": true, "format": "json"}]: dispatch
Dec 5 05:10:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1162ea5e-f099-4d47-a2b8-fe2def6b3849, vol_name:cephfs) < ""
Dec 5 05:10:07 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1162ea5e-f099-4d47-a2b8-fe2def6b3849'' moved to trashcan
Dec 5 05:10:07 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:10:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1162ea5e-f099-4d47-a2b8-fe2def6b3849, vol_name:cephfs) < ""
Dec 5 05:10:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:07.861 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:10:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:07.863 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated
Dec 5 05:10:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:07.866 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:10:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:07.867 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[d279903f-3af3-4310-b687-060ebf9a6a0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:10:08 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:08.569 2 INFO neutron.agent.securitygroups_rpc [None req-258a12e6-efff-47e2-a44d-bc0a6db9eb2b 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:10:08 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:08.810 2 INFO neutron.agent.securitygroups_rpc [None req-cb726af7-74e7-48fe-b89d-484b3da12128 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 5 05:10:08 localhost dnsmasq[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/addn_hosts - 1 addresses
Dec 5 05:10:08 localhost dnsmasq-dhcp[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/host
Dec 5 05:10:08 localhost dnsmasq-dhcp[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/opts
Dec 5 05:10:08 localhost podman[316864]: 2025-12-05 10:10:08.840313931 +0000 UTC m=+0.059146630 container kill 3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c66c75b-13c5-43bd-8a09-f8aee55be1d6, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 5 05:10:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v305: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 7.2 KiB/s wr, 2 op/s
Dec 5 05:10:09 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:09.285 2 INFO neutron.agent.securitygroups_rpc [None req-07bf7634-d21a-495b-aa67-caa041a79950 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:10:09 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:09.339 2 INFO neutron.agent.securitygroups_rpc [None req-d5cbd288-2520-4e7e-92e0-afbba81cd445 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 5 05:10:09 localhost nova_compute[280228]: 2025-12-05 10:10:09.470 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:09 localhost dnsmasq[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/addn_hosts - 0 addresses
Dec 5 05:10:09 localhost dnsmasq-dhcp[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/host
Dec 5 05:10:09 localhost podman[316902]: 2025-12-05 10:10:09.572990731 +0000 UTC m=+0.055591792 container kill 3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c66c75b-13c5-43bd-8a09-f8aee55be1d6, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:10:09 localhost dnsmasq-dhcp[316600]: read /var/lib/neutron/dhcp/8c66c75b-13c5-43bd-8a09-f8aee55be1d6/opts
Dec 5 05:10:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:10:10 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d4ffb057-c627-44bd-a232-379e570c56f8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:10:10 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d4ffb057-c627-44bd-a232-379e570c56f8, vol_name:cephfs) < ""
Dec 5 05:10:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d4ffb057-c627-44bd-a232-379e570c56f8/.meta.tmp'
Dec 5 05:10:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d4ffb057-c627-44bd-a232-379e570c56f8/.meta.tmp' to config b'/volumes/_nogroup/d4ffb057-c627-44bd-a232-379e570c56f8/.meta'
Dec 5 05:10:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d4ffb057-c627-44bd-a232-379e570c56f8, vol_name:cephfs) < ""
Dec 5 05:10:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d4ffb057-c627-44bd-a232-379e570c56f8", "format": "json"}]: dispatch
Dec 5 05:10:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d4ffb057-c627-44bd-a232-379e570c56f8, vol_name:cephfs) < ""
Dec 5 05:10:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d4ffb057-c627-44bd-a232-379e570c56f8, vol_name:cephfs) < ""
Dec 5 05:10:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:10:11 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:10:11 localhost systemd[1]: tmp-crun.YtJVaT.mount: Deactivated successfully.
Dec 5 05:10:11 localhost dnsmasq[316600]: exiting on receipt of SIGTERM
Dec 5 05:10:11 localhost podman[316938]: 2025-12-05 10:10:11.087863127 +0000 UTC m=+0.060278735 container kill 3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c66c75b-13c5-43bd-8a09-f8aee55be1d6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 5 05:10:11 localhost systemd[1]: libpod-3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e.scope: Deactivated successfully.
Dec 5 05:10:11 localhost podman[316954]: 2025-12-05 10:10:11.153348849 +0000 UTC m=+0.048328449 container died 3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c66c75b-13c5-43bd-8a09-f8aee55be1d6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 5 05:10:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e-userdata-shm.mount: Deactivated successfully.
Dec 5 05:10:11 localhost podman[316954]: 2025-12-05 10:10:11.196064757 +0000 UTC m=+0.091044357 container remove 3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8c66c75b-13c5-43bd-8a09-f8aee55be1d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:10:11 localhost ovn_controller[153000]: 2025-12-05T10:10:11Z|00253|binding|INFO|Releasing lport 8e5c4e09-a830-46d8-881b-65b60252eb0c from this chassis (sb_readonly=0)
Dec 5 05:10:11 localhost kernel: device tap8e5c4e09-a8 left promiscuous mode
Dec 5 05:10:11 localhost nova_compute[280228]: 2025-12-05 10:10:11.208 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:11 localhost ovn_controller[153000]: 2025-12-05T10:10:11Z|00254|binding|INFO|Setting lport 8e5c4e09-a830-46d8-881b-65b60252eb0c down in Southbound
Dec 5 05:10:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:11.218 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-8c66c75b-13c5-43bd-8a09-f8aee55be1d6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c66c75b-13c5-43bd-8a09-f8aee55be1d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0c952fa-7956-4e3f-99fc-e79eedfd0afc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8e5c4e09-a830-46d8-881b-65b60252eb0c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:10:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:11.220 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 8e5c4e09-a830-46d8-881b-65b60252eb0c in datapath 8c66c75b-13c5-43bd-8a09-f8aee55be1d6 unbound from our chassis
Dec 5 05:10:11 localhost systemd[1]: libpod-conmon-3b75c38ccdae59dced702dc59b950d3ae6e62842ea2ce77470bf3aa0f66d899e.scope: Deactivated successfully.
Dec 5 05:10:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:11.223 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c66c75b-13c5-43bd-8a09-f8aee55be1d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:10:11 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:11.224 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[837369a3-84c3-44e9-ae4e-9bc2343f37ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:10:11 localhost nova_compute[280228]: 2025-12-05 10:10:11.228 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 13 KiB/s wr, 5 op/s
Dec 5 05:10:11 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:11.491 261902 INFO neutron.agent.dhcp.agent [None req-88252701-f09d-4368-90e1-8ba54d566ade - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:10:11 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:11.492 261902 INFO neutron.agent.dhcp.agent [None req-88252701-f09d-4368-90e1-8ba54d566ade - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:10:11 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:11.754 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:10:12 localhost systemd[1]: var-lib-containers-storage-overlay-b3ed70f24fb630c1a63d7962ad41624a5a53bd2a25bedea42fec36291b7944c0-merged.mount: Deactivated successfully.
Dec 5 05:10:12 localhost systemd[1]: run-netns-qdhcp\x2d8c66c75b\x2d13c5\x2d43bd\x2d8a09\x2df8aee55be1d6.mount: Deactivated successfully.
Dec 5 05:10:12 localhost nova_compute[280228]: 2025-12-05 10:10:12.428 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:12 localhost ovn_controller[153000]: 2025-12-05T10:10:12Z|00255|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:10:12 localhost nova_compute[280228]: 2025-12-05 10:10:12.570 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e141 do_prune osdmap full prune enabled
Dec 5 05:10:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e142 e142: 6 total, 6 up, 6 in
Dec 5 05:10:12 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e142: 6 total, 6 up, 6 in
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.952 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.952 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.972 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 16920000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '28f790cc-293a-4031-bb6d-57947bf6885d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 16920000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:10:12.953148', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '96a4d356-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.146276171, 'message_signature': '260565e52d0f54b194d056c31e6279e8e4bc5d722dabc1aa64314062de12dfe9'}]}, 'timestamp': '2025-12-05 10:10:12.973382', '_unique_id': '113bbad8497041968e3190be4a4b4b1a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.974 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:10:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.976 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:12.999 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.000 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e6436dba-5dfa-48da-8a32-79af89ea8cca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:10:12.976926', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '96a8f896-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.151259402, 'message_signature': '536174fd1ca5e3d4c474a00e23d585fc90181e0432018baf9d9a123a1163f9b1'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:10:12.976926', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '96a91394-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.151259402, 'message_signature': '3a91cb3eb1aacc357e3e9dc38e9f0396370462bdd1925a7af4bc732543cf6134'}]}, 'timestamp': '2025-12-05 10:10:13.001218', '_unique_id': '3e24b509f8c64373a5758555ca34544c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.002 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.004 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.008 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '53a2f6cb-e2dc-439c-843b-d2e60afbebe5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:10:13.004623', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '96aa470a-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.17895434, 'message_signature': '62ebbd724cfedfbc95f8cb00b2afc8995cfad3326bb7ebdf33d59f94d7232c40'}]}, 'timestamp': '2025-12-05 10:10:13.009125', '_unique_id': '0611cb55ef6a47b9988468c4915800df'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost
ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.010 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.012 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.012 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.012 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.013 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6c4c62e7-7bbd-4d12-ac91-db824c93a2f8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:10:13.012653', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '96aaec28-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.151259402, 'message_signature': 'aac41247479301a15279f545c3a45dc85f9c290f0fde3ec778c5fa57eecc4274'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:10:13.012653', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '96ab06ea-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.151259402, 'message_signature': '842a040c82c56cf49ea3b7e43a92685e90a4014bc65dcc7da239a5165bf3c2a8'}]}, 'timestamp': '2025-12-05 10:10:13.013997', '_unique_id': '1a9795da13db42a4bf6abe358aa693f9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.015 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.017 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.017 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b3cfae20-7cbd-4b5e-893d-626bb1b6a47f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:10:13.017285', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '96aba276-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.17895434, 'message_signature': '82e21e70081e9241404a5372e88e42b3707e28f42e4cbcc9ccaafdb102dabe69'}]}, 'timestamp': '2025-12-05 10:10:13.018025', '_unique_id': '39165259f6e14ae99ce765f299ae7c3c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:10:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.019 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.021 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.021 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.021 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f2a80916-000f-4705-8e1d-ee9fc925f8d9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:10:13.021231', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '96ac3d26-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.151259402, 'message_signature': '435fcb4c153ff0c362d41268bf59b59c1e7087f253f29017eef8af29c664382b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:10:13.021231', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '96ac56e4-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.151259402, 'message_signature': 'e2a6699ad4014d96f42fa1ef32199313d1422573d974f1b5c5b46a75b2b7f9b5'}]}, 'timestamp': '2025-12-05 10:10:13.022600', '_unique_id': 'bf83c197783b4ca0add6a29c477373cc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.023 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.025 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.025 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ab63a84f-d404-4bb8-97b5-01819ef7717c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:10:13.025868', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '96acf0a4-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.17895434, 'message_signature': '9d97c3c53f4329f464466bbd06393482e02f4d0b78178972862a1060b3690d70'}]}, 'timestamp': '2025-12-05 10:10:13.026604', '_unique_id': 'ea5a865db9714258af37c54b6509c519'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.027 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.029 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6d19fd79-49e6-4852-9e5c-2da0bb06103f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:10:13.029778', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '96ad8960-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.17895434, 'message_signature': '0eaf4a30c3a1899983f703c4df8456c5b78400586c07f5c559d9d502aa1088df'}]}, 'timestamp': '2025-12-05 10:10:13.030510', '_unique_id': 'edf6f937910e4091977def8056feb3e6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.031 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.033 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.033 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.033 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c1d9d988-cc02-4061-92c9-d4cc7480f98a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:10:13.033887', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '96ae29ba-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.17895434, 'message_signature': '664bd5df683ff6bf1ab4310c33fac4ea852afb57832a985e5cf3df10367afdea'}]}, 'timestamp': '2025-12-05 10:10:13.034619', '_unique_id': '3e9db08113394ce8b7d097f1d64970af'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.036 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.037 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.037 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.037 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
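[Editor's note: illustrative only, not part of the captured log. The innermost frame of every traceback above is a plain socket connect inside amqp/transport.py, so broker reachability can be checked from the affected node with nothing but the standard library. The host and port are assumptions: 5672 is the AMQP default and the log does not name the broker endpoint.]

    import errno
    import socket

    try:
        # Mirrors self.sock.connect(sa) in amqp/transport.py line 184 above.
        socket.create_connection(('localhost', 5672), timeout=5).close()
        print('broker port reachable')
    except OSError as exc:
        # errno 111 (ECONNREFUSED) matches the [Errno 111] in the log.
        print(f'connect failed (errno={exc.errno}, ECONNREFUSED={errno.ECONNREFUSED}): {exc}')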
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f0cd4313-feb5-4b05-ba94-630a49482a3b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:10:13.037388', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '96aeae44-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.151259402, 'message_signature': '344d54668e92e0839e75d3589b698cdf9882548b1ce9d03f807e482b34bb029b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:10:13.037388', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '96aebf24-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.151259402, 'message_signature': 'e989a20fa821efac3bdd1a57ee3219b52e7c3c437c9a4fad39c8979a5af6643c'}]}, 'timestamp': '2025-12-05 10:10:13.038294', '_unique_id': '75dd851351db446a8a0577541ee64022'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.039 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.040 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.040 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.041 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
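[Editor's note: an illustrative helper, not part of the captured log. Every "Could not send notification" record above carries the dropped sample inline in its Payload repr, so the lost meters can be tallied straight from an exported journal. The file path is an assumption; the regex relies on the payload layout shown in the records above.]

    import re
    from collections import Counter

    dropped = Counter()
    # Assumed export of this journal; substitute the real path.
    with open('/var/log/messages') as fh:
        for line in fh:
            if 'Could not send notification' in line:
                # Each failed record names its sample(s) via 'counter_name'.
                dropped.update(re.findall(r"'counter_name': '([^']+)'", line))
    for meter, count in dropped.most_common():
        print(f'{meter}: {count} dropped sample(s)')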
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '491fec20-9733-48b0-88f3-b75a51f58e8d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:10:13.040592', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '96af2b30-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.151259402, 'message_signature': '97928480654a68345f466e93ed10c34a91bf685d19e7a2f030e5171ebb69efd7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:10:13.040592', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '96af3cba-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.151259402, 'message_signature': '56b5484dc58cd753a6e6480578ce95650bf9cbc8207521d8c97a4216c4c8207b'}]}, 'timestamp': '2025-12-05 10:10:13.041484', '_unique_id': '8e8278dc3b4a4706aebbbcd8399ce1dc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.042 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.043 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.043 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
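[Editor's note: a sketch of the notification envelope shape as it appears in the dropped payloads above and below. Field names are taken from the log; the types are inferred and the class itself is hypothetical, for reading convenience only, not an oslo.messaging API.]

    from dataclasses import dataclass, field
    from typing import Any

    @dataclass
    class PollingNotification:
        message_id: str                  # e.g. 'dfad02e0-7476-4e3c-8073-7733311af23a'
        publisher_id: str                # 'ceilometer.polling'
        event_type: str                  # 'telemetry.polling'
        priority: str                    # 'SAMPLE'
        # payload['samples'] is a list of sample dicts carrying counter_name,
        # counter_volume, resource_id, resource_metadata, message_signature, ...
        payload: dict[str, Any] = field(default_factory=dict)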
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dfad02e0-7476-4e3c-8073-7733311af23a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:10:13.043928', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '96afadc6-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.146276171, 'message_signature': '5a21d0a3e3e43d891e227402d131f245d59bdbec14682ae1073717992c7ec3c7'}]}, 'timestamp': '2025-12-05 10:10:13.044326', '_unique_id': '2092910ba7c64775b8f1ccfc45ae9053'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.044 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.045 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.045 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.046 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b14f024c-287b-40ac-bf61-4ae839bc6157', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:10:13.045937', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '96aff998-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.17895434, 'message_signature': '304ce320fc5b781d75f0ea72a2182c296071d22e8b0f08bb97b6848fac853ee8'}]}, 'timestamp': '2025-12-05 10:10:13.046274', '_unique_id': '2fee644678334e158f41fac568a46ad4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:10:13.047 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.056 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.056 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
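Every failed send in this excerpt ends in the same chain: amqp's transport raises ConnectionRefusedError ([Errno 111]) from self.sock.connect(sa), and kombu's _reraise_as_library_errors context manager rewraps it as kombu.exceptions.OperationalError, which is what oslo.messaging logs. A minimal sketch of that probe, using the same kombu calls named in the traceback; the broker URL is an assumption, since the log never prints the configured transport endpoint:

```python
# Sketch (assumed broker URL): reproduce the ensure_connection() check that
# the traceback above shows oslo.messaging's rabbit driver performing.
from kombu import Connection
from kombu.exceptions import OperationalError

BROKER_URL = "amqp://guest:guest@localhost:5672//"  # hypothetical endpoint

conn = Connection(BROKER_URL, connect_timeout=2)
try:
    # With nothing listening on the port, the socket-level
    # ConnectionRefusedError is rewrapped by kombu as OperationalError --
    # the exact exception type seen in every ERROR record here.
    conn.ensure_connection(max_retries=1)
    print("broker reachable")
except OperationalError as exc:
    print(f"broker unreachable: {exc}")  # [Errno 111] Connection refused
finally:
    conn.release()
```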
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.057 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7cc3d4d8-ca89-41fa-988a-6259c1eeecb2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:10:13.047659', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '96b197da-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.221961926, 'message_signature': '2e3d286171803ec7524453f28af0530a7fa2407b943ebc8cef3cbaa4ea57a573'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:10:13.047659', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '96b1a2ca-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.221961926, 'message_signature': '948140626e6ae47c05958eab31e10d4ee5b0dee545b48b12dd2a6e838c8aceff'}]}, 'timestamp': '2025-12-05 10:10:13.057190', '_unique_id': '0b5f09c4f1d6438084f14ff6d4b71394'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.058 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.058 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.060 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '51902895-d838-40db-844d-a2b8c1ad519e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:10:13.058883', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '96b1f4f0-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.17895434, 'message_signature': '42b3597f2229bf7a1e40c17fce8065a263c8028925796c74bd15452ed8ef595a'}]}, 'timestamp': '2025-12-05 10:10:13.059338', '_unique_id': '6814fe46fd9648f1b8509e7d469d875f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
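Each "Could not send notification" record embeds the dropped sample as a Python-literal dict, so the measurements are still recoverable from the journal even though they never reached the notifications queue. A hypothetical recovery helper (not part of ceilometer), relying only on the "Payload={...}: kombu.exceptions.OperationalError" framing visible in these records; the payload values are plain literals (str/int/float/None/dict/list), so ast.literal_eval can parse them safely:

```python
# Hypothetical helper: pull the lost samples back out of a journal line.
import ast

def parse_dropped_payload(log_line: str) -> list[dict]:
    """Extract the samples list from a 'Could not send notification' line."""
    start = log_line.index("Payload=") + len("Payload=")
    end = log_line.rindex("}: kombu.exceptions.OperationalError") + 1
    payload = ast.literal_eval(log_line[start:end])
    return payload["payload"]["samples"]

# Against the memory.usage record at the top of this excerpt:
# for s in parse_dropped_payload(line):
#     print(s["counter_name"], s["counter_volume"], s["counter_unit"])
# -> memory.usage 51.7421875 MB
```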
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.061 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.061 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.061 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.063 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4680c5da-67b9-49f4-97ed-1a92fddc24d2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:10:13.061309', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '96b25486-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.221961926, 'message_signature': '116ad18e9444940aab1adf4c72721aead98553964cfe7b3ffb6b69646abd17ad'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:10:13.061309', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '96b26458-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.221961926, 'message_signature': '9ee2e98f28ec8f10f90549193a86444eec721082d73b9d099f1233fd45b48a06'}]}, 'timestamp': '2025-12-05 10:10:13.062146', '_unique_id': '7814b22c594e4c8bb270fbdccfb59534'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
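Errno 111 means the TCP handshake itself was rejected: nothing is listening on the broker port, or an intervening firewall answered with RST. That is consistent with every retry in this excerpt failing within milliseconds rather than timing out. A quick reachability probe; host and port are assumptions (5672 is only RabbitMQ's conventional listener):

```python
# Sketch (assumed host/port): distinguish "refused" from other failures.
import errno
import socket

def probe(host: str = "localhost", port: int = 5672) -> None:
    try:
        with socket.create_connection((host, port), timeout=2):
            print(f"{host}:{port} accepting connections")
    except OSError as exc:
        if exc.errno == errno.ECONNREFUSED:  # the [Errno 111] seen above
            print(f"{host}:{port} refused -- broker likely down")
        else:
            print(f"{host}:{port} failed: {exc}")

probe()
```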
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.064 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.064 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '497aa860-7b04-4d4c-90cb-e1bf9bd4abe4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:10:13.064305', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '96b2c8e4-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.17895434, 'message_signature': '8442c508521f14e0b6fae35fbc92347c7f28a332045b705b75ce02ae0f57a52e'}]}, 'timestamp': '2025-12-05 10:10:13.064737', '_unique_id': '5adeb3a08f764c9a880a7e494cbd4bcd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.065 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.066 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.066 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.067 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1573a505-40f0-4a0a-ae01-890b975d0780', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:10:13.066672', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '96b324c4-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.221961926, 'message_signature': '6e5e08f216441cbb1c29039e91c0d604d04896ae7eea3c26f9415eab9aee89e0'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:10:13.066672', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '96b33496-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.221961926, 'message_signature': '1d14653d675df042118112fbe30346d42efe19917a9baa55f4dc6067a30c1efd'}]}, 'timestamp': '2025-12-05 10:10:13.067473', '_unique_id': 'b4001b1da69644b58dd64c2f28d0d930'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.068 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.069 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.069 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7322412f-e9e1-4ee5-a3de-118936879c28', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:10:13.069451', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '96b39198-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.17895434, 'message_signature': 'a2ca69b01595f0ce0fd8eb187b947f2c2d6246b1b6b7636013fef8ea12dd5d92'}]}, 'timestamp': '2025-12-05 10:10:13.069874', '_unique_id': '14492ebe18c44851b40e293d31b0402b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:10:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.070 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.071 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.071 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.071 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.072 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7c7ee2c0-82ef-4109-b9c9-41fc0d804726', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:10:13.071931', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '96b3f214-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.151259402, 'message_signature': '8fde599dcd2541a7ca7e5acd607adadd595d7161f016a4faeae39cf4320be7d5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:10:13.071931', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '96b4022c-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.151259402, 'message_signature': '4167066ed5376da7f45246ab85efab62c449d543722a25574652538f7ec53969'}]}, 'timestamp': '2025-12-05 10:10:13.072897', '_unique_id': 'e3dc020cb58f4e1bb40dd6799714e83e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.073 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.074 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.074 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '967b70f9-3c09-44fb-a3bc-3e3445d99229', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:10:13.074862', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '96b46500-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12528.17895434, 'message_signature': 'ff8de09cd7e72b44ee25800a790241244d027c0490de63b06f9d857f57313b26'}]}, 'timestamp': '2025-12-05 10:10:13.075307', '_unique_id': '1f367fd7414b4d899442290ac9c40162'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:10:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:10:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:10:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:10:13.076 12 ERROR oslo_messaging.notify.messaging Dec 5 05:10:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 8.0 KiB/s wr, 4 op/s Dec 5 05:10:13 localhost nova_compute[280228]: 2025-12-05 10:10:13.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:10:13 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:13.549 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 2001:db8::f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': 
'3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:10:13 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:13.550 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated#033[00m Dec 5 05:10:13 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:13.553 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:10:13 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:13.554 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[0c07fbd1-edb2-4aea-9364-9b7ab3991099]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:10:13 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:13.830 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:10:14 localhost nova_compute[280228]: 2025-12-05 10:10:14.106 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:10:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:10:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:10:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
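[Editor's note] The ceilometer traceback above shows a plain socket failure (ConnectionRefusedError [Errno 111] from amqp's transport.connect()) being re-raised by kombu's _reraise_as_library_errors as kombu.exceptions.OperationalError before oslo.messaging abandons the notification, i.e. the AMQP broker was unreachable. A minimal sketch that exercises the same classification; the broker URL is a placeholder assumption (the real transport_url lives in the service's configuration):

    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Hypothetical broker URL; substitute the transport_url from the config.
    BROKER_URL = "amqp://guest:guest@controller:5672//"

    def probe_broker(url: str) -> None:
        try:
            # ensure_connection() walks the same code path seen in the
            # traceback; kombu wraps socket-level errors as OperationalError.
            with Connection(url, connect_timeout=5) as conn:
                conn.ensure_connection(max_retries=1)
            print("broker reachable")
        except OperationalError as exc:
            # "[Errno 111] Connection refused" surfaces here, as in the log.
            print(f"broker unreachable: {exc}")

    probe_broker(BROKER_URL)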
Dec 5 05:10:14 localhost nova_compute[280228]: 2025-12-05 10:10:14.141 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Triggering sync for uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 5 05:10:14 localhost nova_compute[280228]: 2025-12-05 10:10:14.143 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:10:14 localhost nova_compute[280228]: 2025-12-05 10:10:14.143 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:10:14 localhost nova_compute[280228]: 2025-12-05 10:10:14.179 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:10:14 localhost podman[316981]: 2025-12-05 10:10:14.220844344 +0000 UTC m=+0.091238471 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 05:10:14 localhost podman[316981]: 2025-12-05 10:10:14.261060255 +0000 UTC m=+0.131454362 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 
'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 05:10:14 localhost systemd[1]: tmp-crun.NTK4h3.mount: Deactivated successfully. Dec 5 05:10:14 localhost podman[316982]: 2025-12-05 10:10:14.284330267 +0000 UTC m=+0.153370433 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:10:14 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
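[Editor's note] The recurring health_status / exec_died / "Deactivated successfully" triplets come from transient systemd units invoking `podman healthcheck run <container>`: exit status 0 records the container healthy and the unit then deactivates. A hedged sketch of driving the same check by hand (container name taken from the log; the test command itself is whatever the container's healthcheck config defines):

    import subprocess

    CONTAINER = "podman_exporter"  # name from the health_status event above

    def run_healthcheck(name: str) -> bool:
        # `podman healthcheck run` exits 0 when the configured test passes;
        # a non-zero exit marks the container unhealthy.
        result = subprocess.run(
            ["podman", "healthcheck", "run", name],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    print("healthy" if run_healthcheck(CONTAINER) else "unhealthy")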
Dec 5 05:10:14 localhost podman[316982]: 2025-12-05 10:10:14.318944285 +0000 UTC m=+0.187984461 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Dec 5 05:10:14 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
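[Editor's note] The "Matched UPDATE: PortBindingUpdatedEvent(...)" debug lines show the metadata agent consuming OVN Southbound Port_Binding updates through ovsdbapp's row-event machinery. A minimal sketch of that pattern, under the assumption that an IDL connection is wired up elsewhere (the registration and namespace-provisioning logic are elided; run() here just prints):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire on updates to the OVN Southbound Port_Binding table."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None --
            # the same tuple printed in the "Matched UPDATE" lines above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            # row.logical_port / row.datapath are the fields the agent logs
            # before deciding whether to (re)provision a metadata namespace.
            print(f"Port {row.logical_port} in datapath {row.datapath} updated")

The agent registers such events against its Southbound IDL connection; when a matched row changes, run() receives both the new row and the old column values, which is why the log prints a full row=... old=... pair.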
Dec 5 05:10:14 localhost podman[316983]: 2025-12-05 10:10:14.342682962 +0000 UTC m=+0.204737944 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:10:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d4ffb057-c627-44bd-a232-379e570c56f8", "format": "json"}]: dispatch Dec 5 05:10:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d4ffb057-c627-44bd-a232-379e570c56f8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:10:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d4ffb057-c627-44bd-a232-379e570c56f8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:10:14 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd4ffb057-c627-44bd-a232-379e570c56f8' of type subvolume Dec 5 05:10:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:10:14.370+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd4ffb057-c627-44bd-a232-379e570c56f8' of type subvolume Dec 5 05:10:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d4ffb057-c627-44bd-a232-379e570c56f8", "force": true, "format": "json"}]: dispatch Dec 5 05:10:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d4ffb057-c627-44bd-a232-379e570c56f8, vol_name:cephfs) < "" 
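[Editor's note] In the ceph-mgr exchange above, 'fs clone status' is rejected with EOPNOTSUPP (95) because the target is a plain subvolume rather than a clone, and the client immediately falls back to 'fs subvolume rm --force'. A hedged reproduction of that probe-then-remove sequence over the ceph CLI (volume and subvolume names copied from the audit log; --id openstack matches the logged entity):

    import json
    import subprocess

    VOL = "cephfs"
    NAME = "d4ffb057-c627-44bd-a232-379e570c56f8"  # subvolume from the log

    def ceph(*args: str) -> subprocess.CompletedProcess:
        return subprocess.run(
            ["ceph", "--id", "openstack", *args, "--format", "json"],
            capture_output=True, text=True,
        )

    status = ceph("fs", "clone", "status", VOL, NAME)
    if status.returncode == 0:
        print(json.loads(status.stdout)["status"]["state"])
    else:
        # EOPNOTSUPP here just means NAME is a regular subvolume, not a
        # clone, so remove it directly -- as the client in the log does.
        ceph("fs", "subvolume", "rm", VOL, NAME, "--force")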
Dec 5 05:10:14 localhost podman[316983]: 2025-12-05 10:10:14.382887501 +0000 UTC m=+0.244942433 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:10:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d4ffb057-c627-44bd-a232-379e570c56f8'' moved to trashcan Dec 5 05:10:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:10:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d4ffb057-c627-44bd-a232-379e570c56f8, vol_name:cephfs) < "" Dec 5 05:10:14 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:10:14 localhost nova_compute[280228]: 2025-12-05 10:10:14.509 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e142 do_prune osdmap full prune enabled Dec 5 05:10:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e143 e143: 6 total, 6 up, 6 in Dec 5 05:10:14 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e143: 6 total, 6 up, 6 in Dec 5 05:10:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:10:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:10:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:10:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:10:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 5 05:10:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:10:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 895 B/s rd, 8.5 KiB/s wr, 4 op/s Dec 5 05:10:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:10:16 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:16.508 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:10:16 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:16.510 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated#033[00m Dec 5 05:10:16 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:16.512 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:10:16 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:16.513 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[07ed252e-14eb-4126-88af-49da29c74a9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:10:16 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:16.984 2 INFO neutron.agent.securitygroups_rpc [None 
req-8748f31b-df2d-474f-9884-ef12f2b800a6 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']#033[00m Dec 5 05:10:17 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:17.112 261902 INFO neutron.agent.linux.ip_lib [None req-46e67ac4-4b78-4518-ac13-d64aa9a8d989 - - - - - -] Device tap0ce031f2-5c cannot be used as it has no MAC address#033[00m Dec 5 05:10:17 localhost nova_compute[280228]: 2025-12-05 10:10:17.142 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:17 localhost kernel: device tap0ce031f2-5c entered promiscuous mode Dec 5 05:10:17 localhost nova_compute[280228]: 2025-12-05 10:10:17.151 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:17 localhost ovn_controller[153000]: 2025-12-05T10:10:17Z|00256|binding|INFO|Claiming lport 0ce031f2-5c59-40bd-85a6-36a177744082 for this chassis. Dec 5 05:10:17 localhost ovn_controller[153000]: 2025-12-05T10:10:17Z|00257|binding|INFO|0ce031f2-5c59-40bd-85a6-36a177744082: Claiming unknown Dec 5 05:10:17 localhost NetworkManager[5960]: [1764929417.1549] manager: (tap0ce031f2-5c): new Generic device (/org/freedesktop/NetworkManager/Devices/44) Dec 5 05:10:17 localhost systemd-udevd[317050]: Network interface NamePolicy= disabled on kernel command line. Dec 5 05:10:17 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:17.168 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e5f7489-728a-4560-982e-a2e03b2a2ad6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0ce031f2-5c59-40bd-85a6-36a177744082) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:10:17 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:17.171 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 0ce031f2-5c59-40bd-85a6-36a177744082 in datapath 13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1 bound to our chassis#033[00m Dec 5 05:10:17 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:17.174 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port a4b12eca-f300-4d2d-9f17-449d34fe6dee IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 5 
05:10:17 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:17.174 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:10:17 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:17.175 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[5b89f4b7-d5e1-4fcc-bdde-e03f40408fcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:10:17 localhost ovn_controller[153000]: 2025-12-05T10:10:17Z|00258|binding|INFO|Setting lport 0ce031f2-5c59-40bd-85a6-36a177744082 ovn-installed in OVS Dec 5 05:10:17 localhost ovn_controller[153000]: 2025-12-05T10:10:17Z|00259|binding|INFO|Setting lport 0ce031f2-5c59-40bd-85a6-36a177744082 up in Southbound Dec 5 05:10:17 localhost nova_compute[280228]: 2025-12-05 10:10:17.222 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:17 localhost nova_compute[280228]: 2025-12-05 10:10:17.249 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v311: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 30 op/s Dec 5 05:10:17 localhost nova_compute[280228]: 2025-12-05 10:10:17.431 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:17 localhost nova_compute[280228]: 2025-12-05 10:10:17.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:10:17 localhost nova_compute[280228]: 2025-12-05 10:10:17.531 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:10:17 localhost nova_compute[280228]: 2025-12-05 10:10:17.532 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:10:17 localhost nova_compute[280228]: 2025-12-05 10:10:17.532 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:10:17 localhost nova_compute[280228]: 2025-12-05 10:10:17.533 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:10:17 localhost nova_compute[280228]: 2025-12-05 10:10:17.533 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:10:17 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8943cd11-7849-44cf-bf5f-d6cc1442fa22", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:10:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8943cd11-7849-44cf-bf5f-d6cc1442fa22, vol_name:cephfs) < "" Dec 5 05:10:17 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8943cd11-7849-44cf-bf5f-d6cc1442fa22/.meta.tmp' Dec 5 05:10:17 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8943cd11-7849-44cf-bf5f-d6cc1442fa22/.meta.tmp' to config b'/volumes/_nogroup/8943cd11-7849-44cf-bf5f-d6cc1442fa22/.meta' Dec 5 05:10:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8943cd11-7849-44cf-bf5f-d6cc1442fa22, vol_name:cephfs) < "" Dec 5 05:10:17 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8943cd11-7849-44cf-bf5f-d6cc1442fa22", "format": "json"}]: dispatch Dec 5 05:10:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8943cd11-7849-44cf-bf5f-d6cc1442fa22, vol_name:cephfs) < "" Dec 5 05:10:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8943cd11-7849-44cf-bf5f-d6cc1442fa22, vol_name:cephfs) < "" Dec 5 05:10:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:10:17 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:10:17 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:17.783 2 INFO neutron.agent.securitygroups_rpc [None req-b54cea94-c31f-4e99-a73a-744908bef462 9ad212ec28c94cd483f6945e5dc23284 b0ebdf5f32df4c2586aaaef64b02a0b5 - - default default] Security group member updated ['44ae700d-5ab8-470d-aa76-cd2d76fd251c']#033[00m Dec 5 05:10:17 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:17.935 2 INFO neutron.agent.securitygroups_rpc [None req-5e3bb1c5-2806-4588-aff7-12e0b9f5ff87 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:10:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command 
mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:10:17 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2413831405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:10:17 localhost nova_compute[280228]: 2025-12-05 10:10:17.977 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:10:18 localhost podman[317122]: Dec 5 05:10:18 localhost podman[317122]: 2025-12-05 10:10:18.326847594 +0000 UTC m=+0.100483265 container create 4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 05:10:18 localhost systemd[1]: Started libpod-conmon-4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97.scope. Dec 5 05:10:18 localhost podman[317122]: 2025-12-05 10:10:18.284083576 +0000 UTC m=+0.057719267 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:10:18 localhost systemd[1]: Started libcrun container. Dec 5 05:10:18 localhost nova_compute[280228]: 2025-12-05 10:10:18.413 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:10:18 localhost nova_compute[280228]: 2025-12-05 10:10:18.414 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:10:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345c2460686b931198f51d9bc3cbf526f817bde614c3ee842aa8abfed60f0dea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:10:18 localhost podman[317122]: 2025-12-05 10:10:18.437326984 +0000 UTC m=+0.210962655 container init 4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 5 05:10:18 localhost podman[317122]: 2025-12-05 10:10:18.449686391 +0000 UTC m=+0.223322052 container start 4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:10:18 localhost dnsmasq[317140]: started, version 2.85 cachesize 150 Dec 5 05:10:18 localhost dnsmasq[317140]: DNS service limited to local subnets Dec 5 05:10:18 localhost dnsmasq[317140]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:10:18 localhost dnsmasq[317140]: warning: no upstream servers configured Dec 5 05:10:18 localhost dnsmasq-dhcp[317140]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 5 05:10:18 localhost dnsmasq[317140]: read /var/lib/neutron/dhcp/13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1/addn_hosts - 0 addresses Dec 5 05:10:18 localhost dnsmasq-dhcp[317140]: read /var/lib/neutron/dhcp/13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1/host Dec 5 05:10:18 localhost dnsmasq-dhcp[317140]: read /var/lib/neutron/dhcp/13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1/opts Dec 5 05:10:18 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:18.524 261902 INFO neutron.agent.dhcp.agent [None req-9fb04d10-d80d-4f47-b823-1bb0345b05ec - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:10:16Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=23ee4333-866b-4430-8ca2-afbc11c7ed07, ip_allocation=immediate, mac_address=fa:16:3e:c8:13:f0, name=tempest-PortsTestJSON-21956600, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:10:12Z, description=, dns_domain=, id=13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-2056408350, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59255, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1988, status=ACTIVE, subnets=['5cc3ec10-aba1-4930-b740-72ece921bdb4'], tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:10:14Z, vlan_transparent=None, network_id=13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['6de03d90-430f-407b-8d0d-b1ca66c7d4e8'], standard_attr_id=2004, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:10:16Z on network 13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1#033[00m Dec 5 05:10:18 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:18.626 261902 INFO neutron.agent.dhcp.agent [None req-69153027-9cf2-45e1-80d7-8290f43aeeb7 - - - - - -] DHCP configuration for ports {'8349185b-e119-43d3-9648-297823530902'} is completed#033[00m Dec 5 05:10:18 localhost nova_compute[280228]: 2025-12-05 10:10:18.631 280232 WARNING nova.virt.libvirt.driver [None 
req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:10:18 localhost nova_compute[280228]: 2025-12-05 10:10:18.633 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11175MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:10:18 localhost nova_compute[280228]: 2025-12-05 10:10:18.634 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:10:18 localhost nova_compute[280228]: 2025-12-05 10:10:18.634 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:10:18 localhost nova_compute[280228]: 2025-12-05 10:10:18.708 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': 
{'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:10:18 localhost nova_compute[280228]: 2025-12-05 10:10:18.709 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:10:18 localhost nova_compute[280228]: 2025-12-05 10:10:18.709 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:10:18 localhost dnsmasq[317140]: read /var/lib/neutron/dhcp/13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1/addn_hosts - 1 addresses Dec 5 05:10:18 localhost dnsmasq-dhcp[317140]: read /var/lib/neutron/dhcp/13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1/host Dec 5 05:10:18 localhost dnsmasq-dhcp[317140]: read /var/lib/neutron/dhcp/13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1/opts Dec 5 05:10:18 localhost podman[317156]: 2025-12-05 10:10:18.735328018 +0000 UTC m=+0.045872394 container kill 4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125) Dec 5 05:10:18 localhost nova_compute[280228]: 2025-12-05 10:10:18.753 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:10:19 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:18.999 261902 INFO neutron.agent.dhcp.agent [None req-8f22e512-5870-4ab6-b99b-4e8db6b6d1b8 - - - - - -] DHCP configuration for ports {'23ee4333-866b-4430-8ca2-afbc11c7ed07'} is completed#033[00m Dec 5 05:10:19 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:19.213 2 INFO neutron.agent.securitygroups_rpc [None req-e87faffe-ea7d-4797-b288-333e1167da33 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:10:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:10:19 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3886367906' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:10:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 5.2 KiB/s wr, 26 op/s Dec 5 05:10:19 localhost nova_compute[280228]: 2025-12-05 10:10:19.268 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:10:19 localhost nova_compute[280228]: 2025-12-05 10:10:19.273 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:10:19 localhost nova_compute[280228]: 2025-12-05 10:10:19.292 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:10:19 localhost nova_compute[280228]: 2025-12-05 10:10:19.295 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:10:19 localhost nova_compute[280228]: 2025-12-05 10:10:19.295 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:10:19 localhost nova_compute[280228]: 2025-12-05 10:10:19.513 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:19 localhost podman[239519]: time="2025-12-05T10:10:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:10:19 localhost podman[239519]: @ - - [05/Dec/2025:10:10:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157935 "" "Go-http-client/1.1" Dec 5 05:10:19 localhost podman[239519]: @ - - [05/Dec/2025:10:10:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19722 "" "Go-http-client/1.1" Dec 5 05:10:20 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:20.453 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, 
binding:vnic_type=normal, created_at=2025-12-05T10:10:16Z, description=, device_id=67d87ad5-d34e-4955-9a2b-0a587f33bc37, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=23ee4333-866b-4430-8ca2-afbc11c7ed07, ip_allocation=immediate, mac_address=fa:16:3e:c8:13:f0, name=tempest-PortsTestJSON-21956600, network_id=13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, port_security_enabled=True, project_id=8eebb9e73adb4a259afe086ebdfad16e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['6de03d90-430f-407b-8d0d-b1ca66c7d4e8'], standard_attr_id=2004, status=DOWN, tags=[], tenant_id=8eebb9e73adb4a259afe086ebdfad16e, updated_at=2025-12-05T10:10:17Z on network 13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1#033[00m Dec 5 05:10:20 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:20.571 2 INFO neutron.agent.securitygroups_rpc [None req-494eb1d6-141d-4851-b653-0372decd0bc2 71e3fab65ffb4bc788d27178ac1efd8e 285f12c8420045f3a7f55b60a915ce1e - - default default] Security group member updated ['a333efca-ba92-4e4d-867f-f07512d20795']#033[00m Dec 5 05:10:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:10:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e143 do_prune osdmap full prune enabled Dec 5 05:10:20 localhost podman[317218]: 2025-12-05 10:10:20.702716964 +0000 UTC m=+0.069943290 container kill 4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:10:20 localhost dnsmasq[317140]: read /var/lib/neutron/dhcp/13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1/addn_hosts - 1 addresses Dec 5 05:10:20 localhost dnsmasq-dhcp[317140]: read /var/lib/neutron/dhcp/13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1/host Dec 5 05:10:20 localhost dnsmasq-dhcp[317140]: read /var/lib/neutron/dhcp/13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1/opts Dec 5 05:10:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e144 e144: 6 total, 6 up, 6 in Dec 5 05:10:20 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e144: 6 total, 6 up, 6 in Dec 5 05:10:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:10:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:10:20 localhost systemd[1]: tmp-crun.qqptzO.mount: Deactivated successfully. 
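[Editor's note] Each "container kill" event immediately followed by dnsmasq re-reading addn_hosts/host/opts is the DHCP agent signalling the dnsmasq container after rewriting those files: dnsmasq reloads its hosts and DHCP options files on SIGHUP without restarting or dropping leases. A sketch of the same nudge (container name from the log; the agent routes this through its own wrapper scripts rather than calling podman directly):

    import subprocess

    CONTAINER = "neutron-dnsmasq-qdhcp-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1"

    # SIGHUP makes dnsmasq re-read /etc/hosts, addn-hosts, dhcp-hostsfile
    # and dhcp-optsfile -- hence the "read .../addn_hosts - 1 addresses"
    # lines that follow each kill event above.
    subprocess.run(["podman", "kill", "--signal", "HUP", CONTAINER], check=True)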
Dec 5 05:10:20 localhost podman[317232]: 2025-12-05 10:10:20.810065557 +0000 UTC m=+0.080368548 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 5 05:10:20 localhost podman[317232]: 2025-12-05 10:10:20.817856616 +0000 UTC m=+0.088159627 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 05:10:20 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 05:10:20 localhost podman[317231]: 2025-12-05 10:10:20.907682323 +0000 UTC m=+0.179167751 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:10:20 localhost podman[317231]: 2025-12-05 10:10:20.966005547 +0000 UTC m=+0.237490965 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 5 05:10:20 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:10:20 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:20.991 261902 INFO neutron.agent.dhcp.agent [None req-3e4125d6-6292-4f86-9f2d-1c1c1fdd0945 - - - - - -] DHCP configuration for ports {'23ee4333-866b-4430-8ca2-afbc11c7ed07'} is completed
Dec 5 05:10:21 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8943cd11-7849-44cf-bf5f-d6cc1442fa22", "format": "json"}]: dispatch
Dec 5 05:10:21 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8943cd11-7849-44cf-bf5f-d6cc1442fa22, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:10:21 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8943cd11-7849-44cf-bf5f-d6cc1442fa22, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:10:21 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8943cd11-7849-44cf-bf5f-d6cc1442fa22' of type subvolume
Dec 5 05:10:21 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:10:21.102+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8943cd11-7849-44cf-bf5f-d6cc1442fa22' of type subvolume
Dec 5 05:10:21 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8943cd11-7849-44cf-bf5f-d6cc1442fa22", "force": true, "format": "json"}]: dispatch
Dec 5 05:10:21 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8943cd11-7849-44cf-bf5f-d6cc1442fa22, vol_name:cephfs) < ""
Dec 5 05:10:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8943cd11-7849-44cf-bf5f-d6cc1442fa22'' moved to trashcan
Dec 5 05:10:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:10:21 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8943cd11-7849-44cf-bf5f-d6cc1442fa22, vol_name:cephfs) < ""
Dec 5 05:10:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 14 KiB/s wr, 86 op/s
Dec 5 05:10:21 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:21.371 2 INFO neutron.agent.securitygroups_rpc [None req-494eb1d6-141d-4851-b653-0372decd0bc2 71e3fab65ffb4bc788d27178ac1efd8e 285f12c8420045f3a7f55b60a915ce1e - - default default] Security group member updated ['a333efca-ba92-4e4d-867f-f07512d20795']
Dec 5 05:10:21 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:21.683 2 INFO neutron.agent.securitygroups_rpc [None req-c2056dc6-19e5-4308-8fd7-399bc3f94206 9ad212ec28c94cd483f6945e5dc23284 b0ebdf5f32df4c2586aaaef64b02a0b5 - - default default] Security group member updated ['44ae700d-5ab8-470d-aa76-cd2d76fd251c']
Dec 5 05:10:22 localhost nova_compute[280228]: 2025-12-05 10:10:22.296 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:10:22 localhost nova_compute[280228]: 2025-12-05 10:10:22.296 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 5 05:10:22 localhost nova_compute[280228]: 2025-12-05 10:10:22.297 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 5 05:10:22 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:22.310 2 INFO neutron.agent.securitygroups_rpc [None req-ce37c36a-8998-41ea-86f9-f18c08b96615 71e3fab65ffb4bc788d27178ac1efd8e 285f12c8420045f3a7f55b60a915ce1e - - default default] Security group member updated ['a333efca-ba92-4e4d-867f-f07512d20795']
Dec 5 05:10:22 localhost nova_compute[280228]: 2025-12-05 10:10:22.435 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:22 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:22.529 2 INFO neutron.agent.securitygroups_rpc [None req-63c416d3-65f7-42d7-b358-fdac5e40015f 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']
Dec 5 05:10:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:22.718 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:10:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:22.719 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated
Dec 5 05:10:22 localhost nova_compute[280228]: 2025-12-05 10:10:22.721 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 5 05:10:22 localhost nova_compute[280228]: 2025-12-05 10:10:22.722 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 5 05:10:22 localhost nova_compute[280228]: 2025-12-05 10:10:22.722 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 5 05:10:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:22.722 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:10:22 localhost nova_compute[280228]: 2025-12-05 10:10:22.723 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 5 05:10:22 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:22.723 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[d52e70f3-c2dd-44fe-badd-34e812196760]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:10:22 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:22.786 2 INFO neutron.agent.securitygroups_rpc [None req-abc2a8d3-3f72-47ac-804d-613bd305ad59 71e3fab65ffb4bc788d27178ac1efd8e 285f12c8420045f3a7f55b60a915ce1e - - default default] Security group member updated ['a333efca-ba92-4e4d-867f-f07512d20795']
Dec 5 05:10:22 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:22.814 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:10:22 localhost dnsmasq[317140]: read /var/lib/neutron/dhcp/13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1/addn_hosts - 0 addresses
Dec 5 05:10:22 localhost dnsmasq-dhcp[317140]: read /var/lib/neutron/dhcp/13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1/host
Dec 5 05:10:22 localhost dnsmasq-dhcp[317140]: read /var/lib/neutron/dhcp/13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1/opts
Dec 5 05:10:22 localhost podman[317302]: 2025-12-05 10:10:22.856757849 +0000 UTC m=+0.068031552 container kill 4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 5 05:10:23 localhost ovn_controller[153000]: 2025-12-05T10:10:23Z|00260|binding|INFO|Releasing lport 0ce031f2-5c59-40bd-85a6-36a177744082 from this chassis (sb_readonly=0)
Dec 5 05:10:23 localhost ovn_controller[153000]: 2025-12-05T10:10:23Z|00261|binding|INFO|Setting lport 0ce031f2-5c59-40bd-85a6-36a177744082 down in Southbound
Dec 5 05:10:23 localhost kernel: device tap0ce031f2-5c left promiscuous mode
Dec 5 05:10:23 localhost nova_compute[280228]: 2025-12-05 10:10:23.003 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:23.008 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e5f7489-728a-4560-982e-a2e03b2a2ad6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0ce031f2-5c59-40bd-85a6-36a177744082) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:10:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:23.011 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 0ce031f2-5c59-40bd-85a6-36a177744082 in datapath 13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1 unbound from our chassis
Dec 5 05:10:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:23.014 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:10:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:23.015 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8c84a1-0883-4ba3-a399-442794a99baf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:10:23 localhost nova_compute[280228]: 2025-12-05 10:10:23.029 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 13 KiB/s wr, 86 op/s
Dec 5 05:10:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:24.394 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:10:24 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:24.396 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 5 05:10:24 localhost nova_compute[280228]: 2025-12-05 10:10:24.398 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:24 localhost nova_compute[280228]: 2025-12-05 10:10:24.515 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:24 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "eda6e56d-69d6-4ce5-9979-46195e7612b1", "format": "json"}]: dispatch
Dec 5 05:10:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eda6e56d-69d6-4ce5-9979-46195e7612b1, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:10:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eda6e56d-69d6-4ce5-9979-46195e7612b1, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:10:24 localhost nova_compute[280228]: 2025-12-05 10:10:24.698 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 5 05:10:24 localhost nova_compute[280228]: 2025-12-05 10:10:24.718 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 5 05:10:24 localhost nova_compute[280228]: 2025-12-05 10:10:24.719 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 5 05:10:24 localhost nova_compute[280228]: 2025-12-05 10:10:24.720 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:10:24 localhost nova_compute[280228]: 2025-12-05 10:10:24.720 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:10:24 localhost nova_compute[280228]: 2025-12-05 10:10:24.721 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:10:24 localhost nova_compute[280228]: 2025-12-05 10:10:24.721 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 5 05:10:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 11 KiB/s wr, 73 op/s
Dec 5 05:10:25 localhost nova_compute[280228]: 2025-12-05 10:10:25.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:10:25 localhost nova_compute[280228]: 2025-12-05 10:10:25.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:10:25 localhost dnsmasq[317140]: exiting on receipt of SIGTERM
Dec 5 05:10:25 localhost podman[317340]: 2025-12-05 10:10:25.550351076 +0000 UTC m=+0.056909433 container kill 4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 05:10:25 localhost systemd[1]: libpod-4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97.scope: Deactivated successfully.
Dec 5 05:10:25 localhost podman[317354]: 2025-12-05 10:10:25.620717358 +0000 UTC m=+0.053338982 container died 4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:10:25 localhost systemd[1]: tmp-crun.nbVo9r.mount: Deactivated successfully.
Dec 5 05:10:25 localhost podman[317354]: 2025-12-05 10:10:25.656504642 +0000 UTC m=+0.089126226 container cleanup 4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:10:25 localhost systemd[1]: libpod-conmon-4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97.scope: Deactivated successfully.
Dec 5 05:10:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:10:25 localhost podman[317355]: 2025-12-05 10:10:25.704532322 +0000 UTC m=+0.133924048 container remove 4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13e8b1c4-cc48-48fe-b9d6-0aeae59f91c1, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 5 05:10:25 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:25.732 261902 INFO neutron.agent.dhcp.agent [None req-1f9c9dc1-202c-4d34-be5c-b283d1fb507a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:10:25 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:25.938 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:10:26 localhost ovn_controller[153000]: 2025-12-05T10:10:26Z|00262|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:10:26 localhost nova_compute[280228]: 2025-12-05 10:10:26.201 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:26 localhost systemd[1]: var-lib-containers-storage-overlay-345c2460686b931198f51d9bc3cbf526f817bde614c3ee842aa8abfed60f0dea-merged.mount: Deactivated successfully.
Dec 5 05:10:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b91ddbf5440f62d5ad363f318754dae75550ae3a061d4d1e0576c2a2d16ec97-userdata-shm.mount: Deactivated successfully.
Dec 5 05:10:26 localhost systemd[1]: run-netns-qdhcp\x2d13e8b1c4\x2dcc48\x2d48fe\x2db9d6\x2d0aeae59f91c1.mount: Deactivated successfully.
Dec 5 05:10:27 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:27.016 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:10:27 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:27.019 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated
Dec 5 05:10:27 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:27.023 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:10:27 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:27.025 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[485c96d3-ff14-4dac-870c-1f634027c575]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:10:27 localhost openstack_network_exporter[241668]: ERROR 10:10:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:10:27 localhost openstack_network_exporter[241668]: ERROR 10:10:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:10:27 localhost openstack_network_exporter[241668]:
Dec 5 05:10:27 localhost openstack_network_exporter[241668]: ERROR 10:10:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:10:27 localhost openstack_network_exporter[241668]: ERROR 10:10:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:10:27 localhost openstack_network_exporter[241668]: ERROR 10:10:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:10:27 localhost openstack_network_exporter[241668]:
Dec 5 05:10:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v317: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 9.5 KiB/s wr, 53 op/s
Dec 5 05:10:27 localhost nova_compute[280228]: 2025-12-05 10:10:27.479 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:27 localhost nova_compute[280228]: 2025-12-05 10:10:27.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:10:27 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:27.877 2 INFO neutron.agent.securitygroups_rpc [None req-16d6d17d-d3bb-489f-9b72-a5d5257b3158 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 5 05:10:28 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "eda6e56d-69d6-4ce5-9979-46195e7612b1_65643c99-e4ca-4ae3-b879-234a19b593b8", "force": true, "format": "json"}]: dispatch
Dec 5 05:10:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eda6e56d-69d6-4ce5-9979-46195e7612b1_65643c99-e4ca-4ae3-b879-234a19b593b8, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:10:28 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp'
Dec 5 05:10:28 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta'
Dec 5 05:10:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eda6e56d-69d6-4ce5-9979-46195e7612b1_65643c99-e4ca-4ae3-b879-234a19b593b8, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:10:28 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "eda6e56d-69d6-4ce5-9979-46195e7612b1", "force": true, "format": "json"}]: dispatch
Dec 5 05:10:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eda6e56d-69d6-4ce5-9979-46195e7612b1, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:10:28 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp'
Dec 5 05:10:28 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta'
Dec 5 05:10:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eda6e56d-69d6-4ce5-9979-46195e7612b1, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:10:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:28.400 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 5 05:10:28 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:28.937 2 INFO neutron.agent.securitygroups_rpc [None req-0c16f2e5-5e31-4773-aa99-a445d9488a1a 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 5 05:10:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 9.5 KiB/s wr, 53 op/s
Dec 5 05:10:29 localhost nova_compute[280228]: 2025-12-05 10:10:29.518 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:10:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:10:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:10:31 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:31.041 2 INFO neutron.agent.securitygroups_rpc [None req-4ed9fe1a-c7d3-4fb9-9cde-3c0633a09029 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['15591b44-9c5d-4fa1-bdbd-86617728aba4']
Dec 5 05:10:31 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:31.048 261902 INFO neutron.agent.linux.ip_lib [None req-de2e908d-baad-4ba6-8091-3e4c50daeb8e - - - - - -] Device tap4fd4d903-00 cannot be used as it has no MAC address
Dec 5 05:10:31 localhost nova_compute[280228]: 2025-12-05 10:10:31.064 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:31 localhost systemd[1]: tmp-crun.ET9mEs.mount: Deactivated successfully.
Dec 5 05:10:31 localhost kernel: device tap4fd4d903-00 entered promiscuous mode
Dec 5 05:10:31 localhost NetworkManager[5960]: [1764929431.0783] manager: (tap4fd4d903-00): new Generic device (/org/freedesktop/NetworkManager/Devices/45)
Dec 5 05:10:31 localhost nova_compute[280228]: 2025-12-05 10:10:31.078 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:31 localhost ovn_controller[153000]: 2025-12-05T10:10:31Z|00263|binding|INFO|Claiming lport 4fd4d903-0049-4508-aaed-c57323f98f6a for this chassis.
Dec 5 05:10:31 localhost ovn_controller[153000]: 2025-12-05T10:10:31Z|00264|binding|INFO|4fd4d903-0049-4508-aaed-c57323f98f6a: Claiming unknown
Dec 5 05:10:31 localhost systemd-udevd[317418]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:10:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:31.087 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-39d551b3-1ec7-4de1-aa6d-edffdb207923', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39d551b3-1ec7-4de1-aa6d-edffdb207923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '285f12c8420045f3a7f55b60a915ce1e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbc75a5a-fd2e-4108-93af-f304f2f34345, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4fd4d903-0049-4508-aaed-c57323f98f6a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:10:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:31.088 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 4fd4d903-0049-4508-aaed-c57323f98f6a in datapath 39d551b3-1ec7-4de1-aa6d-edffdb207923 bound to our chassis
Dec 5 05:10:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:31.090 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port c0af1bac-3e74-45ad-a6ba-d12c4c34ecc8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 5 05:10:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:31.090 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39d551b3-1ec7-4de1-aa6d-edffdb207923, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:10:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:31.090 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[622d4ab5-ddf7-4ea7-9fdd-606346e8822b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:10:31 localhost podman[317382]: 2025-12-05 10:10:31.09116272 +0000 UTC m=+0.099942247 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 5 05:10:31 localhost ovn_controller[153000]: 2025-12-05T10:10:31Z|00265|binding|INFO|Setting lport 4fd4d903-0049-4508-aaed-c57323f98f6a ovn-installed in OVS
Dec 5 05:10:31 localhost ovn_controller[153000]: 2025-12-05T10:10:31Z|00266|binding|INFO|Setting lport 4fd4d903-0049-4508-aaed-c57323f98f6a up in Southbound
Dec 5 05:10:31 localhost journal[228791]: ethtool ioctl error on tap4fd4d903-00: No such device
Dec 5 05:10:31 localhost nova_compute[280228]: 2025-12-05 10:10:31.117 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:31 localhost nova_compute[280228]: 2025-12-05 10:10:31.119 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:31 localhost journal[228791]: ethtool ioctl error on tap4fd4d903-00: No such device
Dec 5 05:10:31 localhost journal[228791]: ethtool ioctl error on tap4fd4d903-00: No such device
Dec 5 05:10:31 localhost podman[317382]: 2025-12-05 10:10:31.131590927 +0000 UTC m=+0.140370424 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:10:31 localhost journal[228791]: ethtool ioctl error on tap4fd4d903-00: No such device
Dec 5 05:10:31 localhost systemd[1]: tmp-crun.4uvYXv.mount: Deactivated successfully.
Dec 5 05:10:31 localhost journal[228791]: ethtool ioctl error on tap4fd4d903-00: No such device
Dec 5 05:10:31 localhost podman[317383]: 2025-12-05 10:10:31.141315904 +0000 UTC m=+0.151156904 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git)
Dec 5 05:10:31 localhost journal[228791]: ethtool ioctl error on tap4fd4d903-00: No such device
Dec 5 05:10:31 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:10:31 localhost journal[228791]: ethtool ioctl error on tap4fd4d903-00: No such device
Dec 5 05:10:31 localhost podman[317383]: 2025-12-05 10:10:31.1483695 +0000 UTC m=+0.158210490 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, version=9.6, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=)
Dec 5 05:10:31 localhost nova_compute[280228]: 2025-12-05 10:10:31.152 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:31 localhost journal[228791]: ethtool ioctl error on tap4fd4d903-00: No such device
Dec 5 05:10:31 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:10:31 localhost nova_compute[280228]: 2025-12-05 10:10:31.173 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v319: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 15 KiB/s wr, 53 op/s
Dec 5 05:10:32 localhost podman[317498]:
Dec 5 05:10:32 localhost podman[317498]: 2025-12-05 10:10:32.126916241 +0000 UTC m=+0.092214432 container create cb91a41bc2b0c4d016f16d0c6e5edcf7d35023506738fd38b6fa820f5249fc12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39d551b3-1ec7-4de1-aa6d-edffdb207923, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 5 05:10:32 localhost systemd[1]: Started libpod-conmon-cb91a41bc2b0c4d016f16d0c6e5edcf7d35023506738fd38b6fa820f5249fc12.scope.
Dec 5 05:10:32 localhost podman[317498]: 2025-12-05 10:10:32.082193093 +0000 UTC m=+0.047491314 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:10:32 localhost systemd[1]: Started libcrun container.
Dec 5 05:10:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3257b6e8ed2010378069d0d1fbc20b0029ea101343a125647b123ff3e4c99976/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:10:32 localhost podman[317498]: 2025-12-05 10:10:32.1994845 +0000 UTC m=+0.164782691 container init cb91a41bc2b0c4d016f16d0c6e5edcf7d35023506738fd38b6fa820f5249fc12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39d551b3-1ec7-4de1-aa6d-edffdb207923, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:10:32 localhost podman[317498]: 2025-12-05 10:10:32.215210852 +0000 UTC m=+0.180509043 container start cb91a41bc2b0c4d016f16d0c6e5edcf7d35023506738fd38b6fa820f5249fc12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39d551b3-1ec7-4de1-aa6d-edffdb207923, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 5 05:10:32 localhost dnsmasq[317518]: started, version 2.85 cachesize 150 Dec 5 05:10:32 localhost dnsmasq[317518]: DNS service limited to local subnets Dec 5 05:10:32 localhost dnsmasq[317518]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:10:32 localhost dnsmasq[317518]: warning: no upstream servers configured Dec 5 05:10:32 localhost dnsmasq-dhcp[317518]: DHCP, static leases only on 10.100.0.16, lease time 1d Dec 5 05:10:32 localhost dnsmasq[317518]: read /var/lib/neutron/dhcp/39d551b3-1ec7-4de1-aa6d-edffdb207923/addn_hosts - 0 addresses Dec 5 05:10:32 localhost dnsmasq-dhcp[317518]: read /var/lib/neutron/dhcp/39d551b3-1ec7-4de1-aa6d-edffdb207923/host Dec 5 05:10:32 localhost dnsmasq-dhcp[317518]: read /var/lib/neutron/dhcp/39d551b3-1ec7-4de1-aa6d-edffdb207923/opts Dec 5 05:10:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:32.479 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], 
additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:10:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:32.481 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated#033[00m Dec 5 05:10:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:32.485 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:10:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:32.486 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[9837a35c-6fc5-47c7-8c53-e7c5bf6752cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:10:32 localhost nova_compute[280228]: 2025-12-05 10:10:32.529 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:32 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:32.558 261902 INFO neutron.agent.dhcp.agent [None req-a5cfe8d0-1a74-4a0a-bbf4-62c1f98782c1 - - - - - -] DHCP configuration for ports {'4528a925-9c34-4976-a1ec-c7486a4ab923'} is completed#033[00m Dec 5 05:10:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 2.1 KiB/s rd, 7.2 KiB/s wr, 5 op/s Dec 5 05:10:33 localhost podman[317536]: 2025-12-05 10:10:33.29733599 +0000 UTC m=+0.061201733 container kill cb91a41bc2b0c4d016f16d0c6e5edcf7d35023506738fd38b6fa820f5249fc12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39d551b3-1ec7-4de1-aa6d-edffdb207923, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 05:10:33 localhost dnsmasq[317518]: exiting on receipt of SIGTERM Dec 5 05:10:33 localhost systemd[1]: libpod-cb91a41bc2b0c4d016f16d0c6e5edcf7d35023506738fd38b6fa820f5249fc12.scope: Deactivated successfully. 
Dec 5 05:10:33 localhost podman[317554]: 2025-12-05 10:10:33.370806177 +0000 UTC m=+0.045619556 container died cb91a41bc2b0c4d016f16d0c6e5edcf7d35023506738fd38b6fa820f5249fc12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39d551b3-1ec7-4de1-aa6d-edffdb207923, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 05:10:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb91a41bc2b0c4d016f16d0c6e5edcf7d35023506738fd38b6fa820f5249fc12-userdata-shm.mount: Deactivated successfully.
Dec 5 05:10:33 localhost podman[317554]: 2025-12-05 10:10:33.416127893 +0000 UTC m=+0.090941232 container remove cb91a41bc2b0c4d016f16d0c6e5edcf7d35023506738fd38b6fa820f5249fc12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39d551b3-1ec7-4de1-aa6d-edffdb207923, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 5 05:10:33 localhost ovn_controller[153000]: 2025-12-05T10:10:33Z|00267|binding|INFO|Releasing lport 4fd4d903-0049-4508-aaed-c57323f98f6a from this chassis (sb_readonly=0)
Dec 5 05:10:33 localhost kernel: device tap4fd4d903-00 left promiscuous mode
Dec 5 05:10:33 localhost ovn_controller[153000]: 2025-12-05T10:10:33Z|00268|binding|INFO|Setting lport 4fd4d903-0049-4508-aaed-c57323f98f6a down in Southbound
Dec 5 05:10:33 localhost nova_compute[280228]: 2025-12-05 10:10:33.429 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:33 localhost systemd[1]: libpod-conmon-cb91a41bc2b0c4d016f16d0c6e5edcf7d35023506738fd38b6fa820f5249fc12.scope: Deactivated successfully.
Dec 5 05:10:33 localhost nova_compute[280228]: 2025-12-05 10:10:33.445 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:33 localhost systemd[1]: var-lib-containers-storage-overlay-3257b6e8ed2010378069d0d1fbc20b0029ea101343a125647b123ff3e4c99976-merged.mount: Deactivated successfully.
Dec 5 05:10:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:34.069 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-39d551b3-1ec7-4de1-aa6d-edffdb207923', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39d551b3-1ec7-4de1-aa6d-edffdb207923', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '285f12c8420045f3a7f55b60a915ce1e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fbc75a5a-fd2e-4108-93af-f304f2f34345, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4fd4d903-0049-4508-aaed-c57323f98f6a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:10:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:34.072 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 4fd4d903-0049-4508-aaed-c57323f98f6a in datapath 39d551b3-1ec7-4de1-aa6d-edffdb207923 unbound from our chassis
Dec 5 05:10:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:34.075 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39d551b3-1ec7-4de1-aa6d-edffdb207923, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:10:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:34.076 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[3cef76c8-5534-44ad-b6cd-f4c6670b344f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:10:34 localhost nova_compute[280228]: 2025-12-05 10:10:34.522 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 7.0 KiB/s wr, 3 op/s
Dec 5 05:10:35 localhost systemd[1]: run-netns-qdhcp\x2d39d551b3\x2d1ec7\x2d4de1\x2daa6d\x2dedffdb207923.mount: Deactivated successfully.
Dec 5 05:10:35 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:35.553 261902 INFO neutron.agent.dhcp.agent [None req-fcef6932-7b81-4f5d-a129-c43d0b36b320 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:10:35 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:35.553 261902 INFO neutron.agent.dhcp.agent [None req-fcef6932-7b81-4f5d-a129-c43d0b36b320 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:10:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:10:36 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:36.326 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:10:36 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "93f98e8e-8d98-4266-b4f1-bf5b6e06e924", "format": "json"}]: dispatch Dec 5 05:10:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:93f98e8e-8d98-4266-b4f1-bf5b6e06e924, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:10:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:93f98e8e-8d98-4266-b4f1-bf5b6e06e924, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:10:36 localhost ovn_controller[153000]: 2025-12-05T10:10:36Z|00269|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:10:36 localhost nova_compute[280228]: 2025-12-05 10:10:36.654 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e144 do_prune osdmap full prune enabled Dec 5 05:10:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e145 e145: 6 total, 6 up, 6 in Dec 5 05:10:36 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e145: 6 total, 6 up, 6 in Dec 5 05:10:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 6.4 KiB/s wr, 2 op/s Dec 5 05:10:37 localhost nova_compute[280228]: 2025-12-05 10:10:37.568 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v324: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 6.4 KiB/s wr, 2 op/s Dec 5 05:10:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:39.292 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:80:9e 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], 
requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73051091-e669-449c-b6ce-0fa3950b46d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3e74704f-5b87-479b-a0f2-9cc31811fac6) old=Port_Binding(mac=['fa:16:3e:71:80:9e 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:10:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:39.294 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3e74704f-5b87-479b-a0f2-9cc31811fac6 in datapath 0f11084e-99c9-47ba-aac5-b3f38c139d59 updated#033[00m Dec 5 05:10:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:39.298 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f11084e-99c9-47ba-aac5-b3f38c139d59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:10:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:39.298 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc64bb2-5d93-47ba-a553-528d5b64a6a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:10:39 localhost nova_compute[280228]: 2025-12-05 10:10:39.525 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:39.963 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 
'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:10:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:39.965 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated#033[00m Dec 5 05:10:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:39.967 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:10:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:39.968 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[186b1ed7-196b-42ef-9739-c2f314b383da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:10:40 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "d095bb6f-0ba8-4ed5-9e70-b40104e570d0", "format": "json"}]: dispatch Dec 5 05:10:40 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d095bb6f-0ba8-4ed5-9e70-b40104e570d0, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:10:40 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d095bb6f-0ba8-4ed5-9e70-b40104e570d0, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:10:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:10:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 146 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 5.2 KiB/s wr, 1 op/s Dec 5 05:10:42 localhost nova_compute[280228]: 2025-12-05 10:10:42.605 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v326: 177 pgs: 177 active+clean; 146 
MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 5.0 KiB/s wr, 1 op/s Dec 5 05:10:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:43.361 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:10:43 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:43.480 2 INFO neutron.agent.securitygroups_rpc [None req-d01f9f3e-ae84-4079-919f-76dbd61382ed 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['15591b44-9c5d-4fa1-bdbd-86617728aba4', '786481c0-4094-4fae-b4f0-fa1aecb51db7']#033[00m Dec 5 05:10:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "ee146b37-fd03-4431-a7d8-313e59df415c", "format": "json"}]: dispatch Dec 5 05:10:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ee146b37-fd03-4431-a7d8-313e59df415c, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:10:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:10:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ee146b37-fd03-4431-a7d8-313e59df415c, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:10:44 localhost podman[317578]: 2025-12-05 10:10:44.407281356 +0000 UTC m=+0.086284803 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:10:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:10:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
Dec 5 05:10:44 localhost podman[317578]: 2025-12-05 10:10:44.456881745 +0000 UTC m=+0.135885202 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 05:10:44 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 05:10:44 localhost nova_compute[280228]: 2025-12-05 10:10:44.528 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:44 localhost podman[317601]: 2025-12-05 10:10:44.534223473 +0000 UTC m=+0.099913940 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Dec 5 05:10:44 localhost podman[317602]: 2025-12-05 10:10:44.593622631 +0000 UTC m=+0.155065158 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:10:44 localhost podman[317602]: 2025-12-05 10:10:44.606795775 +0000 UTC m=+0.168238312 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true) Dec 5 05:10:44 localhost podman[317601]: 
2025-12-05 10:10:44.614970285 +0000 UTC m=+0.180660832 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:10:44 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:10:44 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:10:44 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:44.889 2 INFO neutron.agent.securitygroups_rpc [None req-3daa64bf-f117-4d8c-90b4-97c89eaa0f8a 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['786481c0-4094-4fae-b4f0-fa1aecb51db7']#033[00m Dec 5 05:10:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 5 05:10:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3357921071' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 5 05:10:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 5 05:10:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3357921071' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 5 05:10:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:10:45 Dec 5 05:10:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:10:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:10:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['images', 'vms', 'backups', 'volumes', 'manila_data', '.mgr', 'manila_metadata'] Dec 5 05:10:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:10:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:10:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:10:45 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:45.164 261902 INFO neutron.agent.linux.ip_lib [None req-ccfa0e94-1fea-43bf-8625-2bbd06e7a4d9 - - - - - -] Device tapa5983d60-4e cannot be used as it has no MAC address#033[00m Dec 5 05:10:45 localhost nova_compute[280228]: 2025-12-05 10:10:45.186 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:45 localhost kernel: device tapa5983d60-4e entered promiscuous mode Dec 5 05:10:45 localhost systemd-udevd[317648]: Network interface NamePolicy= disabled on kernel command line. Dec 5 05:10:45 localhost NetworkManager[5960]: [1764929445.1987] manager: (tapa5983d60-4e): new Generic device (/org/freedesktop/NetworkManager/Devices/46) Dec 5 05:10:45 localhost nova_compute[280228]: 2025-12-05 10:10:45.198 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:45 localhost ovn_controller[153000]: 2025-12-05T10:10:45Z|00270|binding|INFO|Claiming lport a5983d60-4ea5-4e0b-a307-69fc1aa2bab7 for this chassis. 
Dec 5 05:10:45 localhost ovn_controller[153000]: 2025-12-05T10:10:45Z|00271|binding|INFO|a5983d60-4ea5-4e0b-a307-69fc1aa2bab7: Claiming unknown
Dec 5 05:10:45 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:45.208 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-36586aea-918e-4915-9f79-dcb64e06aa29', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36586aea-918e-4915-9f79-dcb64e06aa29', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2376ed04-205a-4369-b306-aac7f14cb92e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a5983d60-4ea5-4e0b-a307-69fc1aa2bab7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:10:45 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:45.210 158820 INFO neutron.agent.ovn.metadata.agent [-] Port a5983d60-4ea5-4e0b-a307-69fc1aa2bab7 in datapath 36586aea-918e-4915-9f79-dcb64e06aa29 bound to our chassis
Dec 5 05:10:45 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:45.211 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36586aea-918e-4915-9f79-dcb64e06aa29 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:10:45 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:45.212 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[662baa2d-728a-4faf-8de4-733384a0ce74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:10:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:10:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:10:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:10:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:10:45 localhost journal[228791]: ethtool ioctl error on tapa5983d60-4e: No such device
Dec 5 05:10:45 localhost journal[228791]: ethtool ioctl error on tapa5983d60-4e: No such device
Dec 5 05:10:45 localhost ovn_controller[153000]: 2025-12-05T10:10:45Z|00272|binding|INFO|Setting lport a5983d60-4ea5-4e0b-a307-69fc1aa2bab7 ovn-installed in OVS
Dec 5 05:10:45 localhost ovn_controller[153000]: 2025-12-05T10:10:45Z|00273|binding|INFO|Setting lport a5983d60-4ea5-4e0b-a307-69fc1aa2bab7 up in Southbound
Dec 5 05:10:45 localhost nova_compute[280228]: 2025-12-05 10:10:45.240 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:45 localhost journal[228791]: ethtool ioctl error on tapa5983d60-4e: No such device
Dec 5 05:10:45 localhost journal[228791]: ethtool ioctl error on tapa5983d60-4e: No such device
Dec 5 05:10:45 localhost journal[228791]: ethtool ioctl error on tapa5983d60-4e: No such device
Dec 5 05:10:45 localhost journal[228791]: ethtool ioctl error on tapa5983d60-4e: No such device
Dec 5 05:10:45 localhost journal[228791]: ethtool ioctl error on tapa5983d60-4e: No such device
Dec 5 05:10:45 localhost journal[228791]: ethtool ioctl error on tapa5983d60-4e: No such device
Dec 5 05:10:45 localhost nova_compute[280228]: 2025-12-05 10:10:45.275 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 5.0 KiB/s wr, 1 op/s
Dec 5 05:10:45 localhost nova_compute[280228]: 2025-12-05 10:10:45.301 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32)
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:10:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.5082007258514795e-05 of space, bias 4.0, pg target 0.019965277777777776 quantized to 16 (current 16)
Dec 5 05:10:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 5 05:10:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:10:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 5 05:10:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:10:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:10:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:10:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:10:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:10:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:10:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:10:45 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:45.600 2 INFO neutron.agent.securitygroups_rpc [None req-82d78816-841a-466f-b480-1a70bf437762 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 5 05:10:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:10:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e145 do_prune osdmap full prune enabled
Dec 5 05:10:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e146 e146: 6 total, 6 up, 6 in
Dec 5 05:10:45 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e146: 6 total, 6 up, 6 in
Dec 5 05:10:46 localhost podman[317719]:
Dec 5 05:10:46 localhost podman[317719]: 2025-12-05 10:10:46.121060819 +0000 UTC m=+0.079732693 container create b744f3b3fefb2f9fbd701529ef107dd9bac43bfde247912e8fe432c19d5947f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36586aea-918e-4915-9f79-dcb64e06aa29, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 5 05:10:46 localhost systemd[1]: Started libpod-conmon-b744f3b3fefb2f9fbd701529ef107dd9bac43bfde247912e8fe432c19d5947f0.scope.
Dec 5 05:10:46 localhost systemd[1]: Started libcrun container.
Dec 5 05:10:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eedfc4133a81f3e77b73f47c6847b022359c68919d9c1afc35cd1b80183906c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:10:46 localhost podman[317719]: 2025-12-05 10:10:46.085520511 +0000 UTC m=+0.044192415 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:10:46 localhost podman[317719]: 2025-12-05 10:10:46.193631601 +0000 UTC m=+0.152303475 container init b744f3b3fefb2f9fbd701529ef107dd9bac43bfde247912e8fe432c19d5947f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36586aea-918e-4915-9f79-dcb64e06aa29, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 05:10:46 localhost podman[317719]: 2025-12-05 10:10:46.200882623 +0000 UTC m=+0.159554557 container start b744f3b3fefb2f9fbd701529ef107dd9bac43bfde247912e8fe432c19d5947f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36586aea-918e-4915-9f79-dcb64e06aa29, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 5 05:10:46 localhost dnsmasq[317736]: started, version 2.85 cachesize 150 Dec 5 05:10:46 localhost dnsmasq[317736]: DNS service limited to local subnets Dec 5 05:10:46 localhost dnsmasq[317736]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:10:46 localhost dnsmasq[317736]: warning: no upstream servers configured Dec 5 05:10:46 localhost dnsmasq-dhcp[317736]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 5 05:10:46 localhost dnsmasq[317736]: read /var/lib/neutron/dhcp/36586aea-918e-4915-9f79-dcb64e06aa29/addn_hosts - 0 addresses Dec 5 05:10:46 localhost dnsmasq-dhcp[317736]: read /var/lib/neutron/dhcp/36586aea-918e-4915-9f79-dcb64e06aa29/host Dec 5 05:10:46 localhost dnsmasq-dhcp[317736]: read /var/lib/neutron/dhcp/36586aea-918e-4915-9f79-dcb64e06aa29/opts Dec 5 05:10:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 6.7 KiB/s wr, 18 op/s Dec 5 05:10:47 localhost nova_compute[280228]: 2025-12-05 10:10:47.644 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:10:48.436 261902 INFO neutron.agent.dhcp.agent [None req-9249cd18-ed67-4344-a35c-60b6e3ddb0c5 - - - - - -] DHCP configuration for ports {'623af935-c9a8-4ef3-92d5-329086c41a7d'} is completed#033[00m Dec 5 05:10:48 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": 
"6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "6b71d1bf-ee94-491e-8179-33ce177e53ad", "format": "json"}]: dispatch Dec 5 05:10:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6b71d1bf-ee94-491e-8179-33ce177e53ad, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:10:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6b71d1bf-ee94-491e-8179-33ce177e53ad, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:10:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 6.7 KiB/s wr, 18 op/s Dec 5 05:10:49 localhost nova_compute[280228]: 2025-12-05 10:10:49.531 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:10:49 localhost podman[239519]: time="2025-12-05T10:10:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:10:49 localhost podman[239519]: @ - - [05/Dec/2025:10:10:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157928 "" "Go-http-client/1.1" Dec 5 05:10:49 localhost podman[239519]: @ - - [05/Dec/2025:10:10:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19724 "" "Go-http-client/1.1" Dec 5 05:10:49 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:49.928 2 INFO neutron.agent.securitygroups_rpc [None req-aae22781-47b1-4320-95ea-cd4c9a77b012 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:10:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:10:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:10:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
Dec 5 05:10:51 localhost podman[317737]: 2025-12-05 10:10:51.205231024 +0000 UTC m=+0.088024685 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 05:10:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v331: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 4.3 KiB/s wr, 18 op/s Dec 5 05:10:51 localhost podman[317737]: 2025-12-05 10:10:51.287673589 +0000 UTC m=+0.170467210 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 5 05:10:51 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
Dec 5 05:10:51 localhost podman[317738]: 2025-12-05 10:10:51.288798843 +0000 UTC m=+0.171415169 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 5 05:10:51 localhost podman[317738]: 2025-12-05 10:10:51.371691382 +0000 UTC m=+0.254307698 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 05:10:51 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 05:10:52 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:52.281 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:80:9e 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73051091-e669-449c-b6ce-0fa3950b46d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3e74704f-5b87-479b-a0f2-9cc31811fac6) old=Port_Binding(mac=['fa:16:3e:71:80:9e 10.100.0.18 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:10:52 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:52.283 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3e74704f-5b87-479b-a0f2-9cc31811fac6 in datapath 0f11084e-99c9-47ba-aac5-b3f38c139d59 updated#033[00m
Dec 5 05:10:52 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:52.286 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f11084e-99c9-47ba-aac5-b3f38c139d59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 5 05:10:52 localhost ovn_metadata_agent[158815]: 2025-12-05 10:10:52.287 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff307be-9ee9-4334-856d-6d2faf7d500e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:10:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:10:52 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3296225489' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:10:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:10:52 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3296225489' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:10:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "aa88b782-616d-444b-a472-35fb008023be", "format": "json"}]: dispatch
Dec 5 05:10:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aa88b782-616d-444b-a472-35fb008023be, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:10:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aa88b782-616d-444b-a472-35fb008023be, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:10:52 localhost nova_compute[280228]: 2025-12-05 10:10:52.678 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 4.3 KiB/s wr, 19 op/s
Dec 5 05:10:53 localhost neutron_sriov_agent[254996]: 2025-12-05 10:10:53.804 2 INFO neutron.agent.securitygroups_rpc [None req-cad5a74d-414c-4770-8214-a1cf51074fd1 2c33b8c3808c4d2fb486611901223652 936331162fd849b28da8e38e2db0598a - - default default] Security group member updated ['95cf58e1-082a-420e-bea6-fa59114af408']#033[00m
Dec 5 05:10:54 localhost nova_compute[280228]: 2025-12-05 10:10:54.534 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 4.3 KiB/s wr, 19 op/s
Dec 5 05:10:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:10:57 localhost openstack_network_exporter[241668]: ERROR 10:10:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:10:57 localhost openstack_network_exporter[241668]: ERROR 10:10:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:10:57 localhost openstack_network_exporter[241668]: ERROR 10:10:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:10:57 localhost openstack_network_exporter[241668]: ERROR 10:10:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:10:57 localhost openstack_network_exporter[241668]:
Dec 5 05:10:57 localhost openstack_network_exporter[241668]: ERROR 10:10:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:10:57 localhost openstack_network_exporter[241668]:
Dec 5 05:10:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 5.9 KiB/s wr, 31 op/s
Dec 5 05:10:57 localhost nova_compute[280228]: 2025-12-05 10:10:57.731 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:10:58 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "aa88b782-616d-444b-a472-35fb008023be_e64cf724-7e03-4a8e-beab-a8f412d1da44", "force": true, "format": "json"}]: dispatch
Dec 5 05:10:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa88b782-616d-444b-a472-35fb008023be_e64cf724-7e03-4a8e-beab-a8f412d1da44, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:10:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp'
Dec 5 05:10:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta'
Dec 5 05:10:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa88b782-616d-444b-a472-35fb008023be_e64cf724-7e03-4a8e-beab-a8f412d1da44, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:10:58 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "aa88b782-616d-444b-a472-35fb008023be", "force": true, "format": "json"}]: dispatch
Dec 5 05:10:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa88b782-616d-444b-a472-35fb008023be, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:10:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp'
Dec 5 05:10:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta'
Dec 5 05:10:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa88b782-616d-444b-a472-35fb008023be, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:10:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 4.0 KiB/s wr, 16 op/s
Dec 5 05:10:59 localhost nova_compute[280228]: 2025-12-05 10:10:59.537 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:11:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v336: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 30 op/s
Dec 5 05:11:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:11:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:11:02 localhost systemd[1]: tmp-crun.bpfOJK.mount: Deactivated successfully.
Dec 5 05:11:02 localhost podman[317786]: 2025-12-05 10:11:02.206682489 +0000 UTC m=+0.094191755 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 5 05:11:02 localhost podman[317786]: 2025-12-05 10:11:02.220569385 +0000 UTC m=+0.108078691 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 5 05:11:02 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:11:02 localhost podman[317787]: 2025-12-05 10:11:02.270576525 +0000 UTC m=+0.152859321 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 5 05:11:02 localhost podman[317787]: 2025-12-05 10:11:02.289941818 +0000 UTC m=+0.172224614 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, version=9.6, release=1755695350, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7)
Dec 5 05:11:02 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:11:02 localhost nova_compute[280228]: 2025-12-05 10:11:02.768 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v337: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 9.5 KiB/s wr, 29 op/s
Dec 5 05:11:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:03.917 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:11:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:03.918 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:11:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:03.919 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:11:04 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "6b71d1bf-ee94-491e-8179-33ce177e53ad_3fa5c92b-65c8-4238-9ca3-7e55fc957abc", "force": true, "format": "json"}]: dispatch
Dec 5 05:11:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6b71d1bf-ee94-491e-8179-33ce177e53ad_3fa5c92b-65c8-4238-9ca3-7e55fc957abc, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:11:04 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp'
Dec 5 05:11:04 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta'
Dec 5 05:11:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6b71d1bf-ee94-491e-8179-33ce177e53ad_3fa5c92b-65c8-4238-9ca3-7e55fc957abc, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:11:04 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "6b71d1bf-ee94-491e-8179-33ce177e53ad", "force": true, "format": "json"}]: dispatch
Dec 5 05:11:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6b71d1bf-ee94-491e-8179-33ce177e53ad, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:11:04 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp'
Dec 5 05:11:04 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta'
Dec 5 05:11:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6b71d1bf-ee94-491e-8179-33ce177e53ad, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:11:04 localhost nova_compute[280228]: 2025-12-05 10:11:04.541 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 05:11:04 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 05:11:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 5 05:11:04 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:11:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:11:04 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:11:04 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 7e770e99-2b21-41cf-9466-f31fe0cf9767 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:11:04 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 7e770e99-2b21-41cf-9466-f31fe0cf9767 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:11:04 localhost ceph-mgr[286454]: [progress INFO root] Completed event 7e770e99-2b21-41cf-9466-f31fe0cf9767 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 5 05:11:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 05:11:04 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 05:11:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 9.5 KiB/s wr, 29 op/s
Dec 5 05:11:05 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events
Dec 5 05:11:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 05:11:05 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:11:05 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:11:05 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:11:05 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:11:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:11:05 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:05.848 2 INFO neutron.agent.securitygroups_rpc [None req-aad81300-6ba5-4212-8b03-95be9bc3ee4e 2c33b8c3808c4d2fb486611901223652 936331162fd849b28da8e38e2db0598a - - default default] Security group member updated ['95cf58e1-082a-420e-bea6-fa59114af408']#033[00m
Dec 5 05:11:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e146 do_prune osdmap full prune enabled
Dec 5 05:11:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e147 e147: 6 total, 6 up, 6 in
Dec 5 05:11:06 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e147: 6 total, 6 up, 6 in
Dec 5 05:11:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 19 op/s
Dec 5 05:11:07 localhost nova_compute[280228]: 2025-12-05 10:11:07.808 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e147 do_prune osdmap full prune enabled
Dec 5 05:11:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e148 e148: 6 total, 6 up, 6 in
Dec 5 05:11:07 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e148: 6 total, 6 up, 6 in
Dec 5 05:11:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 146 MiB data, 788 MiB used, 41 GiB / 42 GiB avail; 895 B/s rd, 11 KiB/s wr, 3 op/s
Dec 5 05:11:09 localhost nova_compute[280228]: 2025-12-05 10:11:09.546 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:09 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "ee146b37-fd03-4431-a7d8-313e59df415c_edc5a584-db9d-4a4f-9d75-00283c84ff62", "force": true, "format": "json"}]: dispatch
Dec 5 05:11:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ee146b37-fd03-4431-a7d8-313e59df415c_edc5a584-db9d-4a4f-9d75-00283c84ff62, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:11:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp'
Dec 5 05:11:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta'
Dec 5 05:11:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ee146b37-fd03-4431-a7d8-313e59df415c_edc5a584-db9d-4a4f-9d75-00283c84ff62, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:11:09 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "ee146b37-fd03-4431-a7d8-313e59df415c", "force": true, "format": "json"}]: dispatch
Dec 5 05:11:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ee146b37-fd03-4431-a7d8-313e59df415c, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:11:10 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp'
Dec 5 05:11:10 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta'
Dec 5 05:11:10 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ee146b37-fd03-4431-a7d8-313e59df415c, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < ""
Dec 5 05:11:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.763739) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470763799, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1296, "num_deletes": 256, "total_data_size": 1367570, "memory_usage": 1395696, "flush_reason": "Manual Compaction"}
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470777362, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1332740, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28381, "largest_seqno": 29675, "table_properties": {"data_size": 1327068, "index_size": 2950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13220, "raw_average_key_size": 20, "raw_value_size": 1315086, "raw_average_value_size": 2013, "num_data_blocks": 130, "num_entries": 653, "num_filter_entries": 653, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929389, "oldest_key_time": 1764929389, "file_creation_time": 1764929470, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 13672 microseconds, and 4462 cpu microseconds.
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.777415) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1332740 bytes OK
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.777443) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.779643) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.779668) EVENT_LOG_v1 {"time_micros": 1764929470779661, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.779693) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1361626, prev total WAL file size 1361950, number of live WAL files 2.
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.780406) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303135' seq:72057594037927935, type:22 .. '6C6F676D0034323636' seq:0, type:0; will stop at (end)
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(1301KB)], [48(16MB)]
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470780455, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 18361939, "oldest_snapshot_seqno": -1}
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 12610 keys, 17810767 bytes, temperature: kUnknown
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470880324, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 17810767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17737759, "index_size": 40395, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31557, "raw_key_size": 337724, "raw_average_key_size": 26, "raw_value_size": 17521990, "raw_average_value_size": 1389, "num_data_blocks": 1532, "num_entries": 12610, "num_filter_entries": 12610, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764929470, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.880663) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 17810767 bytes
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.882337) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.7 rd, 178.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 16.2 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(27.1) write-amplify(13.4) OK, records in: 13146, records dropped: 536 output_compression: NoCompression
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.882367) EVENT_LOG_v1 {"time_micros": 1764929470882353, "job": 28, "event": "compaction_finished", "compaction_time_micros": 99962, "compaction_time_cpu_micros": 52984, "output_level": 6, "num_output_files": 1, "total_output_size": 17810767, "num_input_records": 13146, "num_output_records": 12610, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470882839, "job": 28, "event": "table_file_deletion", "file_number": 50}
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929470885642, "job": 28, "event": "table_file_deletion", "file_number": 48}
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.780327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.885720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.885728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.885731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.885734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:11:10 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:11:10.885737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:11:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v343: 177 pgs: 177 active+clean; 146 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 25 KiB/s wr, 4 op/s
Dec 5 05:11:12 localhost nova_compute[280228]: 2025-12-05 10:11:12.861 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 146 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 25 KiB/s wr, 4 op/s
Dec 5 05:11:14 localhost nova_compute[280228]: 2025-12-05 10:11:14.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:11:14 localhost nova_compute[280228]: 2025-12-05 10:11:14.549 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:11:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:11:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:11:15 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:15.160 2 INFO neutron.agent.securitygroups_rpc [None req-8c15d9ea-25f6-4f94-9a5a-880c38bb33fb 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m
Dec 5 05:11:15 localhost podman[317911]: 2025-12-05 10:11:15.20465006 +0000 UTC m=+0.084029394 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 05:11:15 localhost podman[317911]: 2025-12-05 10:11:15.214259585 +0000 UTC m=+0.093638929 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:11:15 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:11:15 localhost podman[317912]: 2025-12-05 10:11:15.276115288 +0000 UTC m=+0.149642022 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 5 05:11:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 146 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s wr, 1 op/s
Dec 5 05:11:15 localhost podman[317913]: 2025-12-05 10:11:15.328537784 +0000 UTC m=+0.200689826 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 5 05:11:15 localhost podman[317912]: 2025-12-05 10:11:15.338119367 +0000 UTC m=+0.211646031 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 5 05:11:15 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:11:15 localhost podman[317913]: 2025-12-05 10:11:15.395141262 +0000 UTC m=+0.267293294 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:11:15 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
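[aside] Since every podman event line carries its labels inline, the container name and health verdict can be pulled straight out of this journal with a small parser. A sketch, with the field layout assumed from the samples in this log rather than from any formal podman format guarantee:

    import re

    # Matches "container health_status <id> (image=..., name=<n>, health_status=<s>, ...)".
    EVENT_RE = re.compile(
        r"container health_status [0-9a-f]+ \(.*?name=(?P<name>[^,]+),"
        r".*?health_status=(?P<status>[^,)]+)", re.S)

    def health_events(text):
        return [(m.group("name"), m.group("status"))
                for m in EVENT_RE.finditer(text)]

    sample = ("container health_status "
              "6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a "
              "(image=quay.io/..., name=ceilometer_agent_compute, "
              "health_status=healthy, ...)")
    print(health_events(sample))  # [('ceilometer_agent_compute', 'healthy')]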
Dec 5 05:11:15 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "d095bb6f-0ba8-4ed5-9e70-b40104e570d0_18c17ccc-0586-4398-8488-62f94018964f", "force": true, "format": "json"}]: dispatch Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d095bb6f-0ba8-4ed5-9e70-b40104e570d0_18c17ccc-0586-4398-8488-62f94018964f, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta' Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d095bb6f-0ba8-4ed5-9e70-b40104e570d0_18c17ccc-0586-4398-8488-62f94018964f, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:11:15 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "d095bb6f-0ba8-4ed5-9e70-b40104e570d0", "force": true, "format": "json"}]: dispatch Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d095bb6f-0ba8-4ed5-9e70-b40104e570d0, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta' Dec 5 05:11:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d095bb6f-0ba8-4ed5-9e70-b40104e570d0, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:11:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:11:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e148 do_prune osdmap full prune enabled Dec 5 05:11:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e149 e149: 6 total, 6 up, 6 in Dec 5 05:11:15 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e149: 6 total, 6 up, 6 in Dec 5 05:11:16 localhost systemd[1]: tmp-crun.du5KcB.mount: Deactivated successfully. 
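[aside] Each `fs subvolume snapshot rm` dispatch above is a mon command issued by client.openstack; the mgr volumes module then rewrites the subvolume's metadata atomically (write .meta.tmp, rename to .meta), which is the wrote/Renamed pair in the log. The same call in CLI form, a sketch assuming the openstack keyring and the /etc/ceph/ceph.conf path that appear elsewhere in this log:

    import subprocess

    def snapshot_rm(vol_name, sub_name, snap_name):
        # Equivalent of the dispatched mon command:
        # {"prefix": "fs subvolume snapshot rm", "vol_name": ...,
        #  "sub_name": ..., "snap_name": ..., "force": true, "format": "json"}
        subprocess.run(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             "fs", "subvolume", "snapshot", "rm",
             vol_name, sub_name, snap_name, "--force", "--format=json"],
            check=True)

    snapshot_rm("cephfs",
                "6205753a-9757-4af3-8ac2-9fbff2d55d36",
                "d095bb6f-0ba8-4ed5-9e70-b40104e570d0")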
Dec 5 05:11:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e149 do_prune osdmap full prune enabled Dec 5 05:11:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e150 e150: 6 total, 6 up, 6 in Dec 5 05:11:16 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e150: 6 total, 6 up, 6 in Dec 5 05:11:16 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:16.960 2 INFO neutron.agent.securitygroups_rpc [None req-ffc55049-227f-4f9b-b0f1-95b6549f2f8a 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['b0972671-904b-4941-be17-6352223dc520']#033[00m Dec 5 05:11:17 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:17.030 2 INFO neutron.agent.securitygroups_rpc [None req-08311682-69b0-4e5c-a393-5c4aab9e3dd1 2c33b8c3808c4d2fb486611901223652 936331162fd849b28da8e38e2db0598a - - default default] Security group member updated ['95cf58e1-082a-420e-bea6-fa59114af408']#033[00m Dec 5 05:11:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 146 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 24 KiB/s wr, 4 op/s Dec 5 05:11:17 localhost nova_compute[280228]: 2025-12-05 10:11:17.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:11:17 localhost nova_compute[280228]: 2025-12-05 10:11:17.530 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:11:17 localhost nova_compute[280228]: 2025-12-05 10:11:17.530 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:11:17 localhost nova_compute[280228]: 2025-12-05 10:11:17.530 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:11:17 localhost nova_compute[280228]: 2025-12-05 10:11:17.531 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:11:17 localhost nova_compute[280228]: 2025-12-05 10:11:17.531 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:11:17 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:17.597 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 2001:db8::f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 10.100.0.2 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:11:17 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:17.599 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated#033[00m Dec 5 05:11:17 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:17.604 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:11:17 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:17.605 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed94bb1-8ba5-4cd8-b832-4e07171dfe0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:11:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e150 do_prune osdmap full prune enabled Dec 5 05:11:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e151 e151: 6 total, 6 up, 6 in Dec 5 05:11:17 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e151: 6 total, 6 up, 6 in Dec 5 05:11:17 localhost nova_compute[280228]: 2025-12-05 10:11:17.905 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:11:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:11:18 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : 
from='client.? 172.18.0.106:0/4251045490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.025 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.092 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.092 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:11:18 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:18.319 2 INFO neutron.agent.securitygroups_rpc [None req-4e46051d-d7a3-4253-a7f0-9cf8592c0f42 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.350 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.352 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11177MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.352 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.353 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.460 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.461 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.462 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.528 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:11:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "93f98e8e-8d98-4266-b4f1-bf5b6e06e924_737ae99a-cc32-40c8-bb8b-961fe482b6b8", "force": true, "format": "json"}]: dispatch Dec 5 05:11:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:93f98e8e-8d98-4266-b4f1-bf5b6e06e924_737ae99a-cc32-40c8-bb8b-961fe482b6b8, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, 
vol_name:cephfs) < "" Dec 5 05:11:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' Dec 5 05:11:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta' Dec 5 05:11:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:93f98e8e-8d98-4266-b4f1-bf5b6e06e924_737ae99a-cc32-40c8-bb8b-961fe482b6b8, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:11:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "snap_name": "93f98e8e-8d98-4266-b4f1-bf5b6e06e924", "force": true, "format": "json"}]: dispatch Dec 5 05:11:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:93f98e8e-8d98-4266-b4f1-bf5b6e06e924, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:11:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' Dec 5 05:11:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta.tmp' to config b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36/.meta' Dec 5 05:11:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:93f98e8e-8d98-4266-b4f1-bf5b6e06e924, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:11:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:11:18 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/794925693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.992 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:11:18 localhost nova_compute[280228]: 2025-12-05 10:11:18.997 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:11:19 localhost nova_compute[280228]: 2025-12-05 10:11:19.027 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:11:19 localhost nova_compute[280228]: 2025-12-05 10:11:19.030 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:11:19 localhost nova_compute[280228]: 2025-12-05 10:11:19.031 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:11:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 146 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 13 KiB/s wr, 3 op/s Dec 5 05:11:19 localhost nova_compute[280228]: 2025-12-05 10:11:19.553 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:11:19 localhost podman[239519]: time="2025-12-05T10:11:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:11:19 localhost podman[239519]: @ - - [05/Dec/2025:10:11:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157928 "" "Go-http-client/1.1" Dec 5 05:11:19 localhost podman[239519]: @ - - [05/Dec/2025:10:11:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19740 "" "Go-http-client/1.1" Dec 5 05:11:20 localhost nova_compute[280228]: 2025-12-05 10:11:20.028 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:11:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:11:20 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:20.796 2 INFO neutron.agent.securitygroups_rpc [None req-828b9ecd-077c-4f2f-8083-1ef0b87cf212 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:11:20 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:20.796 2 INFO neutron.agent.securitygroups_rpc [None req-63f2b7b1-c1f0-4243-813a-65200aed5790 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m Dec 5 05:11:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 31 KiB/s wr, 7 op/s Dec 5 05:11:21 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:21.344 2 INFO neutron.agent.securitygroups_rpc [None req-63f2b7b1-c1f0-4243-813a-65200aed5790 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m Dec 5 05:11:21 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:21.394 2 INFO neutron.agent.securitygroups_rpc [None req-a93a3ac1-e950-4704-8bef-447f823d18a8 2c33b8c3808c4d2fb486611901223652 936331162fd849b28da8e38e2db0598a - - default default] Security group member updated ['95cf58e1-082a-420e-bea6-fa59114af408']#033[00m Dec 5 05:11:21 localhost nova_compute[280228]: 2025-12-05 10:11:21.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:11:21 localhost nova_compute[280228]: 2025-12-05 10:11:21.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:11:21 localhost nova_compute[280228]: 2025-12-05 10:11:21.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:11:21 localhost nova_compute[280228]: 2025-12-05 10:11:21.794 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:11:21 localhost nova_compute[280228]: 2025-12-05 10:11:21.794 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:11:21 localhost nova_compute[280228]: 2025-12-05 10:11:21.795 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 
96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:11:21 localhost nova_compute[280228]: 2025-12-05 10:11:21.795 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:11:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:11:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:11:22 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "format": "json"}]: dispatch Dec 5 05:11:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:11:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:11:22 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6205753a-9757-4af3-8ac2-9fbff2d55d36' of type subvolume Dec 5 05:11:22 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:11:22.172+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6205753a-9757-4af3-8ac2-9fbff2d55d36' of type subvolume Dec 5 05:11:22 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6205753a-9757-4af3-8ac2-9fbff2d55d36", "force": true, "format": "json"}]: dispatch Dec 5 05:11:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:11:22 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6205753a-9757-4af3-8ac2-9fbff2d55d36'' moved to trashcan Dec 5 05:11:22 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:11:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6205753a-9757-4af3-8ac2-9fbff2d55d36, vol_name:cephfs) < "" Dec 5 05:11:22 localhost podman[318013]: 2025-12-05 10:11:22.221324025 +0000 UTC m=+0.101466327 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:11:22 localhost podman[318013]: 2025-12-05 10:11:22.26686575 +0000 UTC m=+0.147008052 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible) Dec 5 05:11:22 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:11:22 localhost systemd[1]: tmp-crun.lY32B1.mount: Deactivated successfully. 
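[aside] The resource-tracker audit above (update_available_resource) shells out to `ceph df --format=json` twice to derive the RBD-backed disk view (free_disk=41.83...GB, matching the 41 GiB avail in the pgmap lines). A minimal sketch of the same query, assuming the client id and conf path from the logged command line:

    import json
    import subprocess

    # Same command line nova's oslo_concurrency.processutils logs above.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)["stats"]
    # Cluster-wide totals are reported in bytes.
    print("free: %.2f GiB" % (stats["total_avail_bytes"] / 2**30))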
Dec 5 05:11:22 localhost podman[318014]: 2025-12-05 10:11:22.362395825 +0000 UTC m=+0.238946267 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:11:22 localhost podman[318014]: 2025-12-05 10:11:22.37858522 +0000 UTC m=+0.255135652 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:11:22 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
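[aside] node_exporter above publishes on host port 9100 ('ports': ['9100:9100']) with the systemd collector restricted to edpm_/ovs/openvswitch/virt/rsyslog units. A sketch of pulling the unit-state series from it, assuming the exporter is reachable on localhost and that `node_systemd_unit_state` (the metric the systemd collector normally exposes) is present:

    from urllib.request import urlopen

    # Port 9100 comes from the config_data above.
    with urlopen("http://localhost:9100/metrics") as resp:
        metrics = resp.read().decode()

    # Unit states gathered by --collector.systemd for the included units.
    for line in metrics.splitlines():
        if line.startswith("node_systemd_unit_state"):
            print(line)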
Dec 5 05:11:22 localhost nova_compute[280228]: 2025-12-05 10:11:22.912 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:11:23 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:23.115 2 INFO neutron.agent.securitygroups_rpc [None req-5be8e1bf-782e-4367-8d11-75bd094f07a1 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m Dec 5 05:11:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 816 B/s rd, 25 KiB/s wr, 6 op/s Dec 5 05:11:23 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:23.642 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:11:24 localhost nova_compute[280228]: 2025-12-05 10:11:24.557 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:11:24 localhost nova_compute[280228]: 2025-12-05 10:11:24.627 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:11:24 localhost nova_compute[280228]: 2025-12-05 10:11:24.645 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:11:24 localhost nova_compute[280228]: 2025-12-05 10:11:24.646 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:11:24 localhost nova_compute[280228]: 2025-12-05 10:11:24.647 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:11:24 localhost nova_compute[280228]: 2025-12-05 10:11:24.647 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:11:24 localhost nova_compute[280228]: 2025-12-05 10:11:24.648 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:11:24 localhost nova_compute[280228]: 2025-12-05 10:11:24.648 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:11:24 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:24.746 2 INFO neutron.agent.securitygroups_rpc [None req-2d2aa7bb-9ffc-4eb9-9e7e-32b49c722eb1 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m Dec 5 05:11:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v353: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 481 B/s rd, 13 KiB/s wr, 3 op/s Dec 5 05:11:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:25.346 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:80:9e 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=73051091-e669-449c-b6ce-0fa3950b46d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3e74704f-5b87-479b-a0f2-9cc31811fac6) old=Port_Binding(mac=['fa:16:3e:71:80:9e 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f11084e-99c9-47ba-aac5-b3f38c139d59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eebb9e73adb4a259afe086ebdfad16e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:11:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:25.348 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3e74704f-5b87-479b-a0f2-9cc31811fac6 in datapath 0f11084e-99c9-47ba-aac5-b3f38c139d59 updated#033[00m Dec 5 05:11:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:25.351 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f11084e-99c9-47ba-aac5-b3f38c139d59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:11:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:25.352 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[442d7003-5fd3-4902-9c52-44809c41072e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:11:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:25.379 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:11:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:25.381 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:11:25 localhost nova_compute[280228]: 2025-12-05 10:11:25.381 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:11:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:11:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e151 do_prune osdmap full prune enabled Dec 5 05:11:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e152 e152: 6 total, 6 up, 6 in Dec 5 05:11:25 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e152: 6 total, 6 up, 6 in Dec 5 05:11:26 localhost nova_compute[280228]: 2025-12-05 10:11:26.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:11:26 localhost nova_compute[280228]: 2025-12-05 10:11:26.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:11:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e152 do_prune osdmap full prune enabled Dec 5 05:11:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e153 e153: 6 total, 6 up, 6 in Dec 5 05:11:26 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : 
osdmap e153: 6 total, 6 up, 6 in Dec 5 05:11:26 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:26.889 2 INFO neutron.agent.securitygroups_rpc [None req-b4396528-8b42-40af-ac75-cd0a49228f19 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['b0972671-904b-4941-be17-6352223dc520', '5983ed6d-e0ce-45eb-b8c9-1f29f23a87de', 'b7cbd0ad-c7a0-4c31-b4b2-a706028a8f44']#033[00m Dec 5 05:11:27 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:27.018 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:11:27 localhost openstack_network_exporter[241668]: ERROR 10:11:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:11:27 localhost openstack_network_exporter[241668]: ERROR 10:11:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:11:27 localhost openstack_network_exporter[241668]: ERROR 10:11:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:11:27 localhost openstack_network_exporter[241668]: ERROR 10:11:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:11:27 localhost openstack_network_exporter[241668]: Dec 5 05:11:27 localhost openstack_network_exporter[241668]: ERROR 10:11:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:11:27 localhost openstack_network_exporter[241668]: Dec 5 05:11:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 146 MiB data, 798 MiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 23 KiB/s wr, 5 op/s Dec 5 05:11:27 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:27.395 2 INFO neutron.agent.securitygroups_rpc [None req-2dcc9d40-3349-45e7-88d4-8dacaf7c5016 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['b7cbd0ad-c7a0-4c31-b4b2-a706028a8f44', '5983ed6d-e0ce-45eb-b8c9-1f29f23a87de']#033[00m Dec 5 05:11:27 localhost nova_compute[280228]: 2025-12-05 10:11:27.914 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:11:28 localhost nova_compute[280228]: 2025-12-05 10:11:28.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:11:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 177 active+clean; 146 MiB data, 798 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 9.9 KiB/s wr, 2 op/s Dec 5 05:11:29 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:29.342 2 INFO neutron.agent.securitygroups_rpc [None req-cc1df5d5-c8de-4978-bc3b-3ad0af9779bd 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:11:29 localhost nova_compute[280228]: 2025-12-05 10:11:29.560 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:11:30 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:30.384 158820 DEBUG 
ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 5 05:11:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:11:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 146 MiB data, 798 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 12 op/s
Dec 5 05:11:31 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:31.533 2 INFO neutron.agent.securitygroups_rpc [None req-95d48023-8205-4ffe-b872-52d057520238 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m
Dec 5 05:11:32 localhost nova_compute[280228]: 2025-12-05 10:11:32.945 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:11:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:11:33 localhost systemd[1]: tmp-crun.aEdRKR.mount: Deactivated successfully.
Dec 5 05:11:33 localhost podman[318061]: 2025-12-05 10:11:33.217341292 +0000 UTC m=+0.101749626 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 5 05:11:33 localhost podman[318061]: 2025-12-05 10:11:33.265074783 +0000 UTC m=+0.149483077 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Dec 5 05:11:33 localhost podman[318062]: 2025-12-05 10:11:33.274881704 +0000 UTC m=+0.152329674 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, distribution-scope=public, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 5 05:11:33 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:11:33 localhost podman[318062]: 2025-12-05 10:11:33.287341215 +0000 UTC m=+0.164789155 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.6, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64)
Dec 5 05:11:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v359: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 15 KiB/s wr, 12 op/s
Dec 5 05:11:33 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:11:34 localhost systemd[1]: tmp-crun.9BvDGd.mount: Deactivated successfully.
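[Editor's note] The healthcheck cycle above repeats for every monitored container: systemd starts a transient `podman healthcheck run <id>` unit, podman emits a `container health_status ... health_status=healthy` event followed by `exec_died`, and the unit deactivates. A minimal sketch of pulling those events out of a syslog file like this one; the regex and field handling are inferred from the lines above, not a documented podman format:

```python
import re

# Matches the "container health_status <64-hex-id>" events podman logs here.
EVENT = re.compile(
    r"podman\[\d+\]: (?P<ts>\S+ \S+) .*?container health_status (?P<cid>[0-9a-f]{64})"
)

def _label(line, key):
    # Labels inside the parenthesised blob are comma-separated key=value pairs
    # in no fixed order, so each one is extracted independently.
    m = re.search(rf"\b{key}=([^,)]+)", line)
    return m.group(1) if m else None

def health_events(path):
    """Yield (timestamp, container_name, health_status) from a syslog file."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVENT.search(line)
            if m:
                yield (m.group("ts"),
                       _label(line, "container_name"),
                       _label(line, "health_status"))

if __name__ == "__main__":
    for ts, name, status in health_events("/var/log/messages"):
        if status != "healthy":
            print(f"{ts} {name}: {status}")
```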
Dec 5 05:11:34 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:34.466 2 INFO neutron.agent.securitygroups_rpc [None req-ed8e1157-f7f5-45ba-ad76-18d51438c9ec 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m
Dec 5 05:11:34 localhost nova_compute[280228]: 2025-12-05 10:11:34.563 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:34 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:34.936 2 INFO neutron.agent.securitygroups_rpc [None req-123b23be-a9bb-4a1f-9d98-ce4fe835d14f 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m
Dec 5 05:11:35 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:35.013 261902 INFO neutron.agent.linux.ip_lib [None req-c5db9761-84ce-4dad-a3aa-eeac5009465b - - - - - -] Device tap784c809d-4a cannot be used as it has no MAC address#033[00m
Dec 5 05:11:35 localhost nova_compute[280228]: 2025-12-05 10:11:35.034 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:35 localhost kernel: device tap784c809d-4a entered promiscuous mode
Dec 5 05:11:35 localhost nova_compute[280228]: 2025-12-05 10:11:35.044 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:35 localhost ovn_controller[153000]: 2025-12-05T10:11:35Z|00274|binding|INFO|Claiming lport 784c809d-4aaf-40c6-a686-08b07682612b for this chassis.
Dec 5 05:11:35 localhost NetworkManager[5960]: [1764929495.0473] manager: (tap784c809d-4a): new Generic device (/org/freedesktop/NetworkManager/Devices/47)
Dec 5 05:11:35 localhost ovn_controller[153000]: 2025-12-05T10:11:35Z|00275|binding|INFO|784c809d-4aaf-40c6-a686-08b07682612b: Claiming unknown
Dec 5 05:11:35 localhost systemd-udevd[318112]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:11:35 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:35.055 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-87f6b878-aa4e-482e-bb39-6a396918e692', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87f6b878-aa4e-482e-bb39-6a396918e692', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dba761eb9482439aa79c2d9ffe5c0dfa', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eb6e251-39ec-46fa-beed-15149b68ad7c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=784c809d-4aaf-40c6-a686-08b07682612b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:11:35 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:35.058 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 784c809d-4aaf-40c6-a686-08b07682612b in datapath 87f6b878-aa4e-482e-bb39-6a396918e692 bound to our chassis#033[00m
Dec 5 05:11:35 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:35.060 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 87f6b878-aa4e-482e-bb39-6a396918e692 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 5 05:11:35 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:35.061 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[877b4f5b-6508-455b-ad8a-ec409aa94224]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:11:35 localhost journal[228791]: ethtool ioctl error on tap784c809d-4a: No such device
Dec 5 05:11:35 localhost ovn_controller[153000]: 2025-12-05T10:11:35Z|00276|binding|INFO|Setting lport 784c809d-4aaf-40c6-a686-08b07682612b ovn-installed in OVS
Dec 5 05:11:35 localhost ovn_controller[153000]: 2025-12-05T10:11:35Z|00277|binding|INFO|Setting lport 784c809d-4aaf-40c6-a686-08b07682612b up in Southbound
Dec 5 05:11:35 localhost nova_compute[280228]: 2025-12-05 10:11:35.085 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:35 localhost journal[228791]: ethtool ioctl error on tap784c809d-4a: No such device
Dec 5 05:11:35 localhost journal[228791]: ethtool ioctl error on tap784c809d-4a: No such device
Dec 5 05:11:35 localhost journal[228791]: ethtool ioctl error on tap784c809d-4a: No such device
Dec 5 05:11:35 localhost journal[228791]: ethtool ioctl error on tap784c809d-4a: No such device
Dec 5 05:11:35 localhost journal[228791]: ethtool ioctl error on tap784c809d-4a: No such device
Dec 5 05:11:35 localhost journal[228791]: ethtool ioctl error on tap784c809d-4a: No such device
Dec 5 05:11:35 localhost journal[228791]: ethtool ioctl error on tap784c809d-4a: No such device
Dec 5 05:11:35 localhost nova_compute[280228]: 2025-12-05 10:11:35.126 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:35 localhost nova_compute[280228]: 2025-12-05 10:11:35.159 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 13 KiB/s wr, 10 op/s
Dec 5 05:11:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:11:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e153 do_prune osdmap full prune enabled
Dec 5 05:11:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e154 e154: 6 total, 6 up, 6 in
Dec 5 05:11:35 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e154: 6 total, 6 up, 6 in
Dec 5 05:11:36 localhost podman[318183]:
Dec 5 05:11:36 localhost podman[318183]: 2025-12-05 10:11:36.055995166 +0000 UTC m=+0.106671187 container create ecad34898f3a359ba55b09fb8068872f7477b385d373ad998c7b4c8d2c7f0f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-87f6b878-aa4e-482e-bb39-6a396918e692, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 5 05:11:36 localhost systemd[1]: Started libpod-conmon-ecad34898f3a359ba55b09fb8068872f7477b385d373ad998c7b4c8d2c7f0f60.scope.
Dec 5 05:11:36 localhost podman[318183]: 2025-12-05 10:11:36.002294981 +0000 UTC m=+0.052971052 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:11:36 localhost systemd[1]: tmp-crun.vQAaFG.mount: Deactivated successfully.
Dec 5 05:11:36 localhost systemd[1]: Started libcrun container.
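[Editor's note] The "Matched UPDATE: PortBindingUpdatedEvent(...)" debug lines come from ovsdbapp's row-event machinery (the same event.py the log cites): the metadata agent registers an event on the OVN Southbound Port_Binding table and reacts when a port's chassis column changes. A minimal sketch of that pattern, assuming ovsdbapp's RowEvent interface; the class name and filtering below are illustrative, not Neutron's actual implementation, and registration through the IDL connection's event handler is omitted:

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class ChassisPortWatcher(row_event.RowEvent):
    """Illustrative analogue of the PortBindingUpdatedEvent matched above."""

    def __init__(self, chassis_name):
        self.chassis_name = chassis_name
        # Same (events, table, conditions) triple the agent's repr shows:
        # events=('update',), table='Port_Binding', conditions=None.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # React only to ports whose requested-chassis option names this node,
        # mirroring the "bound to/unbound from our chassis" lines in the log.
        if row.options.get('requested-chassis') != self.chassis_name:
            return
        state = 'bound to' if row.chassis else 'unbound from'
        print(f'Port {row.logical_port} {state} our chassis')
```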
Dec 5 05:11:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c920decbdff10e001b6deb4fece6db5def6c487c901082ba9aee950f94dcec01/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:11:36 localhost podman[318183]: 2025-12-05 10:11:36.135668325 +0000 UTC m=+0.186344326 container init ecad34898f3a359ba55b09fb8068872f7477b385d373ad998c7b4c8d2c7f0f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-87f6b878-aa4e-482e-bb39-6a396918e692, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 5 05:11:36 localhost podman[318183]: 2025-12-05 10:11:36.14954277 +0000 UTC m=+0.200218771 container start ecad34898f3a359ba55b09fb8068872f7477b385d373ad998c7b4c8d2c7f0f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-87f6b878-aa4e-482e-bb39-6a396918e692, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 5 05:11:36 localhost dnsmasq[318202]: started, version 2.85 cachesize 150
Dec 5 05:11:36 localhost dnsmasq[318202]: DNS service limited to local subnets
Dec 5 05:11:36 localhost dnsmasq[318202]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:11:36 localhost dnsmasq[318202]: warning: no upstream servers configured
Dec 5 05:11:36 localhost dnsmasq-dhcp[318202]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 5 05:11:36 localhost dnsmasq[318202]: read /var/lib/neutron/dhcp/87f6b878-aa4e-482e-bb39-6a396918e692/addn_hosts - 0 addresses
Dec 5 05:11:36 localhost dnsmasq-dhcp[318202]: read /var/lib/neutron/dhcp/87f6b878-aa4e-482e-bb39-6a396918e692/host
Dec 5 05:11:36 localhost dnsmasq-dhcp[318202]: read /var/lib/neutron/dhcp/87f6b878-aa4e-482e-bb39-6a396918e692/opts
Dec 5 05:11:36 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:36.210 261902 INFO neutron.agent.dhcp.agent [None req-c5db9761-84ce-4dad-a3aa-eeac5009465b - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:11:34Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e425c774-44c4-44b3-abfa-160e31c3bd79, ip_allocation=immediate, mac_address=fa:16:3e:08:80:6b, name=tempest-PortsIpV6TestJSON-1713629917, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:11:29Z, description=, dns_domain=, id=87f6b878-aa4e-482e-bb39-6a396918e692, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-2068683517, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=24102, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2220, status=ACTIVE, subnets=['018dcfc6-1153-483b-a30a-ebd782ec02ad'], tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:11:30Z, vlan_transparent=None, network_id=87f6b878-aa4e-482e-bb39-6a396918e692, port_security_enabled=True, project_id=dba761eb9482439aa79c2d9ffe5c0dfa, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9c5d500d-a686-46a9-8ad0-737ee529f53d'], standard_attr_id=2234, status=DOWN, tags=[], tenant_id=dba761eb9482439aa79c2d9ffe5c0dfa, updated_at=2025-12-05T10:11:34Z on network 87f6b878-aa4e-482e-bb39-6a396918e692#033[00m
Dec 5 05:11:36 localhost dnsmasq[318202]: read /var/lib/neutron/dhcp/87f6b878-aa4e-482e-bb39-6a396918e692/addn_hosts - 1 addresses
Dec 5 05:11:36 localhost dnsmasq-dhcp[318202]: read /var/lib/neutron/dhcp/87f6b878-aa4e-482e-bb39-6a396918e692/host
Dec 5 05:11:36 localhost podman[318221]: 2025-12-05 10:11:36.362416998 +0000 UTC m=+0.048875707 container kill ecad34898f3a359ba55b09fb8068872f7477b385d373ad998c7b4c8d2c7f0f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-87f6b878-aa4e-482e-bb39-6a396918e692, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 5 05:11:36 localhost dnsmasq-dhcp[318202]: read /var/lib/neutron/dhcp/87f6b878-aa4e-482e-bb39-6a396918e692/opts
Dec 5 05:11:36 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:36.450 261902 INFO neutron.agent.dhcp.agent [None req-606e52fa-2211-4a2e-83fa-e92b08c2c56c - - - - - -] DHCP configuration for ports {'eff4d527-1b9c-439e-817d-e685f9eb06e5'} is completed#033[00m
Dec 5 05:11:36 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:36.531 2 INFO neutron.agent.securitygroups_rpc [None req-55cddc01-31b2-4409-889c-7116c714aef0 3bf116b8d9ba48c49fc0ffd65e2e4fda 8cb09bff88504c818b275bb285c1a663 - - default default] Security group rule updated ['cbc56729-8ff3-450f-b83f-5c75940540af']#033[00m
Dec 5 05:11:36 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:36.574 261902 INFO neutron.agent.dhcp.agent [None req-8cffedf4-b929-4c7c-bf6f-ca68f4bda175 - - - - - -] DHCP configuration for ports {'e425c774-44c4-44b3-abfa-160e31c3bd79'} is completed#033[00m
Dec 5 05:11:36 localhost dnsmasq[318202]: exiting on receipt of SIGTERM
Dec 5 05:11:36 localhost podman[318259]: 2025-12-05 10:11:36.763474307 +0000 UTC m=+0.059313717 container kill ecad34898f3a359ba55b09fb8068872f7477b385d373ad998c7b4c8d2c7f0f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-87f6b878-aa4e-482e-bb39-6a396918e692, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 05:11:36 localhost systemd[1]: libpod-ecad34898f3a359ba55b09fb8068872f7477b385d373ad998c7b4c8d2c7f0f60.scope: Deactivated successfully.
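[Editor's note] The dnsmasq "read .../addn_hosts", "/host" and "/opts" lines show the per-network files under /var/lib/neutron/dhcp/<network_id>/ that the DHCP agent rewrites on a port change, after which dnsmasq is signalled to re-read them (dnsmasq reloads addn-hosts, dhcp-hostsfile and dhcp-optsfile on SIGHUP; here the signal is delivered into the container, which podman records as a "container kill" event). A minimal sketch of that reload step under those assumptions; the helper name and file formats shown are illustrative, not the agent's code:

```python
import os
import signal

DHCP_DIR = '/var/lib/neutron/dhcp'  # per-network layout, as in the log

def add_host_entry(network_id, mac, hostname, ip):
    """Append one static lease and make dnsmasq re-read its files."""
    net_dir = os.path.join(DHCP_DIR, network_id)
    # Extra /etc/hosts-style record (what the "addn_hosts - N addresses"
    # lines count when dnsmasq re-reads the file).
    with open(os.path.join(net_dir, 'addn_hosts'), 'a') as f:
        f.write(f'{ip}\t{hostname}\n')
    # dhcp-hostsfile static lease; IPv6 addresses go in brackets.
    with open(os.path.join(net_dir, 'host'), 'a') as f:
        f.write(f'{mac},{hostname},[{ip}]\n')
    # dnsmasq runs in a podman container here, so in practice the HUP is
    # `podman kill --signal HUP <name>`; a pid-file kill is shown for brevity.
    with open(os.path.join(net_dir, 'pid')) as f:
        os.kill(int(f.read().strip()), signal.SIGHUP)
```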
Dec 5 05:11:36 localhost podman[318274]: 2025-12-05 10:11:36.839297309 +0000 UTC m=+0.056367607 container died ecad34898f3a359ba55b09fb8068872f7477b385d373ad998c7b4c8d2c7f0f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-87f6b878-aa4e-482e-bb39-6a396918e692, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:11:36 localhost podman[318274]: 2025-12-05 10:11:36.879438118 +0000 UTC m=+0.096508376 container cleanup ecad34898f3a359ba55b09fb8068872f7477b385d373ad998c7b4c8d2c7f0f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-87f6b878-aa4e-482e-bb39-6a396918e692, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:11:36 localhost systemd[1]: libpod-conmon-ecad34898f3a359ba55b09fb8068872f7477b385d373ad998c7b4c8d2c7f0f60.scope: Deactivated successfully.
Dec 5 05:11:36 localhost podman[318275]: 2025-12-05 10:11:36.924458467 +0000 UTC m=+0.139967817 container remove ecad34898f3a359ba55b09fb8068872f7477b385d373ad998c7b4c8d2c7f0f60 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-87f6b878-aa4e-482e-bb39-6a396918e692, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:11:36 localhost ovn_controller[153000]: 2025-12-05T10:11:36Z|00278|binding|INFO|Releasing lport 784c809d-4aaf-40c6-a686-08b07682612b from this chassis (sb_readonly=0)
Dec 5 05:11:36 localhost nova_compute[280228]: 2025-12-05 10:11:36.936 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:36 localhost kernel: device tap784c809d-4a left promiscuous mode
Dec 5 05:11:36 localhost ovn_controller[153000]: 2025-12-05T10:11:36Z|00279|binding|INFO|Setting lport 784c809d-4aaf-40c6-a686-08b07682612b down in Southbound
Dec 5 05:11:36 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:36.945 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-87f6b878-aa4e-482e-bb39-6a396918e692', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87f6b878-aa4e-482e-bb39-6a396918e692', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dba761eb9482439aa79c2d9ffe5c0dfa', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eb6e251-39ec-46fa-beed-15149b68ad7c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=784c809d-4aaf-40c6-a686-08b07682612b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:11:36 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:36.947 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 784c809d-4aaf-40c6-a686-08b07682612b in datapath 87f6b878-aa4e-482e-bb39-6a396918e692 unbound from our chassis#033[00m
Dec 5 05:11:36 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:36.951 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 87f6b878-aa4e-482e-bb39-6a396918e692 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 5 05:11:36 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:36.952 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[aee83cc5-d6bf-4efd-b311-d16a12f6c503]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:11:36 localhost nova_compute[280228]: 2025-12-05 10:11:36.956 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:37 localhost systemd[1]: tmp-crun.mKyath.mount: Deactivated successfully.
Dec 5 05:11:37 localhost systemd[1]: var-lib-containers-storage-overlay-c920decbdff10e001b6deb4fece6db5def6c487c901082ba9aee950f94dcec01-merged.mount: Deactivated successfully.
Dec 5 05:11:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ecad34898f3a359ba55b09fb8068872f7477b385d373ad998c7b4c8d2c7f0f60-userdata-shm.mount: Deactivated successfully.
Dec 5 05:11:37 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:37.158 2 INFO neutron.agent.securitygroups_rpc [None req-965108e7-c0e8-4e67-9212-37e12b776ebc 6453145ee998467c9ec05e62f8e927f2 8eebb9e73adb4a259afe086ebdfad16e - - default default] Security group member updated ['6de03d90-430f-407b-8d0d-b1ca66c7d4e8']#033[00m
Dec 5 05:11:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 171 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 926 KiB/s wr, 43 op/s
Dec 5 05:11:37 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:37.355 261902 INFO neutron.agent.dhcp.agent [None req-fe629a88-c646-4e9d-8285-ab190905356b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:11:37 localhost systemd[1]: run-netns-qdhcp\x2d87f6b878\x2daa4e\x2d482e\x2dbb39\x2d6a396918e692.mount: Deactivated successfully.
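[Editor's note] The run-netns-qdhcp\x2d... mount deactivation is the visible trace of the per-network namespace qdhcp-<network_id> being deleted once the DHCP port is unbound and the dnsmasq container removed. A minimal sketch of that final cleanup step, assuming pyroute2 (the library neutron's ip_lib drives via privsep); the function name is mine and the real agent also unplugs the tap device first:

```python
from pyroute2 import netns

def cleanup_dhcp_namespace(network_id):
    """Remove the qdhcp-<network_id> namespace if it is still around."""
    name = f'qdhcp-{network_id}'
    if name in netns.listnetns():
        netns.remove(name)  # drops the bind mount under /run/netns
        print(f'removed {name}')

cleanup_dhcp_namespace('87f6b878-aa4e-482e-bb39-6a396918e692')
```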
Dec 5 05:11:37 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:37.356 261902 INFO neutron.agent.dhcp.agent [None req-fe629a88-c646-4e9d-8285-ab190905356b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:11:37 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:37.357 261902 INFO neutron.agent.dhcp.agent [None req-fe629a88-c646-4e9d-8285-ab190905356b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:11:37 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:37.865 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:11:37 localhost nova_compute[280228]: 2025-12-05 10:11:37.990 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b7c2d7d8-57b5-40a5-9e26-8515c67f1048", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:11:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b7c2d7d8-57b5-40a5-9e26-8515c67f1048, vol_name:cephfs) < ""
Dec 5 05:11:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b7c2d7d8-57b5-40a5-9e26-8515c67f1048/.meta.tmp'
Dec 5 05:11:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b7c2d7d8-57b5-40a5-9e26-8515c67f1048/.meta.tmp' to config b'/volumes/_nogroup/b7c2d7d8-57b5-40a5-9e26-8515c67f1048/.meta'
Dec 5 05:11:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b7c2d7d8-57b5-40a5-9e26-8515c67f1048, vol_name:cephfs) < ""
Dec 5 05:11:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b7c2d7d8-57b5-40a5-9e26-8515c67f1048", "format": "json"}]: dispatch
Dec 5 05:11:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b7c2d7d8-57b5-40a5-9e26-8515c67f1048, vol_name:cephfs) < ""
Dec 5 05:11:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b7c2d7d8-57b5-40a5-9e26-8515c67f1048, vol_name:cephfs) < ""
Dec 5 05:11:38 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:11:38 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:11:38 localhost ovn_controller[153000]: 2025-12-05T10:11:38Z|00280|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:11:38 localhost nova_compute[280228]: 2025-12-05 10:11:38.209 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:38 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:38.394 2 INFO neutron.agent.securitygroups_rpc [None req-285387f2-eddd-474f-887d-e0e2d505f1be 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m
Dec 5 05:11:38 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e154 do_prune osdmap full prune enabled
Dec 5 05:11:38 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e155 e155: 6 total, 6 up, 6 in
Dec 5 05:11:38 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e155: 6 total, 6 up, 6 in
Dec 5 05:11:38 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:38.948 2 INFO neutron.agent.securitygroups_rpc [None req-aa155b96-5a12-44b5-9136-21c1b0d4baef 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m
Dec 5 05:11:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 171 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 595 KiB/s rd, 1.1 MiB/s wr, 42 op/s
Dec 5 05:11:39 localhost nova_compute[280228]: 2025-12-05 10:11:39.564 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:11:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:11:40 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2527535918' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:11:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:11:40 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2527535918' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:11:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 193 MiB data, 884 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 2.7 MiB/s wr, 99 op/s
Dec 5 05:11:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:11:41 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2922772668' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:11:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:11:41 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2922772668' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:11:42 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:42.143 261902 INFO neutron.agent.linux.ip_lib [None req-c8a29ce8-c990-43ff-8690-167f433c863d - - - - - -] Device tap602e9250-ff cannot be used as it has no MAC address#033[00m
Dec 5 05:11:42 localhost nova_compute[280228]: 2025-12-05 10:11:42.166 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:42 localhost kernel: device tap602e9250-ff entered promiscuous mode
Dec 5 05:11:42 localhost ovn_controller[153000]: 2025-12-05T10:11:42Z|00281|binding|INFO|Claiming lport 602e9250-ffba-460b-bf2c-90883470bcd1 for this chassis.
Dec 5 05:11:42 localhost ovn_controller[153000]: 2025-12-05T10:11:42Z|00282|binding|INFO|602e9250-ffba-460b-bf2c-90883470bcd1: Claiming unknown
Dec 5 05:11:42 localhost NetworkManager[5960]: [1764929502.1743] manager: (tap602e9250-ff): new Generic device (/org/freedesktop/NetworkManager/Devices/48)
Dec 5 05:11:42 localhost nova_compute[280228]: 2025-12-05 10:11:42.173 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:42 localhost systemd-udevd[318313]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:11:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:42.196 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-b8ed7b03-9c44-4dc6-8b7f-c34d51698071', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ed7b03-9c44-4dc6-8b7f-c34d51698071', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69f403c7-fdab-47f1-b187-c07e850c0fa0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=602e9250-ffba-460b-bf2c-90883470bcd1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:11:42 localhost journal[228791]: ethtool ioctl error on tap602e9250-ff: No such device
Dec 5 05:11:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:42.199 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 602e9250-ffba-460b-bf2c-90883470bcd1 in datapath b8ed7b03-9c44-4dc6-8b7f-c34d51698071 bound to our chassis#033[00m
Dec 5 05:11:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:42.200 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b8ed7b03-9c44-4dc6-8b7f-c34d51698071 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 5 05:11:42 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:42.202 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[681c81c8-3343-432e-b3a0-be9fd044252e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:11:42 localhost ovn_controller[153000]: 2025-12-05T10:11:42Z|00283|binding|INFO|Setting lport 602e9250-ffba-460b-bf2c-90883470bcd1 ovn-installed in OVS
Dec 5 05:11:42 localhost ovn_controller[153000]: 2025-12-05T10:11:42Z|00284|binding|INFO|Setting lport 602e9250-ffba-460b-bf2c-90883470bcd1 up in Southbound
Dec 5 05:11:42 localhost journal[228791]: ethtool ioctl error on tap602e9250-ff: No such device
Dec 5 05:11:42 localhost nova_compute[280228]: 2025-12-05 10:11:42.204 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:42 localhost journal[228791]: ethtool ioctl error on tap602e9250-ff: No such device
Dec 5 05:11:42 localhost journal[228791]: ethtool ioctl error on tap602e9250-ff: No such device
Dec 5 05:11:42 localhost journal[228791]: ethtool ioctl error on tap602e9250-ff: No such device
Dec 5 05:11:42 localhost journal[228791]: ethtool ioctl error on tap602e9250-ff: No such device
Dec 5 05:11:42 localhost journal[228791]: ethtool ioctl error on tap602e9250-ff: No such device
Dec 5 05:11:42 localhost journal[228791]: ethtool ioctl error on tap602e9250-ff: No such device
Dec 5 05:11:42 localhost nova_compute[280228]: 2025-12-05 10:11:42.249 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:42 localhost nova_compute[280228]: 2025-12-05 10:11:42.279 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:43 localhost nova_compute[280228]: 2025-12-05 10:11:43.026 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:43 localhost podman[318384]:
Dec 5 05:11:43 localhost podman[318384]: 2025-12-05 10:11:43.081507404 +0000 UTC m=+0.077833124 container create 748f4960e5b9a0d5f36bd73738a6482cf4ea45bdf6b7fe1f8575681294b91281 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:11:43 localhost systemd[1]: Started libpod-conmon-748f4960e5b9a0d5f36bd73738a6482cf4ea45bdf6b7fe1f8575681294b91281.scope.
Dec 5 05:11:43 localhost systemd[1]: Started libcrun container.
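[Editor's note] The audit-channel lines above show the exact JSON commands the "client.openstack" identity dispatches to the mgr volumes module (fs subvolume create, then fs subvolume getpath). A minimal sketch of issuing the same commands through librados' Python binding, assuming the standard python3-rados API and the conffile/keyring paths being in place; this is an illustration of the wire-level call, not the Manila driver's code:

```python
import json
import rados

# Same client identity the audit log records dispatching these commands.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()

def mgr(cmd):
    # mgr_command takes a JSON-encoded command string plus an input buffer
    # and returns (retcode, output bytes, status string).
    ret, out, err = cluster.mgr_command(json.dumps(cmd), b'')
    if ret != 0:
        raise RuntimeError(err)
    return json.loads(out) if out else None

# The commands the audit channel logged being dispatched:
mgr({"prefix": "fs subvolume create", "vol_name": "cephfs",
     "sub_name": "b7c2d7d8-57b5-40a5-9e26-8515c67f1048",
     "size": 1073741824, "namespace_isolated": True,
     "mode": "0755", "format": "json"})
path = mgr({"prefix": "fs subvolume getpath", "vol_name": "cephfs",
            "sub_name": "b7c2d7d8-57b5-40a5-9e26-8515c67f1048",
            "format": "json"})
```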
Dec 5 05:11:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3833b8b245e867ad2ddf0294718442287f3479e09bae721a89b23bd8315841a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:11:43 localhost podman[318384]: 2025-12-05 10:11:43.144371579 +0000 UTC m=+0.140697289 container init 748f4960e5b9a0d5f36bd73738a6482cf4ea45bdf6b7fe1f8575681294b91281 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:11:43 localhost podman[318384]: 2025-12-05 10:11:43.048801692 +0000 UTC m=+0.045127422 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:11:43 localhost podman[318384]: 2025-12-05 10:11:43.153049224 +0000 UTC m=+0.149374944 container start 748f4960e5b9a0d5f36bd73738a6482cf4ea45bdf6b7fe1f8575681294b91281 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:11:43 localhost dnsmasq[318402]: started, version 2.85 cachesize 150
Dec 5 05:11:43 localhost dnsmasq[318402]: DNS service limited to local subnets
Dec 5 05:11:43 localhost dnsmasq[318402]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:11:43 localhost dnsmasq[318402]: warning: no upstream servers configured
Dec 5 05:11:43 localhost dnsmasq-dhcp[318402]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 5 05:11:43 localhost dnsmasq[318402]: read /var/lib/neutron/dhcp/b8ed7b03-9c44-4dc6-8b7f-c34d51698071/addn_hosts - 0 addresses
Dec 5 05:11:43 localhost dnsmasq-dhcp[318402]: read /var/lib/neutron/dhcp/b8ed7b03-9c44-4dc6-8b7f-c34d51698071/host
Dec 5 05:11:43 localhost dnsmasq-dhcp[318402]: read /var/lib/neutron/dhcp/b8ed7b03-9c44-4dc6-8b7f-c34d51698071/opts
Dec 5 05:11:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 193 MiB data, 884 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 2.7 MiB/s wr, 105 op/s
Dec 5 05:11:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:43.523 261902 INFO neutron.agent.dhcp.agent [None req-064c8f31-089b-4bb1-b348-24706518cee7 - - - - - -] DHCP configuration for ports {'c8fbfaad-5508-4e8d-a954-694c6ab48d92'} is completed#033[00m
Dec 5 05:11:43 localhost dnsmasq[318402]: read /var/lib/neutron/dhcp/b8ed7b03-9c44-4dc6-8b7f-c34d51698071/addn_hosts - 0 addresses
Dec 5 05:11:43 localhost dnsmasq-dhcp[318402]: read /var/lib/neutron/dhcp/b8ed7b03-9c44-4dc6-8b7f-c34d51698071/host
Dec 5 05:11:43 localhost dnsmasq-dhcp[318402]: read /var/lib/neutron/dhcp/b8ed7b03-9c44-4dc6-8b7f-c34d51698071/opts
Dec 5 05:11:43 localhost podman[318420]: 2025-12-05 10:11:43.61800866 +0000 UTC m=+0.032214157 container kill 748f4960e5b9a0d5f36bd73738a6482cf4ea45bdf6b7fe1f8575681294b91281 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 5 05:11:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:43.957 261902 INFO neutron.agent.dhcp.agent [None req-3bc6af6e-3c0b-4261-9026-c5930150baca - - - - - -] DHCP configuration for ports {'602e9250-ffba-460b-bf2c-90883470bcd1', 'c8fbfaad-5508-4e8d-a954-694c6ab48d92'} is completed#033[00m
Dec 5 05:11:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b7c2d7d8-57b5-40a5-9e26-8515c67f1048", "format": "json"}]: dispatch
Dec 5 05:11:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b7c2d7d8-57b5-40a5-9e26-8515c67f1048, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:11:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b7c2d7d8-57b5-40a5-9e26-8515c67f1048, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:11:44 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b7c2d7d8-57b5-40a5-9e26-8515c67f1048' of type subvolume
Dec 5 05:11:44 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:11:44.296+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b7c2d7d8-57b5-40a5-9e26-8515c67f1048' of type subvolume
Dec 5 05:11:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b7c2d7d8-57b5-40a5-9e26-8515c67f1048", "force": true, "format": "json"}]: dispatch
Dec 5 05:11:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b7c2d7d8-57b5-40a5-9e26-8515c67f1048, vol_name:cephfs) < ""
Dec 5 05:11:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b7c2d7d8-57b5-40a5-9e26-8515c67f1048'' moved to trashcan
Dec 5 05:11:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:11:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b7c2d7d8-57b5-40a5-9e26-8515c67f1048, vol_name:cephfs) < ""
Dec 5 05:11:44 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:44.381 2 INFO neutron.agent.securitygroups_rpc [None req-ee3bec68-6531-4fdf-bfc7-09d2f8472e15 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m
Dec 5 05:11:44 localhost nova_compute[280228]: 2025-12-05 10:11:44.568 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:44 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:44.590 2 INFO neutron.agent.securitygroups_rpc [None req-35f96a62-bc9a-44a1-b1ac-958011ae1ee9 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m
Dec 5 05:11:44 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:44.727 2 INFO neutron.agent.securitygroups_rpc [None req-4e3d1bd9-c627-4f4a-847b-61ac809c1b12 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m
Dec 5 05:11:44 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:44.791 261902 INFO neutron.agent.linux.ip_lib [None req-d2dfa1bf-953f-4318-8b05-67d056d5e52e - - - - - -] Device tapac1614bb-7e cannot be used as it has no MAC address#033[00m
Dec 5 05:11:44 localhost nova_compute[280228]: 2025-12-05 10:11:44.818 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:44 localhost kernel: device tapac1614bb-7e entered promiscuous mode
Dec 5 05:11:44 localhost NetworkManager[5960]: [1764929504.8254] manager: (tapac1614bb-7e): new Generic device (/org/freedesktop/NetworkManager/Devices/49)
Dec 5 05:11:44 localhost ovn_controller[153000]: 2025-12-05T10:11:44Z|00285|binding|INFO|Claiming lport ac1614bb-7eed-4a73-8346-a79b8a1800be for this chassis.
Dec 5 05:11:44 localhost ovn_controller[153000]: 2025-12-05T10:11:44Z|00286|binding|INFO|ac1614bb-7eed-4a73-8346-a79b8a1800be: Claiming unknown
Dec 5 05:11:44 localhost nova_compute[280228]: 2025-12-05 10:11:44.827 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:44 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:44.835 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-13031b06-d963-40b3-b655-dc0aa50c254c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13031b06-d963-40b3-b655-dc0aa50c254c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f447a3a6-f3ed-4c82-81a5-ccd3154d7f54, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ac1614bb-7eed-4a73-8346-a79b8a1800be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:11:44 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:44.836 158820 INFO neutron.agent.ovn.metadata.agent [-] Port ac1614bb-7eed-4a73-8346-a79b8a1800be in datapath 13031b06-d963-40b3-b655-dc0aa50c254c bound to our chassis#033[00m
Dec 5 05:11:44 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:44.839 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 13031b06-d963-40b3-b655-dc0aa50c254c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 5 05:11:44 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:44.840 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8517f4-4dad-4184-bdda-88dbc6b4cf7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:11:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e155 do_prune osdmap full prune enabled
Dec 5 05:11:44 localhost ovn_controller[153000]: 2025-12-05T10:11:44Z|00287|binding|INFO|Setting lport ac1614bb-7eed-4a73-8346-a79b8a1800be ovn-installed in OVS
Dec 5 05:11:44 localhost ovn_controller[153000]: 2025-12-05T10:11:44Z|00288|binding|INFO|Setting lport ac1614bb-7eed-4a73-8346-a79b8a1800be up in Southbound
Dec 5 05:11:44 localhost nova_compute[280228]: 2025-12-05 10:11:44.870 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e156 e156: 6 total, 6 up, 6 in
Dec 5 05:11:44 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e156: 6 total, 6 up, 6 in
Dec 5 05:11:44 localhost nova_compute[280228]: 2025-12-05 10:11:44.910 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:44 localhost nova_compute[280228]: 2025-12-05 10:11:44.941 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:11:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:11:45
Dec 5 05:11:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 5 05:11:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap
Dec 5 05:11:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['vms', 'images', 'backups', 'manila_data', 'volumes', '.mgr', 'manila_metadata']
Dec 5 05:11:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes
Dec 5 05:11:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:11:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:11:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:11:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:11:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
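[Editor's note] The pg_autoscaler lines just below report, per pool, a fraction of raw space used, a bias, and a "pg target" that is then quantized. The numbers are consistent with pg target = used_fraction x bias x (target PGs per OSD x OSD count / replica count): for 'vms', 0.0033250017448352874 x 1.0 x (100 x 6 / 3) reproduces the logged 0.6650003489670575 exactly. The 200x factor is an inference for this 6-OSD, 3-replica cluster with the default mon_target_pg_per_osd=100, not something the log states, and a couple of pools ('manila_data', 'manila_metadata') land slightly off, suggesting the mgr uses a marginally different capacity basis for them. A sketch of that arithmetic under those assumptions:

```python
def pg_target(used_fraction, bias, osds=6, pg_per_osd=100, replicas=3):
    # Reconstruction of the pg_autoscaler arithmetic seen in the log;
    # osds/pg_per_osd/replicas are inferred for this cluster, so treat
    # the resulting 200x multiplier as an assumption.
    return used_fraction * bias * (pg_per_osd * osds / replicas)

print(pg_target(0.0033250017448352874, 1.0))  # 0.6650003489670575 ('vms', matches)
print(pg_target(6.843025893355668e-05, 4.0))  # ~0.0547 ('manila_metadata'; log shows 0.0545)
```

Targets far below the current pg_num still read "quantized to 32 (current 32)" because the autoscaler rounds toward a power of two and, by default, leaves a pool alone unless the ideal and current values diverge by roughly a factor of three.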
Dec 5 05:11:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:11:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 193 MiB data, 884 MiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 1.5 MiB/s wr, 62 op/s Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001483655255443886 of space, bias 1.0, pg target 0.29623649933696256 quantized to 32 (current 32) Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:11:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 6.843025893355668e-05 of space, bias 4.0, pg target 0.05447048611111111 quantized to 16 (current 16) Dec 5 05:11:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:11:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:11:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:11:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:11:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:11:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:11:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:11:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:11:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:11:45 localhost ceph-mgr[286454]: 
Dec 5 05:11:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:11:45 localhost podman[318503]:
Dec 5 05:11:45 localhost podman[318503]: 2025-12-05 10:11:45.698766159 +0000 UTC m=+0.078437463 container create 5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13031b06-d963-40b3-b655-dc0aa50c254c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 5 05:11:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:11:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:11:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:11:45 localhost systemd[1]: Started libpod-conmon-5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b.scope.
Dec 5 05:11:45 localhost systemd[1]: Started libcrun container.
Dec 5 05:11:45 localhost podman[318503]: 2025-12-05 10:11:45.665515061 +0000 UTC m=+0.045186395 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:11:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c316627e9d1552fbaea7a8a38e4fad71720bae0458a80dbb69c932ffe7102d44/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:11:45 localhost podman[318503]: 2025-12-05 10:11:45.778425087 +0000 UTC m=+0.158096421 container init 5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13031b06-d963-40b3-b655-dc0aa50c254c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:11:45 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:45.779 2 INFO neutron.agent.securitygroups_rpc [None req-ca5d9241-35ac-438d-91f3-a12934d4a96c 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 5 05:11:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:11:45 localhost dnsmasq[318551]: started, version 2.85 cachesize 150
Dec 5 05:11:45 localhost dnsmasq[318551]: DNS service limited to local subnets
Dec 5 05:11:45 localhost dnsmasq[318551]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:11:45 localhost dnsmasq[318551]: warning: no upstream servers configured
Dec 5 05:11:45 localhost dnsmasq-dhcp[318551]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 5 05:11:45 localhost dnsmasq[318551]: read /var/lib/neutron/dhcp/13031b06-d963-40b3-b655-dc0aa50c254c/addn_hosts - 0 addresses
Dec 5 05:11:45 localhost dnsmasq-dhcp[318551]: read /var/lib/neutron/dhcp/13031b06-d963-40b3-b655-dc0aa50c254c/host
Dec 5 05:11:45 localhost dnsmasq-dhcp[318551]: read /var/lib/neutron/dhcp/13031b06-d963-40b3-b655-dc0aa50c254c/opts
Dec 5 05:11:45 localhost podman[318517]: 2025-12-05 10:11:45.826628563 +0000 UTC m=+0.087842190 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 5 05:11:45 localhost ovn_controller[153000]: 2025-12-05T10:11:45Z|00289|binding|INFO|Removing iface tap602e9250-ff ovn-installed in OVS
Dec 5 05:11:45 localhost ovn_controller[153000]: 2025-12-05T10:11:45Z|00290|binding|INFO|Removing lport 602e9250-ffba-460b-bf2c-90883470bcd1 ovn-installed in OVS
Dec 5 05:11:45 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:45.865 158820 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port d86c922c-56ce-4380-8784-9ad345dcfd30 with type ""
Dec 5 05:11:45 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:45.866 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-b8ed7b03-9c44-4dc6-8b7f-c34d51698071', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ed7b03-9c44-4dc6-8b7f-c34d51698071', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69f403c7-fdab-47f1-b187-c07e850c0fa0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=602e9250-ffba-460b-bf2c-90883470bcd1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:11:45 localhost nova_compute[280228]: 2025-12-05 10:11:45.866 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:45 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:45.867 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 602e9250-ffba-460b-bf2c-90883470bcd1 in datapath b8ed7b03-9c44-4dc6-8b7f-c34d51698071 unbound from our chassis
Dec 5 05:11:45 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:45.869 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8ed7b03-9c44-4dc6-8b7f-c34d51698071, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:11:45 localhost podman[318518]: 2025-12-05 10:11:45.870124965 +0000 UTC m=+0.130338892 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:11:45 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:45.870 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb2084f-faf9-458b-989c-8b4b9b5776ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:11:45 localhost nova_compute[280228]: 2025-12-05 10:11:45.874 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:45 localhost podman[318518]: 2025-12-05 10:11:45.879936785 +0000 UTC m=+0.140150752 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 5 05:11:45 localhost podman[318519]: 2025-12-05 10:11:45.838662582 +0000 UTC m=+0.098879379 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true)
Dec 5 05:11:45 localhost podman[318517]: 2025-12-05 10:11:45.893658606 +0000 UTC m=+0.154872203 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 05:11:45 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:11:45 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:11:45 localhost podman[318519]: 2025-12-05 10:11:45.922796698 +0000 UTC m=+0.183013505 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 5 05:11:45 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
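[Editor's note] The "Matched DELETE: PortBindingDeletedEvent(...) ... ovsdbapp/backend/ovs_idl/event.py:43" entries above are ovsdbapp's row-event machinery reporting that an OVN southbound Port_Binding row matched a registered event class. A minimal sketch of that pattern using ovsdbapp's RowEvent base; the class name mirrors the log, but the wiring that registers it with the agent's OVN SB connection is an assumption and is not shown:

    # Sketch of the event class behind "Matched DELETE:
    # PortBindingDeletedEvent(events=('delete',), table='Port_Binding', ...)".
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingDeletedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('delete',), table='Port_Binding', conditions=None --
            # the same triple printed in the DEBUG line above.
            super().__init__((self.ROW_DELETE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # The agent reacts here, e.g. the subsequent "Port ... unbound
            # from our chassis" INFO line and the namespace teardown check.
            print('Port_Binding %s deleted' % row.logical_port)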
Dec 5 05:11:45 localhost podman[318503]: 2025-12-05 10:11:45.946779282 +0000 UTC m=+0.326450586 container start 5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13031b06-d963-40b3-b655-dc0aa50c254c, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:11:45 localhost dnsmasq[318402]: exiting on receipt of SIGTERM
Dec 5 05:11:45 localhost podman[318597]: 2025-12-05 10:11:45.997929639 +0000 UTC m=+0.049852478 container kill 748f4960e5b9a0d5f36bd73738a6482cf4ea45bdf6b7fe1f8575681294b91281 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 5 05:11:45 localhost systemd[1]: libpod-748f4960e5b9a0d5f36bd73738a6482cf4ea45bdf6b7fe1f8575681294b91281.scope: Deactivated successfully.
Dec 5 05:11:46 localhost podman[318612]: 2025-12-05 10:11:46.064802406 +0000 UTC m=+0.050115175 container died 748f4960e5b9a0d5f36bd73738a6482cf4ea45bdf6b7fe1f8575681294b91281 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:11:46 localhost podman[318612]: 2025-12-05 10:11:46.095590618 +0000 UTC m=+0.080903357 container cleanup 748f4960e5b9a0d5f36bd73738a6482cf4ea45bdf6b7fe1f8575681294b91281 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:11:46 localhost systemd[1]: libpod-conmon-748f4960e5b9a0d5f36bd73738a6482cf4ea45bdf6b7fe1f8575681294b91281.scope: Deactivated successfully.
Dec 5 05:11:46 localhost podman[318614]: 2025-12-05 10:11:46.120448449 +0000 UTC m=+0.093701189 container remove 748f4960e5b9a0d5f36bd73738a6482cf4ea45bdf6b7fe1f8575681294b91281 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 5 05:11:46 localhost nova_compute[280228]: 2025-12-05 10:11:46.132 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:46 localhost kernel: device tap602e9250-ff left promiscuous mode
Dec 5 05:11:46 localhost nova_compute[280228]: 2025-12-05 10:11:46.145 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:46 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:46.205 261902 INFO neutron.agent.dhcp.agent [None req-943f2ce9-b4f5-4c36-a3fe-95d8833eb903 - - - - - -] Synchronizing state
Dec 5 05:11:46 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:46.210 261902 INFO neutron.agent.dhcp.agent [None req-f4a5ec8a-e025-4d93-8846-633410bd215a - - - - - -] DHCP configuration for ports {'5ee77985-783b-4bf3-9d7d-abdd88df6464'} is completed
Dec 5 05:11:46 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:46.368 261902 INFO neutron.agent.dhcp.agent [None req-7603fbd6-8a67-4ff1-ab7f-ee44739b02b7 - - - - - -] All active networks have been fetched through RPC.
Dec 5 05:11:46 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:46.370 261902 INFO neutron.agent.dhcp.agent [-] Starting network b8ed7b03-9c44-4dc6-8b7f-c34d51698071 dhcp configuration
Dec 5 05:11:46 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:46.691 2 INFO neutron.agent.securitygroups_rpc [None req-2472a8ec-d91a-458d-b5a9-cca43a6faa0b 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 5 05:11:46 localhost systemd[1]: tmp-crun.cnmqgY.mount: Deactivated successfully.
Dec 5 05:11:46 localhost systemd[1]: var-lib-containers-storage-overlay-e3833b8b245e867ad2ddf0294718442287f3479e09bae721a89b23bd8315841a-merged.mount: Deactivated successfully.
Dec 5 05:11:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-748f4960e5b9a0d5f36bd73738a6482cf4ea45bdf6b7fe1f8575681294b91281-userdata-shm.mount: Deactivated successfully.
Dec 5 05:11:46 localhost systemd[1]: run-netns-qdhcp\x2db8ed7b03\x2d9c44\x2d4dc6\x2d8b7f\x2dc34d51698071.mount: Deactivated successfully.
Dec 5 05:11:47 localhost ceph-mgr[286454]: [devicehealth INFO root] Check health
Dec 5 05:11:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 193 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 76 KiB/s rd, 1.4 MiB/s wr, 104 op/s
Dec 5 05:11:48 localhost nova_compute[280228]: 2025-12-05 10:11:48.064 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:49 localhost snmpd[66746]: empty variable list in _query
Dec 5 05:11:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 193 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 1.2 MiB/s wr, 92 op/s
Dec 5 05:11:49 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:49.357 2 INFO neutron.agent.securitygroups_rpc [None req-f87f428a-cfa8-4117-93ac-ddf0a6843c9e 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 5 05:11:49 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:49.538 261902 INFO neutron.agent.linux.ip_lib [None req-65b6aafe-3642-47f6-888d-4184a1d4a108 - - - - - -] Device tap0553a7c1-3d cannot be used as it has no MAC address
Dec 5 05:11:49 localhost nova_compute[280228]: 2025-12-05 10:11:49.571 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:49 localhost nova_compute[280228]: 2025-12-05 10:11:49.574 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 5 05:11:49 localhost kernel: device tap0553a7c1-3d entered promiscuous mode
Dec 5 05:11:49 localhost NetworkManager[5960]: [1764929509.5825] manager: (tap0553a7c1-3d): new Generic device (/org/freedesktop/NetworkManager/Devices/50)
Dec 5 05:11:49 localhost nova_compute[280228]: 2025-12-05 10:11:49.582 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:49 localhost ovn_controller[153000]: 2025-12-05T10:11:49Z|00291|binding|INFO|Claiming lport 0553a7c1-3d50-4159-9f92-c695603d8f4b for this chassis.
Dec 5 05:11:49 localhost ovn_controller[153000]: 2025-12-05T10:11:49Z|00292|binding|INFO|0553a7c1-3d50-4159-9f92-c695603d8f4b: Claiming unknown
Dec 5 05:11:49 localhost systemd-udevd[318653]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:11:49 localhost ovn_controller[153000]: 2025-12-05T10:11:49Z|00293|binding|INFO|Setting lport 0553a7c1-3d50-4159-9f92-c695603d8f4b ovn-installed in OVS
Dec 5 05:11:49 localhost nova_compute[280228]: 2025-12-05 10:11:49.595 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:49 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:49.600 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-b8ed7b03-9c44-4dc6-8b7f-c34d51698071', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ed7b03-9c44-4dc6-8b7f-c34d51698071', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69f403c7-fdab-47f1-b187-c07e850c0fa0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0553a7c1-3d50-4159-9f92-c695603d8f4b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:11:49 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:49.603 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 0553a7c1-3d50-4159-9f92-c695603d8f4b in datapath b8ed7b03-9c44-4dc6-8b7f-c34d51698071 bound to our chassis
Dec 5 05:11:49 localhost ovn_controller[153000]: 2025-12-05T10:11:49Z|00294|binding|INFO|Setting lport 0553a7c1-3d50-4159-9f92-c695603d8f4b up in Southbound
Dec 5 05:11:49 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:49.604 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b8ed7b03-9c44-4dc6-8b7f-c34d51698071 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:11:49 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:49.605 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[3109dc8b-3abe-4ae6-baf8-e640d326e509]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:11:49 localhost nova_compute[280228]: 2025-12-05 10:11:49.615 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:49 localhost journal[228791]: ethtool ioctl error on tap0553a7c1-3d: No such device
Dec 5 05:11:49 localhost journal[228791]: ethtool ioctl error on tap0553a7c1-3d: No such device
Dec 5 05:11:49 localhost journal[228791]: ethtool ioctl error on tap0553a7c1-3d: No such device
Dec 5 05:11:49 localhost journal[228791]: ethtool ioctl error on tap0553a7c1-3d: No such device
Dec 5 05:11:49 localhost journal[228791]: ethtool ioctl error on tap0553a7c1-3d: No such device
Dec 5 05:11:49 localhost journal[228791]: ethtool ioctl error on tap0553a7c1-3d: No such device
Dec 5 05:11:49 localhost journal[228791]: ethtool ioctl error on tap0553a7c1-3d: No such device
Dec 5 05:11:49 localhost journal[228791]: ethtool ioctl error on tap0553a7c1-3d: No such device
Dec 5 05:11:49 localhost nova_compute[280228]: 2025-12-05 10:11:49.664 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:49 localhost nova_compute[280228]: 2025-12-05 10:11:49.697 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:49 localhost podman[239519]: time="2025-12-05T10:11:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:11:49 localhost podman[239519]: @ - - [05/Dec/2025:10:11:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159745 "" "Go-http-client/1.1"
Dec 5 05:11:49 localhost podman[239519]: @ - - [05/Dec/2025:10:11:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20209 "" "Go-http-client/1.1"
Dec 5 05:11:50 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:50.151 2 INFO neutron.agent.securitygroups_rpc [None req-10ca296f-d221-4980-916d-de770cb006e3 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 5 05:11:50 localhost podman[318722]:
Dec 5 05:11:50 localhost podman[318722]: 2025-12-05 10:11:50.51885611 +0000 UTC m=+0.070856381 container create d354e16148d2594a33f1d1d40d676ca27c537a434d8cd4c4173b07bf3d6ff511 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 5 05:11:50 localhost systemd[1]: Started libpod-conmon-d354e16148d2594a33f1d1d40d676ca27c537a434d8cd4c4173b07bf3d6ff511.scope.
Dec 5 05:11:50 localhost podman[318722]: 2025-12-05 10:11:50.478053371 +0000 UTC m=+0.030053652 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:11:50 localhost systemd[1]: tmp-crun.k9SsrS.mount: Deactivated successfully.
Dec 5 05:11:50 localhost systemd[1]: Started libcrun container.
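[Editor's note] The podman[239519] access-log entries above show a client polling the libpod REST API; per the podman_exporter config_data earlier in the log (CONTAINER_HOST=unix:///run/podman/podman.sock), that client is the prometheus-podman-exporter. A stdlib-only sketch replaying the same GET over the unix socket, assuming root access to /run/podman/podman.sock:

    # Hedged sketch: replay "GET /v4.9.3/libpod/containers/json?all=true"
    # over the podman socket using only the standard library.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path: str):
            super().__init__('localhost')
            self.sock_path = sock_path

        def connect(self):
            # Swap the TCP socket for an AF_UNIX one before speaking HTTP.
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(len(json.loads(conn.getresponse().read())), 'containers')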
Dec 5 05:11:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e0845c1d0d43d14d48b0b1b422abb7f329a79180fb8e85aea3f0833dd2aafe5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:11:50 localhost podman[318722]: 2025-12-05 10:11:50.627692102 +0000 UTC m=+0.179692363 container init d354e16148d2594a33f1d1d40d676ca27c537a434d8cd4c4173b07bf3d6ff511 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 5 05:11:50 localhost podman[318722]: 2025-12-05 10:11:50.638340168 +0000 UTC m=+0.190340429 container start d354e16148d2594a33f1d1d40d676ca27c537a434d8cd4c4173b07bf3d6ff511 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 5 05:11:50 localhost dnsmasq[318742]: started, version 2.85 cachesize 150
Dec 5 05:11:50 localhost dnsmasq[318742]: DNS service limited to local subnets
Dec 5 05:11:50 localhost dnsmasq[318742]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:11:50 localhost dnsmasq[318742]: warning: no upstream servers configured
Dec 5 05:11:50 localhost dnsmasq-dhcp[318742]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 5 05:11:50 localhost dnsmasq[318742]: read /var/lib/neutron/dhcp/b8ed7b03-9c44-4dc6-8b7f-c34d51698071/addn_hosts - 0 addresses
Dec 5 05:11:50 localhost dnsmasq-dhcp[318742]: read /var/lib/neutron/dhcp/b8ed7b03-9c44-4dc6-8b7f-c34d51698071/host
Dec 5 05:11:50 localhost dnsmasq-dhcp[318742]: read /var/lib/neutron/dhcp/b8ed7b03-9c44-4dc6-8b7f-c34d51698071/opts
Dec 5 05:11:50 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:50.701 261902 INFO neutron.agent.dhcp.agent [None req-65b6aafe-3642-47f6-888d-4184a1d4a108 - - - - - -] Finished network b8ed7b03-9c44-4dc6-8b7f-c34d51698071 dhcp configuration
Dec 5 05:11:50 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:50.702 261902 INFO neutron.agent.dhcp.agent [None req-7603fbd6-8a67-4ff1-ab7f-ee44739b02b7 - - - - - -] Synchronizing state complete
Dec 5 05:11:50 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:50.708 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:11:48Z, description=, device_id=9d5b5baf-fbfa-4251-a34d-ea34d3d785c1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f6bee248-a4aa-47ef-84bd-c4925767b72a, ip_allocation=immediate, mac_address=fa:16:3e:b7:eb:d4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:11:41Z, description=, dns_domain=, id=13031b06-d963-40b3-b655-dc0aa50c254c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1559753480, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=19683, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2288, status=ACTIVE, subnets=['de0f3580-146f-4f78-8790-19e550ab4de5'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:11:43Z, vlan_transparent=None, network_id=13031b06-d963-40b3-b655-dc0aa50c254c, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2313, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:11:48Z on network 13031b06-d963-40b3-b655-dc0aa50c254c
Dec 5 05:11:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:11:50 localhost dnsmasq[318742]: read /var/lib/neutron/dhcp/b8ed7b03-9c44-4dc6-8b7f-c34d51698071/addn_hosts - 0 addresses
Dec 5 05:11:50 localhost dnsmasq-dhcp[318742]: read /var/lib/neutron/dhcp/b8ed7b03-9c44-4dc6-8b7f-c34d51698071/host
Dec 5 05:11:50 localhost dnsmasq-dhcp[318742]: read /var/lib/neutron/dhcp/b8ed7b03-9c44-4dc6-8b7f-c34d51698071/opts
Dec 5 05:11:50 localhost podman[318775]: 2025-12-05 10:11:50.881234206 +0000 UTC m=+0.057517433 container kill d354e16148d2594a33f1d1d40d676ca27c537a434d8cd4c4173b07bf3d6ff511 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 5 05:11:50 localhost dnsmasq[318551]: read /var/lib/neutron/dhcp/13031b06-d963-40b3-b655-dc0aa50c254c/addn_hosts - 1 addresses
Dec 5 05:11:50 localhost dnsmasq-dhcp[318551]: read /var/lib/neutron/dhcp/13031b06-d963-40b3-b655-dc0aa50c254c/host
Dec 5 05:11:50 localhost dnsmasq-dhcp[318551]: read /var/lib/neutron/dhcp/13031b06-d963-40b3-b655-dc0aa50c254c/opts
Dec 5 05:11:50 localhost podman[318788]: 2025-12-05 10:11:50.937663473 +0000 UTC m=+0.059278056 container kill 5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13031b06-d963-40b3-b655-dc0aa50c254c, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
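[Editor's note] The pattern above, a "container kill" on the dnsmasq container immediately followed by dnsmasq re-reading addn_hosts/host/opts, is the DHCP agent's reload path: dnsmasq re-reads its DHCP hosts and options files on SIGHUP, so the agent rewrites the files under /var/lib/neutron/dhcp/<network>/ and then signals the daemon (here delivered via the podman kill wrapper). A minimal sketch of that rewrite-then-HUP cycle; the pid-file location is an assumption for illustration, not verified agent code:

    # Hedged sketch of the reload_allocations cycle visible above: rewrite
    # the per-network dnsmasq files, then SIGHUP dnsmasq so it re-reads them.
    import os
    import signal

    NET = 'b8ed7b03-9c44-4dc6-8b7f-c34d51698071'   # network id from the log
    BASE = f'/var/lib/neutron/dhcp/{NET}'

    def reload_allocations(host_lines):
        with open(f'{BASE}/host', 'w') as f:
            f.write('\n'.join(host_lines) + '\n')
        with open(f'{BASE}/pid') as f:              # assumed pid-file path
            os.kill(int(f.read().strip()), signal.SIGHUP)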
Dec 5 05:11:50 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:50.971 261902 INFO neutron.agent.dhcp.agent [None req-850d5d89-4bdf-4941-8395-d8be889b1e02 - - - - - -] DHCP configuration for ports {'c8fbfaad-5508-4e8d-a954-694c6ab48d92'} is completed
Dec 5 05:11:51 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:51.057 158820 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 92326300-c76e-441f-9830-9128138fbddb with type ""
Dec 5 05:11:51 localhost ovn_controller[153000]: 2025-12-05T10:11:51Z|00295|binding|INFO|Removing iface tap0553a7c1-3d ovn-installed in OVS
Dec 5 05:11:51 localhost ovn_controller[153000]: 2025-12-05T10:11:51Z|00296|binding|INFO|Removing lport 0553a7c1-3d50-4159-9f92-c695603d8f4b ovn-installed in OVS
Dec 5 05:11:51 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:51.060 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-b8ed7b03-9c44-4dc6-8b7f-c34d51698071', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8ed7b03-9c44-4dc6-8b7f-c34d51698071', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69f403c7-fdab-47f1-b187-c07e850c0fa0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0553a7c1-3d50-4159-9f92-c695603d8f4b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:11:51 localhost nova_compute[280228]: 2025-12-05 10:11:51.060 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:51 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:51.062 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 0553a7c1-3d50-4159-9f92-c695603d8f4b in datapath b8ed7b03-9c44-4dc6-8b7f-c34d51698071 unbound from our chassis
Dec 5 05:11:51 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:51.064 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b8ed7b03-9c44-4dc6-8b7f-c34d51698071 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:11:51 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:11:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, vol_name:cephfs) < ""
Dec 5 05:11:51 localhost ovn_metadata_agent[158815]: 2025-12-05 10:11:51.066 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[15b8d877-eb8b-4447-aca6-8d3f1e5252d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:11:51 localhost nova_compute[280228]: 2025-12-05 10:11:51.067 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:51 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:51.117 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:11:48Z, description=, device_id=9d5b5baf-fbfa-4251-a34d-ea34d3d785c1, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f6bee248-a4aa-47ef-84bd-c4925767b72a, ip_allocation=immediate, mac_address=fa:16:3e:b7:eb:d4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:11:41Z, description=, dns_domain=, id=13031b06-d963-40b3-b655-dc0aa50c254c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1559753480, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=19683, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2288, status=ACTIVE, subnets=['de0f3580-146f-4f78-8790-19e550ab4de5'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:11:43Z, vlan_transparent=None, network_id=13031b06-d963-40b3-b655-dc0aa50c254c, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2313, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:11:48Z on network 13031b06-d963-40b3-b655-dc0aa50c254c
Dec 5 05:11:51 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0/.meta.tmp'
Dec 5 05:11:51 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0/.meta.tmp' to config b'/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0/.meta'
Dec 5 05:11:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, vol_name:cephfs) < ""
Dec 5 05:11:51 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "format": "json"}]: dispatch
Dec 5 05:11:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, vol_name:cephfs) < ""
Dec 5 05:11:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, vol_name:cephfs) < ""
Dec 5 05:11:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:11:51 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:11:51 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:51.268 261902 INFO neutron.agent.dhcp.agent [None req-b8294bb3-a4fc-451a-814d-9b3b63982a4a - - - - - -] DHCP configuration for ports {'f6bee248-a4aa-47ef-84bd-c4925767b72a'} is completed
Dec 5 05:11:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 193 MiB data, 886 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 20 KiB/s wr, 76 op/s
Dec 5 05:11:51 localhost dnsmasq[318551]: read /var/lib/neutron/dhcp/13031b06-d963-40b3-b655-dc0aa50c254c/addn_hosts - 1 addresses
Dec 5 05:11:51 localhost dnsmasq-dhcp[318551]: read /var/lib/neutron/dhcp/13031b06-d963-40b3-b655-dc0aa50c254c/host
Dec 5 05:11:51 localhost dnsmasq-dhcp[318551]: read /var/lib/neutron/dhcp/13031b06-d963-40b3-b655-dc0aa50c254c/opts
Dec 5 05:11:51 localhost podman[318851]: 2025-12-05 10:11:51.323236919 +0000 UTC m=+0.062069092 container kill 5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13031b06-d963-40b3-b655-dc0aa50c254c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 05:11:51 localhost dnsmasq[318742]: exiting on receipt of SIGTERM
Dec 5 05:11:51 localhost podman[318866]: 2025-12-05 10:11:51.390805318 +0000 UTC m=+0.062669581 container kill d354e16148d2594a33f1d1d40d676ca27c537a434d8cd4c4173b07bf3d6ff511 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:11:51 localhost systemd[1]: libpod-d354e16148d2594a33f1d1d40d676ca27c537a434d8cd4c4173b07bf3d6ff511.scope: Deactivated successfully.
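[Editor's note] The ceph-mgr audit lines above show client.openstack dispatching "fs subvolume create" and "fs subvolume getpath" as JSON mgr commands (the manila share-creation path). A hedged sketch of issuing the same command through the python3-rados binding; the conffile path and keyring availability are assumptions for illustration:

    # Sketch: dispatch the logged "fs subvolume create" mgr command via
    # python3-rados, mirroring the JSON printed in the audit line above.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        cmd = {"prefix": "fs subvolume create", "vol_name": "cephfs",
               "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0",
               "size": 1073741824, "namespace_isolated": True,
               "mode": "0755", "format": "json"}
        ret, out, err = cluster.mgr_command(json.dumps(cmd), b'')
        print(ret, err)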
Dec 5 05:11:51 localhost podman[318885]: 2025-12-05 10:11:51.460748069 +0000 UTC m=+0.049049343 container died d354e16148d2594a33f1d1d40d676ca27c537a434d8cd4c4173b07bf3d6ff511 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 5 05:11:51 localhost podman[318885]: 2025-12-05 10:11:51.505620933 +0000 UTC m=+0.093922207 container remove d354e16148d2594a33f1d1d40d676ca27c537a434d8cd4c4173b07bf3d6ff511 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b8ed7b03-9c44-4dc6-8b7f-c34d51698071, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:11:51 localhost systemd[1]: var-lib-containers-storage-overlay-8e0845c1d0d43d14d48b0b1b422abb7f329a79180fb8e85aea3f0833dd2aafe5-merged.mount: Deactivated successfully. Dec 5 05:11:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d354e16148d2594a33f1d1d40d676ca27c537a434d8cd4c4173b07bf3d6ff511-userdata-shm.mount: Deactivated successfully. Dec 5 05:11:51 localhost systemd[1]: libpod-conmon-d354e16148d2594a33f1d1d40d676ca27c537a434d8cd4c4173b07bf3d6ff511.scope: Deactivated successfully. Dec 5 05:11:51 localhost kernel: device tap0553a7c1-3d left promiscuous mode Dec 5 05:11:51 localhost nova_compute[280228]: 2025-12-05 10:11:51.572 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:11:51 localhost nova_compute[280228]: 2025-12-05 10:11:51.583 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:11:51 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:51.865 261902 INFO neutron.agent.dhcp.agent [None req-0546aecb-055e-4bb7-998e-d71caf5ebad0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:11:51 localhost systemd[1]: run-netns-qdhcp\x2db8ed7b03\x2d9c44\x2d4dc6\x2d8b7f\x2dc34d51698071.mount: Deactivated successfully. 
Dec 5 05:11:51 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:11:51.886 261902 INFO neutron.agent.dhcp.agent [None req-84c4ebd7-d369-41c5-83be-9442f3c75026 - - - - - -] DHCP configuration for ports {'f6bee248-a4aa-47ef-84bd-c4925767b72a'} is completed#033[00m Dec 5 05:11:51 localhost ovn_controller[153000]: 2025-12-05T10:11:51Z|00297|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:11:51 localhost nova_compute[280228]: 2025-12-05 10:11:51.934 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:11:52 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:52.061 2 INFO neutron.agent.securitygroups_rpc [None req-c97b0d8d-542f-470c-a7c0-bdee005052ed 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:11:52 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:52.481 2 INFO neutron.agent.securitygroups_rpc [None req-ecdd3e4a-f10e-40b7-adec-da44faf1234f 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m Dec 5 05:11:52 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:52.821 2 INFO neutron.agent.securitygroups_rpc [None req-1ee97cf8-7398-44f0-bcbd-01950c106314 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:11:53 localhost nova_compute[280228]: 2025-12-05 10:11:53.093 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:11:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:11:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:11:53 localhost systemd[1]: tmp-crun.Tz14f6.mount: Deactivated successfully. 
Dec 5 05:11:53 localhost podman[318916]: 2025-12-05 10:11:53.195136243 +0000 UTC m=+0.078364621 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:11:53 localhost podman[318916]: 2025-12-05 10:11:53.209776081 +0000 UTC m=+0.093004499 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:11:53 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
Dec 5 05:11:53 localhost podman[318915]: 2025-12-05 10:11:53.293170844 +0000 UTC m=+0.176515555 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:11:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 193 MiB data, 886 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 20 KiB/s wr, 72 op/s
Dec 5 05:11:53 localhost podman[318915]: 2025-12-05 10:11:53.326548196 +0000 UTC m=+0.209892917 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible)
Dec 5 05:11:53 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:11:54 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:11:54 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f85fdc57-8808-499d-89b5-dab3ea53a537, vol_name:cephfs) < ""
Dec 5 05:11:54 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537/.meta.tmp'
Dec 5 05:11:54 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537/.meta.tmp' to config b'/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537/.meta'
Dec 5 05:11:54 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f85fdc57-8808-499d-89b5-dab3ea53a537, vol_name:cephfs) < ""
Dec 5 05:11:54 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "format": "json"}]: dispatch
Dec 5 05:11:54 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f85fdc57-8808-499d-89b5-dab3ea53a537, vol_name:cephfs) < ""
Dec 5 05:11:54 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f85fdc57-8808-499d-89b5-dab3ea53a537, vol_name:cephfs) < ""
Dec 5 05:11:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:11:54 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:11:54 localhost nova_compute[280228]: 2025-12-05 10:11:54.575 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 177 active+clean; 193 MiB data, 886 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 19 KiB/s wr, 69 op/s
Dec 5 05:11:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:11:57 localhost openstack_network_exporter[241668]: ERROR 10:11:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:11:57 localhost openstack_network_exporter[241668]: ERROR 10:11:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:11:57 localhost openstack_network_exporter[241668]: ERROR 10:11:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:11:57 localhost openstack_network_exporter[241668]: ERROR 10:11:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:11:57 localhost openstack_network_exporter[241668]:
Dec 5 05:11:57 localhost openstack_network_exporter[241668]: ERROR 10:11:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:11:57 localhost openstack_network_exporter[241668]:
Dec 5 05:11:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 193 MiB data, 886 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 23 KiB/s wr, 61 op/s
Dec 5 05:11:57 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "auth_id": "Joe", "tenant_id": "f4c34f38ddb048808ef72391bdda40b5", "access_level": "rw", "format": "json"}]: dispatch
Dec 5 05:11:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:f85fdc57-8808-499d-89b5-dab3ea53a537, tenant_id:f4c34f38ddb048808ef72391bdda40b5, vol_name:cephfs) < ""
Dec 5 05:11:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec 5 05:11:57 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 5 05:11:57 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID Joe with tenant f4c34f38ddb048808ef72391bdda40b5
Dec 5 05:11:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537/ca912719-4ded-4fbc-bc88-8b50dbf8a797", "osd", "allow rw pool=manila_data namespace=fsvolumens_f85fdc57-8808-499d-89b5-dab3ea53a537", "mon", "allow r"], "format": "json"} v 0)
Dec 5 05:11:57 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537/ca912719-4ded-4fbc-bc88-8b50dbf8a797", "osd", "allow rw pool=manila_data namespace=fsvolumens_f85fdc57-8808-499d-89b5-dab3ea53a537", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:11:57 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537/ca912719-4ded-4fbc-bc88-8b50dbf8a797", "osd", "allow rw pool=manila_data namespace=fsvolumens_f85fdc57-8808-499d-89b5-dab3ea53a537", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:11:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:f85fdc57-8808-499d-89b5-dab3ea53a537, tenant_id:f4c34f38ddb048808ef72391bdda40b5, vol_name:cephfs) < ""
Dec 5 05:11:58 localhost nova_compute[280228]: 2025-12-05 10:11:58.126 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:58 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 5 05:11:58 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537/ca912719-4ded-4fbc-bc88-8b50dbf8a797", "osd", "allow rw pool=manila_data namespace=fsvolumens_f85fdc57-8808-499d-89b5-dab3ea53a537", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:11:58 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537/ca912719-4ded-4fbc-bc88-8b50dbf8a797", "osd", "allow rw pool=manila_data namespace=fsvolumens_f85fdc57-8808-499d-89b5-dab3ea53a537", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:11:58 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:58.742 2 INFO neutron.agent.securitygroups_rpc [None req-0878b731-cddc-4396-af6e-f1551053cbd3 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 5 05:11:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 193 MiB data, 886 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 14 KiB/s wr, 25 op/s
Dec 5 05:11:59 localhost nova_compute[280228]: 2025-12-05 10:11:59.578 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:11:59 localhost neutron_sriov_agent[254996]: 2025-12-05 10:11:59.947 2 INFO neutron.agent.securitygroups_rpc [None req-c90784b4-3fe0-420c-9674-eb8afca62c54 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 5 05:12:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:12:00 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2570026049' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:12:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:12:00 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2570026049' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:12:00 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:00.670 2 INFO neutron.agent.securitygroups_rpc [None req-73dd46a2-4352-469e-9b13-904ee05be5c1 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 5 05:12:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:12:01 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:12:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < ""
Dec 5 05:12:01 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/.meta.tmp'
Dec 5 05:12:01 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/.meta.tmp' to config b'/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/.meta'
Dec 5 05:12:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < ""
Dec 5 05:12:01 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "format": "json"}]: dispatch
Dec 5 05:12:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < ""
Dec 5 05:12:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < ""
Dec 5 05:12:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:12:01 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:12:01 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:01.305 2 INFO neutron.agent.securitygroups_rpc [None req-a333ab83-84ff-4adc-85ac-6248a883a1ec 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 5 05:12:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 193 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 29 KiB/s wr, 31 op/s
Dec 5 05:12:01 localhost podman[318981]: 2025-12-05 10:12:01.321626629 +0000 UTC m=+0.040558203 container kill 5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13031b06-d963-40b3-b655-dc0aa50c254c, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:12:01 localhost dnsmasq[318551]: read /var/lib/neutron/dhcp/13031b06-d963-40b3-b655-dc0aa50c254c/addn_hosts - 0 addresses
Dec 5 05:12:01 localhost dnsmasq-dhcp[318551]: read /var/lib/neutron/dhcp/13031b06-d963-40b3-b655-dc0aa50c254c/host
Dec 5 05:12:01 localhost dnsmasq-dhcp[318551]: read /var/lib/neutron/dhcp/13031b06-d963-40b3-b655-dc0aa50c254c/opts
Dec 5 05:12:01 localhost nova_compute[280228]: 2025-12-05 10:12:01.441 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:01 localhost ovn_controller[153000]: 2025-12-05T10:12:01Z|00298|binding|INFO|Releasing lport ac1614bb-7eed-4a73-8346-a79b8a1800be from this chassis (sb_readonly=0)
Dec 5 05:12:01 localhost ovn_controller[153000]: 2025-12-05T10:12:01Z|00299|binding|INFO|Setting lport ac1614bb-7eed-4a73-8346-a79b8a1800be down in Southbound
Dec 5 05:12:01 localhost kernel: device tapac1614bb-7e left promiscuous mode
Dec 5 05:12:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:01.451 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-13031b06-d963-40b3-b655-dc0aa50c254c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13031b06-d963-40b3-b655-dc0aa50c254c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f447a3a6-f3ed-4c82-81a5-ccd3154d7f54, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ac1614bb-7eed-4a73-8346-a79b8a1800be) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:12:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:01.452 158820 INFO neutron.agent.ovn.metadata.agent [-] Port ac1614bb-7eed-4a73-8346-a79b8a1800be in datapath 13031b06-d963-40b3-b655-dc0aa50c254c unbound from our chassis
Dec 5 05:12:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:01.453 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 13031b06-d963-40b3-b655-dc0aa50c254c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:12:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:01.453 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[d1bb7487-8558-4e1a-b3b6-ecbf824a095b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:12:01 localhost nova_compute[280228]: 2025-12-05 10:12:01.469 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:01 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:01.584 2 INFO neutron.agent.securitygroups_rpc [None req-aba445b9-58e1-4b57-95a7-f56c26bc6c4b 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']
Dec 5 05:12:02 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:02.160 2 INFO neutron.agent.securitygroups_rpc [None req-1b9b59bb-fd0c-4e00-8bab-7b331e80efc1 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 5 05:12:03 localhost nova_compute[280228]: 2025-12-05 10:12:03.157 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 193 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 21 KiB/s wr, 17 op/s
Dec 5 05:12:03 localhost sshd[319003]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 05:12:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:03.918 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:12:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:03.918 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:12:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:03.919 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:12:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:12:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:12:04 localhost systemd[1]: tmp-crun.49lOrl.mount: Deactivated successfully.
Dec 5 05:12:04 localhost podman[319005]: 2025-12-05 10:12:04.209795209 +0000 UTC m=+0.092760172 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git)
Dec 5 05:12:04 localhost podman[319005]: 2025-12-05 10:12:04.22878328 +0000 UTC m=+0.111748233 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 5 05:12:04 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:12:04 localhost podman[319004]: 2025-12-05 10:12:04.305613943 +0000 UTC m=+0.190577416 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 5 05:12:04 localhost podman[319004]: 2025-12-05 10:12:04.321634663 +0000 UTC m=+0.206598136 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 5 05:12:04 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:12:04 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "auth_id": "Joe", "tenant_id": "a1984fed702d4461879e97dd7c6fc401", "access_level": "rw", "format": "json"}]: dispatch
Dec 5 05:12:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, tenant_id:a1984fed702d4461879e97dd7c6fc401, vol_name:cephfs) < ""
Dec 5 05:12:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec 5 05:12:04 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 5 05:12:04 localhost ceph-mgr[286454]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Dec 5 05:12:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, tenant_id:a1984fed702d4461879e97dd7c6fc401, vol_name:cephfs) < ""
Dec 5 05:12:04 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:12:04.453+0000 7f996f03a640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Dec 5 05:12:04 localhost ceph-mgr[286454]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Dec 5 05:12:04 localhost nova_compute[280228]: 2025-12-05 10:12:04.580 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:04 localhost podman[319058]: 2025-12-05 10:12:04.814159003 +0000 UTC m=+0.043666807 container kill 5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13031b06-d963-40b3-b655-dc0aa50c254c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 5 05:12:04 localhost dnsmasq[318551]: exiting on receipt of SIGTERM
Dec 5 05:12:04 localhost systemd[1]: libpod-5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b.scope: Deactivated successfully.
Dec 5 05:12:04 localhost podman[319070]: 2025-12-05 10:12:04.874549092 +0000 UTC m=+0.049374773 container died 5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13031b06-d963-40b3-b655-dc0aa50c254c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 5 05:12:04 localhost podman[319070]: 2025-12-05 10:12:04.956502531 +0000 UTC m=+0.131328192 container cleanup 5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13031b06-d963-40b3-b655-dc0aa50c254c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:12:04 localhost systemd[1]: libpod-conmon-5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b.scope: Deactivated successfully.
Dec 5 05:12:04 localhost podman[319079]: 2025-12-05 10:12:04.986686126 +0000 UTC m=+0.149141447 container remove 5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13031b06-d963-40b3-b655-dc0aa50c254c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:12:05 localhost systemd[1]: var-lib-containers-storage-overlay-c316627e9d1552fbaea7a8a38e4fad71720bae0458a80dbb69c932ffe7102d44-merged.mount: Deactivated successfully.
Dec 5 05:12:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5bb49b3d9d26b429bdf4311eecddf5b47a7bd50f217eee77429686ff29c9771b-userdata-shm.mount: Deactivated successfully.
Dec 5 05:12:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 193 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 21 KiB/s wr, 17 op/s
Dec 5 05:12:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 05:12:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 9390 writes, 36K keys, 9390 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 9390 writes, 2579 syncs, 3.64 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4519 writes, 15K keys, 4519 commit groups, 1.0 writes per commit group, ingest: 14.02 MB, 0.02 MB/s#012Interval WAL: 4519 writes, 1950 syncs, 2.32 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 05:12:05 localhost systemd[1]: run-netns-qdhcp\x2d13031b06\x2dd963\x2d40b3\x2db655\x2ddc0aa50c254c.mount: Deactivated successfully.
Dec 5 05:12:05 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:05.359 261902 INFO neutron.agent.dhcp.agent [None req-5923268b-6eb0-4240-b29e-e00f49964754 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:12:05 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 5 05:12:05 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "de805df9-2757-4e33-8bfe-65fc6ef40510", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:12:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:de805df9-2757-4e33-8bfe-65fc6ef40510, vol_name:cephfs) < ""
Dec 5 05:12:05 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:05.717 261902 INFO neutron.agent.linux.ip_lib [None req-c0878646-432a-4f64-80bd-fe66815408b0 - - - - - -] Device tap79e29b06-37 cannot be used as it has no MAC address
Dec 5 05:12:05 localhost nova_compute[280228]: 2025-12-05 10:12:05.737 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:05 localhost kernel: device tap79e29b06-37 entered promiscuous mode
Dec 5 05:12:05 localhost NetworkManager[5960]: [1764929525.7466] manager: (tap79e29b06-37): new Generic device (/org/freedesktop/NetworkManager/Devices/51)
Dec 5 05:12:05 localhost nova_compute[280228]: 2025-12-05 10:12:05.746 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:05 localhost ovn_controller[153000]: 2025-12-05T10:12:05Z|00300|binding|INFO|Claiming lport 79e29b06-3797-4add-9c33-bd1ad7a3225e for this chassis.
Dec 5 05:12:05 localhost ovn_controller[153000]: 2025-12-05T10:12:05Z|00301|binding|INFO|79e29b06-3797-4add-9c33-bd1ad7a3225e: Claiming unknown
Dec 5 05:12:05 localhost systemd-udevd[319175]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:12:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:05.758 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-3f56da3e-d867-4469-bec2-d31d8ce2f0ea', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f56da3e-d867-4469-bec2-d31d8ce2f0ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dbf7ec1-988a-43a6-99cf-75e280ec4429, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=79e29b06-3797-4add-9c33-bd1ad7a3225e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:12:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:05.759 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 79e29b06-3797-4add-9c33-bd1ad7a3225e in datapath 3f56da3e-d867-4469-bec2-d31d8ce2f0ea bound to our chassis
Dec 5 05:12:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:05.760 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3f56da3e-d867-4469-bec2-d31d8ce2f0ea or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:12:05 localhost nova_compute[280228]: 2025-12-05 10:12:05.761 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:05.760 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[fad947e8-c001-4d00-b7c9-177f5d34159c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:12:05 localhost journal[228791]: ethtool ioctl error on tap79e29b06-37: No such device
Dec 5 05:12:05 localhost ovn_controller[153000]: 2025-12-05T10:12:05Z|00302|binding|INFO|Setting lport 79e29b06-3797-4add-9c33-bd1ad7a3225e ovn-installed in OVS
Dec 5 05:12:05 localhost ovn_controller[153000]: 2025-12-05T10:12:05Z|00303|binding|INFO|Setting lport 79e29b06-3797-4add-9c33-bd1ad7a3225e up in Southbound
Dec 5 05:12:05 localhost nova_compute[280228]: 2025-12-05 10:12:05.780 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:05 localhost nova_compute[280228]: 2025-12-05 10:12:05.782 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:05 localhost journal[228791]: ethtool ioctl error on tap79e29b06-37: No such device
Dec 5 05:12:05 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/de805df9-2757-4e33-8bfe-65fc6ef40510/.meta.tmp'
Dec 5 05:12:05 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/de805df9-2757-4e33-8bfe-65fc6ef40510/.meta.tmp' to config b'/volumes/_nogroup/de805df9-2757-4e33-8bfe-65fc6ef40510/.meta'
Dec 5 05:12:05 localhost journal[228791]: ethtool ioctl error on tap79e29b06-37: No such device
Dec 5 05:12:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:de805df9-2757-4e33-8bfe-65fc6ef40510, vol_name:cephfs) < ""
Dec 5 05:12:05 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "de805df9-2757-4e33-8bfe-65fc6ef40510", "format": "json"}]: dispatch
Dec 5 05:12:05 localhost journal[228791]: ethtool ioctl error on tap79e29b06-37: No such device
Dec 5 05:12:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:de805df9-2757-4e33-8bfe-65fc6ef40510, vol_name:cephfs) < ""
Dec 5 05:12:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:de805df9-2757-4e33-8bfe-65fc6ef40510, vol_name:cephfs) < ""
Dec 5 05:12:05 localhost journal[228791]: ethtool ioctl error on tap79e29b06-37: No such device
Dec 5 05:12:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:12:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:12:05 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:12:05 localhost journal[228791]: ethtool ioctl error on tap79e29b06-37: No such device
Dec 5 05:12:05 localhost journal[228791]: ethtool ioctl error on tap79e29b06-37: No such device
Dec 5 05:12:05 localhost nova_compute[280228]: 2025-12-05 10:12:05.820 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:05 localhost journal[228791]: ethtool ioctl error on tap79e29b06-37: No such device
Dec 5 05:12:05 localhost nova_compute[280228]: 2025-12-05 10:12:05.860 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:06 localhost podman[319249]: 2025-12-05 10:12:06.136339546 +0000 UTC m=+0.091835533 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, distribution-scope=public, name=rhceph, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_CLEAN=True, architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 5 05:12:06 localhost podman[319249]: 2025-12-05 10:12:06.252898085 +0000 UTC m=+0.208394082 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, RELEASE=main, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., ceph=True, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 5 05:12:06 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:06.323 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:12:06 localhost podman[319377]:
Dec 5 05:12:06 localhost podman[319377]: 2025-12-05 10:12:06.784874643 +0000 UTC m=+0.131634782 container create 7b665d9cd2ec4a34eecd62e515fa478f17dc4af3635edd83fa21c12d8ea8e4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f56da3e-d867-4469-bec2-d31d8ce2f0ea, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 5 05:12:06 localhost podman[319377]: 2025-12-05 10:12:06.738477332 +0000 UTC m=+0.085237481 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:12:06 localhost systemd[1]: Started libpod-conmon-7b665d9cd2ec4a34eecd62e515fa478f17dc4af3635edd83fa21c12d8ea8e4d2.scope.
Dec 5 05:12:06 localhost systemd[1]: Started libcrun container.
Dec 5 05:12:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de65c5bf3e634760b3faa2e1da013a895aee1d64580a79fc0306d9d8b460f56e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:12:06 localhost podman[319377]: 2025-12-05 10:12:06.882610775 +0000 UTC m=+0.229370934 container init 7b665d9cd2ec4a34eecd62e515fa478f17dc4af3635edd83fa21c12d8ea8e4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f56da3e-d867-4469-bec2-d31d8ce2f0ea, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 5 05:12:06 localhost podman[319377]: 2025-12-05 10:12:06.893187999 +0000 UTC m=+0.239948168 container start 7b665d9cd2ec4a34eecd62e515fa478f17dc4af3635edd83fa21c12d8ea8e4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f56da3e-d867-4469-bec2-d31d8ce2f0ea, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:12:06 localhost dnsmasq[319420]: started, version 2.85 cachesize 150
Dec 5 05:12:06 localhost dnsmasq[319420]: DNS service limited to local subnets
Dec 5 05:12:06 localhost dnsmasq[319420]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:12:06 localhost dnsmasq[319420]: warning: no upstream servers configured
Dec 5 05:12:06 localhost dnsmasq-dhcp[319420]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 5 05:12:06 localhost dnsmasq[319420]: read /var/lib/neutron/dhcp/3f56da3e-d867-4469-bec2-d31d8ce2f0ea/addn_hosts - 0 addresses
Dec 5 05:12:06 localhost dnsmasq-dhcp[319420]: read /var/lib/neutron/dhcp/3f56da3e-d867-4469-bec2-d31d8ce2f0ea/host
Dec 5 05:12:06 localhost dnsmasq-dhcp[319420]: read /var/lib/neutron/dhcp/3f56da3e-d867-4469-bec2-d31d8ce2f0ea/opts
Dec 5 05:12:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0)
Dec 5 05:12:06 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:12:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0)
Dec 5 05:12:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:12:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:12:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:12:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:12:07 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:07.124 261902 INFO neutron.agent.dhcp.agent [None req-894efdd3-ba71-433b-83c2-2cd683e4096a - - - - - -] DHCP configuration for ports {'836b80d2-f36e-4667-8dc8-9d195f0c26c8'} is completed
Dec 5 05:12:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:12:07 localhost ovn_controller[153000]: 2025-12-05T10:12:07Z|00304|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:12:07 localhost nova_compute[280228]: 2025-12-05 10:12:07.188 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:12:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:12:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:12:07 localhost dnsmasq[319420]: read /var/lib/neutron/dhcp/3f56da3e-d867-4469-bec2-d31d8ce2f0ea/addn_hosts - 0 addresses
Dec 5 05:12:07 localhost dnsmasq-dhcp[319420]: read /var/lib/neutron/dhcp/3f56da3e-d867-4469-bec2-d31d8ce2f0ea/host
Dec 5 05:12:07 localhost podman[319474]: 2025-12-05 10:12:07.258001779 +0000 UTC m=+0.046296329 container kill 7b665d9cd2ec4a34eecd62e515fa478f17dc4af3635edd83fa21c12d8ea8e4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f56da3e-d867-4469-bec2-d31d8ce2f0ea, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 5 05:12:07 localhost dnsmasq-dhcp[319420]: read /var/lib/neutron/dhcp/3f56da3e-d867-4469-bec2-d31d8ce2f0ea/opts
Dec 5 05:12:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:12:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 193 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 32 KiB/s wr, 19 op/s
Dec 5 05:12:07 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:12:07 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:12:07 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:12:07 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:12:07 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:12:07 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:12:07 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:07.558 261902 INFO neutron.agent.dhcp.agent [None req-05edd03b-8140-4d10-a4e5-055aeb2fdea1 - - - - - -] DHCP configuration for ports {'836b80d2-f36e-4667-8dc8-9d195f0c26c8', '79e29b06-3797-4add-9c33-bd1ad7a3225e'} is completed
Dec 5 05:12:07 localhost ovn_controller[153000]: 2025-12-05T10:12:07Z|00305|binding|INFO|Removing iface tap79e29b06-37 ovn-installed in OVS
Dec 5 05:12:07 localhost ovn_controller[153000]: 2025-12-05T10:12:07Z|00306|binding|INFO|Removing lport 79e29b06-3797-4add-9c33-bd1ad7a3225e ovn-installed in OVS
Dec 5 05:12:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:07.672 158820 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 6e332fd7-d7a0-4916-9036-92ce547bbe84 with type ""
Dec 5 05:12:07 localhost nova_compute[280228]: 2025-12-05 10:12:07.673 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:07.675 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-3f56da3e-d867-4469-bec2-d31d8ce2f0ea', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f56da3e-d867-4469-bec2-d31d8ce2f0ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dbf7ec1-988a-43a6-99cf-75e280ec4429, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=79e29b06-3797-4add-9c33-bd1ad7a3225e) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:12:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:07.678 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 79e29b06-3797-4add-9c33-bd1ad7a3225e in datapath 3f56da3e-d867-4469-bec2-d31d8ce2f0ea unbound from our chassis
Dec 5 05:12:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:07.680 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f56da3e-d867-4469-bec2-d31d8ce2f0ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:12:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:07.681 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[18ee854e-daf5-42b6-88c6-c9f58e9c1adf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:12:07 localhost nova_compute[280228]: 2025-12-05 10:12:07.681 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:07 localhost
dnsmasq[319420]: exiting on receipt of SIGTERM Dec 5 05:12:07 localhost podman[319531]: 2025-12-05 10:12:07.790959467 +0000 UTC m=+0.048915759 container kill 7b665d9cd2ec4a34eecd62e515fa478f17dc4af3635edd83fa21c12d8ea8e4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f56da3e-d867-4469-bec2-d31d8ce2f0ea, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:12:07 localhost systemd[1]: libpod-7b665d9cd2ec4a34eecd62e515fa478f17dc4af3635edd83fa21c12d8ea8e4d2.scope: Deactivated successfully. Dec 5 05:12:07 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "auth_id": "tempest-cephx-id-1227377066", "tenant_id": "a1984fed702d4461879e97dd7c6fc401", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:12:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1227377066, format:json, prefix:fs subvolume authorize, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, tenant_id:a1984fed702d4461879e97dd7c6fc401, vol_name:cephfs) < "" Dec 5 05:12:07 localhost podman[319559]: 2025-12-05 10:12:07.862630232 +0000 UTC m=+0.053090457 container died 7b665d9cd2ec4a34eecd62e515fa478f17dc4af3635edd83fa21c12d8ea8e4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f56da3e-d867-4469-bec2-d31d8ce2f0ea, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 05:12:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1227377066", "format": "json"} v 0) Dec 5 05:12:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1227377066", "format": "json"} : dispatch Dec 5 05:12:07 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID tempest-cephx-id-1227377066 with tenant a1984fed702d4461879e97dd7c6fc401 Dec 5 05:12:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b665d9cd2ec4a34eecd62e515fa478f17dc4af3635edd83fa21c12d8ea8e4d2-userdata-shm.mount: Deactivated successfully. Dec 5 05:12:07 localhost systemd[1]: var-lib-containers-storage-overlay-de65c5bf3e634760b3faa2e1da013a895aee1d64580a79fc0306d9d8b460f56e-merged.mount: Deactivated successfully. 
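The dnsmasq records above capture one complete lifecycle of a per-network DHCP server: the agent launches dnsmasq in the qdhcp- namespace against generated addn_hosts, host and opts files under /var/lib/neutron/dhcp/3f56da3e-d867-4469-bec2-d31d8ce2f0ea/, signals the running process to re-read those files when ports change (the repeated "read ..." lines), and finally stops it with SIGTERM when the network is torn down. A minimal Python sketch of the kind of command line involved; the network UUID and file paths mirror the log, while the option set is an assumption, not Neutron's verbatim invocation:

    # Illustrative only: approximately the dnsmasq invocation behind the
    # log lines above. Paths and UUID come from the log; options are assumed.
    net = "3f56da3e-d867-4469-bec2-d31d8ce2f0ea"
    conf_dir = f"/var/lib/neutron/dhcp/{net}"
    dnsmasq_cmd = [
        "dnsmasq",
        "--no-hosts",                  # ignore /etc/hosts
        "--no-resolv",                 # no upstream servers -> the startup warning
        "--local-service",             # "DNS service limited to local subnets"
        f"--addn-hosts={conf_dir}/addn_hosts",  # extra name records ("0 addresses")
        f"--dhcp-hostsfile={conf_dir}/host",    # per-port static lease entries
        f"--dhcp-optsfile={conf_dir}/opts",     # per-subnet/port DHCP options
        # "static leases only on 2001:db8::, lease time 1d" in the log:
        "--dhcp-range=2001:db8::,static,64,86400s",
    ]

Port updates rewrite the three files in place and SIGHUP dnsmasq so it re-reads them, which is why the same "read .../addn_hosts - 0 addresses" line appears at both 05:12:06 and 05:12:07 before the SIGTERM arrives.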
Dec 5 05:12:07 localhost podman[319559]: 2025-12-05 10:12:07.899689036 +0000 UTC m=+0.090149221 container remove 7b665d9cd2ec4a34eecd62e515fa478f17dc4af3635edd83fa21c12d8ea8e4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3f56da3e-d867-4469-bec2-d31d8ce2f0ea, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:12:07 localhost systemd[1]: libpod-conmon-7b665d9cd2ec4a34eecd62e515fa478f17dc4af3635edd83fa21c12d8ea8e4d2.scope: Deactivated successfully. Dec 5 05:12:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Dec 5 05:12:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 5 05:12:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Dec 5 05:12:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 5 05:12:07 localhost nova_compute[280228]: 2025-12-05 10:12:07.949 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Dec 5 05:12:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 5 05:12:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Dec 5 05:12:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 5 05:12:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1227377066", "caps": ["mds", "allow rw path=/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/2445c2dd-0554-4d30-92cc-44fd04cf4a33", "osd", "allow rw pool=manila_data namespace=fsvolumens_7ec11635-5c27-465d-8a70-06bc2f1e99f2", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:12:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1227377066", "caps": ["mds", "allow rw path=/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/2445c2dd-0554-4d30-92cc-44fd04cf4a33", "osd", "allow rw pool=manila_data namespace=fsvolumens_7ec11635-5c27-465d-8a70-06bc2f1e99f2", 
"mon", "allow r"], "format": "json"} : dispatch Dec 5 05:12:07 localhost kernel: device tap79e29b06-37 left promiscuous mode Dec 5 05:12:07 localhost ceph-mgr[286454]: [cephadm INFO root] Adjusting osd_memory_target on np0005546419.localdomain to 836.6M Dec 5 05:12:07 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546419.localdomain to 836.6M Dec 5 05:12:07 localhost ceph-mgr[286454]: [cephadm INFO root] Adjusting osd_memory_target on np0005546420.localdomain to 836.6M Dec 5 05:12:07 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546420.localdomain to 836.6M Dec 5 05:12:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 5 05:12:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 5 05:12:07 localhost ceph-mgr[286454]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:12:07 localhost ceph-mgr[286454]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:12:07 localhost ceph-mgr[286454]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:12:07 localhost ceph-mgr[286454]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:12:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1227377066", "caps": ["mds", "allow rw path=/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/2445c2dd-0554-4d30-92cc-44fd04cf4a33", "osd", "allow rw pool=manila_data namespace=fsvolumens_7ec11635-5c27-465d-8a70-06bc2f1e99f2", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:12:07 localhost nova_compute[280228]: 2025-12-05 10:12:07.965 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:07 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:07.986 261902 INFO neutron.agent.dhcp.agent [None req-7603fbd6-8a67-4ff1-ab7f-ee44739b02b7 - - - - - -] Synchronizing state#033[00m Dec 5 05:12:07 localhost systemd[1]: run-netns-qdhcp\x2d3f56da3e\x2dd867\x2d4469\x2dbec2\x2dd31d8ce2f0ea.mount: Deactivated successfully. 
Dec 5 05:12:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1227377066, format:json, prefix:fs subvolume authorize, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, tenant_id:a1984fed702d4461879e97dd7c6fc401, vol_name:cephfs) < "" Dec 5 05:12:08 localhost nova_compute[280228]: 2025-12-05 10:12:08.160 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:08 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:08.166 261902 INFO neutron.agent.dhcp.agent [None req-2d3d5db9-dea0-423a-91bb-477d1d215997 - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 5 05:12:08 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:08.167 261902 INFO neutron.agent.dhcp.agent [-] Starting network 3f56da3e-d867-4469-bec2-d31d8ce2f0ea dhcp configuration#033[00m Dec 5 05:12:08 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:08.171 261902 INFO neutron.agent.dhcp.agent [-] Starting network 72a85b2b-d9cc-481f-9b54-10f29a49f3c0 dhcp configuration#033[00m Dec 5 05:12:08 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:08.171 261902 INFO neutron.agent.dhcp.agent [-] Finished network 72a85b2b-d9cc-481f-9b54-10f29a49f3c0 dhcp configuration#033[00m Dec 5 05:12:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Dec 5 05:12:08 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Dec 5 05:12:08 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 5 05:12:08 localhost ceph-mgr[286454]: [cephadm INFO root] Adjusting osd_memory_target on np0005546421.localdomain to 836.6M Dec 5 05:12:08 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546421.localdomain to 836.6M Dec 5 05:12:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 5 05:12:08 localhost ceph-mgr[286454]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:12:08 localhost ceph-mgr[286454]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:12:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:12:08 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 
05:12:08 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:12:08 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:12:08 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 31e25979-e895-40fa-9e9d-c2715ee1bd10 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:12:08 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 31e25979-e895-40fa-9e9d-c2715ee1bd10 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:12:08 localhost ceph-mgr[286454]: [progress INFO root] Completed event 31e25979-e895-40fa-9e9d-c2715ee1bd10 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:12:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:12:08 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e156 do_prune osdmap full prune enabled Dec 5 05:12:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e157 e157: 6 total, 6 up, 6 in Dec 5 05:12:08 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e157: 6 total, 6 up, 6 in Dec 5 05:12:08 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1227377066", "format": "json"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1227377066", "caps": ["mds", "allow rw path=/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/2445c2dd-0554-4d30-92cc-44fd04cf4a33", "osd", "allow rw pool=manila_data namespace=fsvolumens_7ec11635-5c27-465d-8a70-06bc2f1e99f2", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1227377066", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/2445c2dd-0554-4d30-92cc-44fd04cf4a33", "osd", "allow rw pool=manila_data namespace=fsvolumens_7ec11635-5c27-465d-8a70-06bc2f1e99f2", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:12:08 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:12:08 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:12:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 05:12:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.2 total, 600.0 interval#012Cumulative writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s#012Cumulative WAL: 13K writes, 4064 syncs, 3.28 writes per sync, written: 0.04 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7215 writes, 22K keys, 7215 commit groups, 1.0 writes per commit group, ingest: 19.69 MB, 0.03 MB/s#012Interval WAL: 7215 writes, 3116 syncs, 2.32 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 5 05:12:09 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:09.103 2 INFO neutron.agent.securitygroups_rpc [None req-3f77c6ad-acb0-48a9-aac0-03c96698ac64 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m Dec 5 05:12:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 177 active+clean; 193 MiB data, 887 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 32 KiB/s wr, 22 op/s Dec 5 05:12:09 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M Dec 5 05:12:09 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M Dec 5 05:12:09 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:12:09 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:12:09 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M Dec 5 05:12:09 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:12:09 localhost nova_compute[280228]: 2025-12-05 10:12:09.707 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:09 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": 
"56639cdb-9f87-40b6-91d1-c10cab7d966b", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:12:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:56639cdb-9f87-40b6-91d1-c10cab7d966b, vol_name:cephfs) < "" Dec 5 05:12:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/56639cdb-9f87-40b6-91d1-c10cab7d966b/.meta.tmp' Dec 5 05:12:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/56639cdb-9f87-40b6-91d1-c10cab7d966b/.meta.tmp' to config b'/volumes/_nogroup/56639cdb-9f87-40b6-91d1-c10cab7d966b/.meta' Dec 5 05:12:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:56639cdb-9f87-40b6-91d1-c10cab7d966b, vol_name:cephfs) < "" Dec 5 05:12:09 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "56639cdb-9f87-40b6-91d1-c10cab7d966b", "format": "json"}]: dispatch Dec 5 05:12:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:56639cdb-9f87-40b6-91d1-c10cab7d966b, vol_name:cephfs) < "" Dec 5 05:12:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:09.822 261902 INFO neutron.agent.dhcp.agent [None req-970e3be1-23d7-4b4e-b890-6f9e2f68c2de - - - - - -] Finished network 3f56da3e-d867-4469-bec2-d31d8ce2f0ea dhcp configuration#033[00m Dec 5 05:12:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:09.823 261902 INFO neutron.agent.dhcp.agent [None req-2d3d5db9-dea0-423a-91bb-477d1d215997 - - - - - -] Synchronizing state complete#033[00m Dec 5 05:12:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:09.824 261902 INFO neutron.agent.dhcp.agent [None req-2d3d5db9-dea0-423a-91bb-477d1d215997 - - - - - -] Synchronizing state#033[00m Dec 5 05:12:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:56639cdb-9f87-40b6-91d1-c10cab7d966b, vol_name:cephfs) < "" Dec 5 05:12:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:12:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:12:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:09.910 261902 INFO neutron.agent.dhcp.agent [None req-0a5136d0-4e83-4277-8901-1f247140c68e - - - - - -] DHCP configuration for ports {'836b80d2-f36e-4667-8dc8-9d195f0c26c8'} is completed#033[00m Dec 5 05:12:10 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:10.015 261902 INFO neutron.agent.dhcp.agent [None req-66b66bbd-9767-4332-a896-d82aa1781f6c - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 5 05:12:10 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:10.016 261902 INFO neutron.agent.dhcp.agent [-] Starting network 3f56da3e-d867-4469-bec2-d31d8ce2f0ea dhcp configuration#033[00m Dec 5 
05:12:10 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:10.017 261902 INFO neutron.agent.dhcp.agent [-] Finished network 3f56da3e-d867-4469-bec2-d31d8ce2f0ea dhcp configuration#033[00m Dec 5 05:12:10 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:10.017 261902 INFO neutron.agent.dhcp.agent [-] Starting network 72a85b2b-d9cc-481f-9b54-10f29a49f3c0 dhcp configuration#033[00m Dec 5 05:12:10 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:10.017 261902 INFO neutron.agent.dhcp.agent [-] Finished network 72a85b2b-d9cc-481f-9b54-10f29a49f3c0 dhcp configuration#033[00m Dec 5 05:12:10 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:10.017 261902 INFO neutron.agent.dhcp.agent [None req-66b66bbd-9767-4332-a896-d82aa1781f6c - - - - - -] Synchronizing state complete#033[00m Dec 5 05:12:10 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:10.157 261902 INFO neutron.agent.dhcp.agent [None req-dc626640-ee2c-4c0c-95da-0dba13ca64c6 - - - - - -] DHCP configuration for ports {'836b80d2-f36e-4667-8dc8-9d195f0c26c8'} is completed#033[00m Dec 5 05:12:10 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:12:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:12:10 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:12:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e157 do_prune osdmap full prune enabled Dec 5 05:12:10 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:12:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e158 e158: 6 total, 6 up, 6 in Dec 5 05:12:10 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e158: 6 total, 6 up, 6 in Dec 5 05:12:10 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:10.574 261902 INFO neutron.agent.dhcp.agent [None req-16a5e3f4-db19-48ab-b5ab-2d93229b5575 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:12:10 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:10.575 261902 INFO neutron.agent.dhcp.agent [None req-16a5e3f4-db19-48ab-b5ab-2d93229b5575 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:12:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:12:11 localhost ovn_controller[153000]: 2025-12-05T10:12:11Z|00307|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:12:11 localhost nova_compute[280228]: 2025-12-05 10:12:11.033 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "auth_id": "Joe", "format": "json"}]: dispatch Dec 5 05:12:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < "" Dec 5 05:12:11 localhost ceph-mgr[286454]: 
[volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume '7ec11635-5c27-465d-8a70-06bc2f1e99f2' Dec 5 05:12:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < "" Dec 5 05:12:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "auth_id": "Joe", "format": "json"}]: dispatch Dec 5 05:12:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < "" Dec 5 05:12:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/2445c2dd-0554-4d30-92cc-44fd04cf4a33 Dec 5 05:12:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:12:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < "" Dec 5 05:12:11 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:11.190 2 INFO neutron.agent.securitygroups_rpc [None req-07ea4a38-e98a-42b4-8586-d7ec7fae7988 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m Dec 5 05:12:11 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:11.198 2 INFO neutron.agent.securitygroups_rpc [None req-74116218-c595-488c-8172-3668fd713ccc 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:12:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 177 active+clean; 193 MiB data, 888 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 41 KiB/s wr, 45 op/s Dec 5 05:12:11 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:11.642 2 INFO neutron.agent.securitygroups_rpc [None req-8fce75c8-7ad5-4ffa-9ef1-d6fce74a08c3 859f234eba4c442983333d06bc12b112 0d15dccf4c864d558d055b0c7cd1cccc - - default default] Security group member updated ['6262f27b-ae7f-4862-a034-43ed1f313c2e']#033[00m Dec 5 05:12:11 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:11.661 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:12:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e158 do_prune osdmap full prune enabled Dec 5 05:12:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e159 e159: 6 total, 6 up, 6 in Dec 5 05:12:11 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e159: 6 total, 6 up, 6 in Dec 5 05:12:12 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:12.250 2 INFO neutron.agent.securitygroups_rpc [None req-4b26bab8-a657-4bed-8b9b-bee7300fbeb3 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated 
['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m Dec 5 05:12:12 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:12.254 2 INFO neutron.agent.securitygroups_rpc [None req-ccc77b1e-6cab-481a-912a-ed42ab93e56d 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.952 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.953 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 5 05:12:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e159 do_prune osdmap full prune enabled Dec 5 05:12:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e160 e160: 6 total, 6 up, 6 in Dec 5 05:12:12 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e160: 6 total, 6 up, 6 in Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.991 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.992 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f9f0519e-8c74-4a96-9581-bc3bcc0a53f7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:12:12.954156', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'de2e4234-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.128464688, 'message_signature': '25ed8924f60c41c65f7519b5161a70834493685a0735d0a0a7009b0165adfe68'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:12:12.954156', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'de2e572e-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.128464688, 'message_signature': 'e268edea43b1a7aead78c98288157a88d0cc14ecddd1d8e748b31ec732b34679'}]}, 'timestamp': '2025-12-05 10:12:12.992720', '_unique_id': 'a433a1d8a2cc49a5a5444fe8469db712'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.994 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:12.995 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.000 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.001 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8730102f-8d52-4925-a26e-524a1acc2cbf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:12:12.995649', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'de2f8ebe-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.169980449, 'message_signature': '9d4a00738c9d7b4e60fdecd8b2041c3c3637f48eb7ab9c0b32033987a430dbd2'}]}, 'timestamp': '2025-12-05 10:12:13.000730', '_unique_id': '0fdef253652f4c00bbdbcbeee359ed40'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
    yield
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
    return retry_over_time(
  File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
    return fun(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
    self._connection = self._establish_connection()
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
    conn = self.transport.establish_connection()
  File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
    conn.connect()
  File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
    self.transport.connect()
  File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
    self._connect(self.host, self.port, self.connect_timeout)
  File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
    self.sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
    self.transport._send_notification(target, ctxt, message,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
    self._driver.send_notification(target, ctxt, message, version,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
    return self._send(target, ctxt, message,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
    with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
    return rpc_common.ConnectionContext(self._connection_pool,
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
    self.connection = connection_pool.get(retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
    return self.create(retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
    return self.connection_cls(self.conf, self.url, purpose, retry=retry)
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
    self.ensure_connection()
  File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
    self.connection.ensure_connection(
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
    self._ensure_connection(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
    return retry_over_time(
  File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
    raise ConnectionError(str(exc)) from exc
kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.003 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.003 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.003 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
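Every pollster in this window fails the same way: the agent cannot open a TCP connection to its RabbitMQ transport. A quick way to confirm the broker state outside the agent is to drive the same kombu call chain the traceback shows, Connection.ensure_connection() down to sock.connect(). This is a minimal sketch, not the agent's code; the transport URL below is an assumption and should be replaced with the transport_url actually configured for ceilometer.

import kombu

# Assumed URL; substitute the deployment's real transport_url.
conn = kombu.Connection("amqp://guest:guest@localhost:5672//")
try:
    # Same entry point as impl_rabbit's ensure_connection in the traceback.
    conn.ensure_connection(max_retries=1)
    print("broker reachable")
except kombu.exceptions.OperationalError as exc:
    # A dead broker surfaces here as "[Errno 111] Connection refused",
    # exactly the error the agent keeps logging.
    print(f"broker unreachable: {exc}")
finally:
    conn.release()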
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.004 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bf1514a6-cfea-4c9e-b2b3-04f42964b9aa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:12:13.003524', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'de300eb6-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.169980449, 'message_signature': '2cd214766ee27a28c78d73a38618603910836ba5d244dcd1e0df5696f1b7ddf6'}]}, 'timestamp': '2025-12-05 10:12:13.003968', '_unique_id': '430cfa79274b41bc8a74cb0ec9452219'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[identical ConnectionRefusedError -> kombu.exceptions.OperationalError traceback repeated; reconstructed once above]
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.006 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.006 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.006 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
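Each of these entries contains two stacked tracebacks joined by "The above exception was the direct cause of the following exception:". That is kombu's _reraise_as_library_errors context manager converting the low-level ConnectionRefusedError into its library-level OperationalError via "raise ... from exc". A stripped-down sketch of the pattern, with illustrative names rather than kombu's actual code:

from contextlib import contextmanager

class LibraryOperationalError(Exception):
    """Stand-in for kombu.exceptions.OperationalError (illustration only)."""

@contextmanager
def reraise_as_library_errors():
    try:
        yield
    except OSError as exc:  # ConnectionRefusedError is a subclass of OSError
        # 'from exc' preserves the exception chain, which is why the log
        # prints both tracebacks with the "direct cause" separator between.
        raise LibraryOperationalError(str(exc)) from exc

# Usage: any OSError raised inside the block is re-raised with the chain intact.
# with reraise_as_library_errors():
#     raise ConnectionRefusedError(111, "Connection refused")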
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.008 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '55d63e2f-7103-46ef-b4f6-7b4ac0562585', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:12:13.006184', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'de307a04-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.128464688, 'message_signature': 'b53a9baa5ff3f447368cc992554a4aa5927d72cbf842a0133a58f4ef26305242'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:12:13.006184', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'de3089d6-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.128464688, 'message_signature': 'f2909c37db661ba490eb2ab78f5f7e2ad0614767a5abd93d922747e72311c8b0'}]}, 'timestamp': '2025-12-05 10:12:13.007142', '_unique_id': '03a040b8cb47479aafb13acef0e0cddc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[identical ConnectionRefusedError -> kombu.exceptions.OperationalError traceback repeated; reconstructed once above]
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.009 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.009 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
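The payload fields identify the emitting side: publisher_id 'ceilometer.polling', event_type 'telemetry.polling', priority 'SAMPLE', topic 'notifications'. In oslo.messaging terms that corresponds to a Notifier publishing samples over a notification transport, roughly as sketched below; the URL is an assumption and the payload is abbreviated, so treat this as a sketch of the mechanism rather than ceilometer's actual publisher code.

from oslo_config import cfg
import oslo_messaging

# Assumed URL; the agent takes its own transport_url from configuration.
transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://guest:guest@localhost:5672/")
notifier = oslo_messaging.Notifier(transport,
                                   publisher_id="ceilometer.polling",
                                   driver="messagingv2",
                                   topics=["notifications"])
# sample() publishes with priority SAMPLE; with the broker refusing
# connections, the notify layer logs "Could not send notification"
# instead of delivering, as seen throughout this log.
notifier.sample({}, event_type="telemetry.polling",
                payload={"samples": []})  # real payloads carry the counters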
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.010 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ac33c800-f9ce-462d-b73b-4169ac37d824', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:12:13.009397', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'de30f506-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.169980449, 'message_signature': '790730c759892afbfa39e0010c869145c852cd644de1d2313a913b905cf208bf'}]}, 'timestamp': '2025-12-05 10:12:13.009895', '_unique_id': '32e82c528a4446efb7a0217ee7da8752'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[identical ConnectionRefusedError -> kombu.exceptions.OperationalError traceback repeated; reconstructed once above]
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.011 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.011 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
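The inner traceback passes through kombu.utils.functional.retry_over_time before giving up, so each logged failure already represents a retried connection attempt. A simplified sketch of that retry-with-backoff idea, not kombu's exact implementation:

import time

def retry_over_time(fun, catch, max_retries=None,
                    interval_start=2.0, interval_step=2.0, interval_max=30.0):
    """Call fun() until it succeeds, sleeping with linear backoff between tries."""
    retries = 0
    interval = interval_start
    while True:
        try:
            return fun()
        except catch:
            retries += 1
            if max_retries is not None and retries > max_retries:
                raise  # retries exhausted: the caller sees the original error
            time.sleep(min(interval, interval_max))
            interval += interval_step

# Usage: retry_over_time(connect_once, ConnectionRefusedError, max_retries=3)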
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.012 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3c528890-2383-4c62-835b-9f0d9a37984d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:12:13.011522', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'de314452-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.169980449, 'message_signature': '70d70f6e9441b44eb5919aaf74cfa08f6f21808ea1b632572499dbf8d83dac30'}]}, 'timestamp': '2025-12-05 10:12:13.011831', '_unique_id': 'd3b98136c1de4e53a1f40e733af1dd5b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[identical ConnectionRefusedError -> kombu.exceptions.OperationalError traceback repeated; reconstructed once above]
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.013 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.013 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
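The bottom frame of every traceback is a plain TCP connect (amqp's transport calls self.sock.connect(sa)), so the refusal can be confirmed independently of the messaging stack with a bare socket. Host and port below are assumptions for a default RabbitMQ endpoint:

import socket

try:
    # Assumed endpoint; use the host/port from the configured transport_url.
    with socket.create_connection(("localhost", 5672), timeout=2):
        print("TCP 5672 open: broker is listening")
except ConnectionRefusedError as exc:
    print(f"refused: {exc}")  # errno 111: nothing is listening on the port
except OSError as exc:
    print(f"other failure: {exc}")  # e.g. timeout or unreachable host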
Payload={'message_id': '8e9cbb5c-1c03-4e51-9530-2b60dc486069', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:12:13.013686', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'de3199d4-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.169980449, 'message_signature': '6be633c8483d93ee5bc6e53a9b0801b4b0e5e35ff04fb51cc6c9eee2745f106b'}]}, 'timestamp': '2025-12-05 10:12:13.014015', '_unique_id': '7a5802722c3a4fde9f1bb7b998d90f3e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.014 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.015 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.015 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.015 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.015 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '08e3236f-6ebf-4158-b3ea-2d1f5dcf8286', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:12:13.015711', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'de31e826-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.128464688, 'message_signature': 'e9057b6a69bae5a5c3026a530ba90a207b00f3d23ead4293eaf5853ef09de79e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:12:13.015711', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'de31f1cc-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.128464688, 'message_signature': '9fa104e1a3cf0dedee5faf8938f75f61b8358886f3d449fef4d5361663660542'}]}, 'timestamp': '2025-12-05 10:12:13.016227', '_unique_id': '7ed00db1fea54a42a350ed8951acaff2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.016 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.017 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.028 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a8ac7aca-ef71-4963-b0d4-9fa10c327826', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:12:13.017832', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'de33f044-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.192135678, 'message_signature': '5546545b9ffced7beccaa451842f01450fa3fce9edad4f2d1591a5000232fd3a'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:12:13.017832', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'de3403fe-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.192135678, 'message_signature': '08bb9a3f2416826c1eaf65d777c4cbf006ef5e79ae9619dc07134c8029c8e8d5'}]}, 'timestamp': '2025-12-05 10:12:13.029893', '_unique_id': '8651b451614849519ba486e8d3e37774'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.031 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.032 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.052 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 17600000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ebf3f989-2693-44ea-9574-ba7bbd296f4c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 17600000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:12:13.032397', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'de379ab4-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.227021315, 'message_signature': 'f1268c3cf31b7dfef3bf5b13ca847509658a2a83136b11fd266192eb3978b0c4'}]}, 'timestamp': '2025-12-05 10:12:13.053556', '_unique_id': 'e498c6932cd04ea0a03cd351cb1ffd9b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, 
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.054 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.056 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.056 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.056 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b465763a-974c-4cf8-8522-7b0e99e602cb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:12:13.056545', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'de3827e0-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.169980449, 'message_signature': '042bcb3541cc41a6a53ac9704f62f393809477a02bb929a6cc0f8aaf4dc34f55'}]}, 'timestamp': '2025-12-05 10:12:13.057070', '_unique_id': '234d7fd389544f8a9eb2af2ab37f878e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.058 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.059 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.060 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.060 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
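The traceback above shows the failure path end to end: the raw socket error from amqp (ConnectionRefusedError, errno 111, raised under transport.py's _connect) propagates up through kombu's retry_over_time() and is re-raised by _reraise_as_library_errors() as kombu.exceptions.OperationalError, which is what oslo.messaging ultimately reports. A minimal sketch that reproduces this mapping outside the agent, assuming kombu 5.x is installed and nothing is listening on localhost:5672 (the broker URL here is an assumption, not taken from the log):

```python
# Hedged sketch, not part of the log: reproduce the error mapping seen above.
# The socket-level ConnectionRefusedError ([Errno 111]) is re-raised by
# kombu's _reraise_as_library_errors() as kombu.exceptions.OperationalError.
from kombu import Connection
from kombu.exceptions import OperationalError

conn = Connection('amqp://guest:guest@localhost:5672//', connect_timeout=2)
try:
    conn.ensure_connection(max_retries=0)  # fail fast instead of retrying
except OperationalError as exc:
    print('broker unreachable:', exc)      # -> [Errno 111] Connection refused
```

The same OperationalError recurs for every sample batch below because each send builds a fresh connection from the pool (impl_rabbit __init__ -> ensure_connection in the traceback), and each attempt is refused.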
Payload={'message_id': '9c65bac2-5bcf-4320-b3e6-d184b47a92d1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:12:13.060022', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'de38b016-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.128464688, 'message_signature': '16959779fb14811ba489470ca6205a7caf9e6aaa4fb17d0a2a7afc85e1b57b59'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:12:13.060022', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'de38c1e6-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.128464688, 'message_signature': '0f21095c2459f27db66d33edc071e72d9bef241fc53ee542b99dc2bfe0d15e63'}]}, 'timestamp': '2025-12-05 10:12:13.061053', '_unique_id': '1c1dd829e1b8466f8ebd51537b6fa5f6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.062 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.064 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.064 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.064 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
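Each entry in payload['samples'] above follows ceilometer's flat sample schema, and for per-device disk meters the resource_id is the instance UUID suffixed with the device name (…-vda, …-vdb), which is why the disk.device.read.latency batch carries two samples with distinct volumes. A hedged sketch of that layout; the helper name is hypothetical and only the field values shown are copied from the log:

```python
# Hypothetical helper illustrating the per-device sample layout seen above.
from datetime import datetime, timezone

def make_disk_sample(instance_id: str, disk_name: str, volume_ns: int) -> dict:
    """Build a ceilometer-style sample for a per-device cumulative meter."""
    return {
        'source': 'openstack',
        'counter_name': 'disk.device.read.latency',
        'counter_type': 'cumulative',                 # lifetime total
        'counter_unit': 'ns',
        'counter_volume': volume_ns,
        'resource_id': f'{instance_id}-{disk_name}',  # e.g. ...-vda
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'resource_metadata': {'disk_name': disk_name},
    }

sample = make_disk_sample('96a47a1c-57c7-4bb1-aecc-33db976db8c7',
                          'vda', 1657873269)
```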
Payload={'message_id': '2320c693-8ea9-4913-9163-7fb218af3452', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:12:13.064173', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'de39549e-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.192135678, 'message_signature': 'a7bacb40efc2b7e9702cc816914ace38dd4cbc23f390bbf944f299062723de31'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:12:13.064173', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'de39665a-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.192135678, 'message_signature': 'f39faace456fc0a1ebfb00b6bf6a1502a8348d7e2e452205c6f32872e61b7e05'}]}, 'timestamp': '2025-12-05 10:12:13.065178', '_unique_id': '22c637a8d6514c90b754b3f98477b759'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.066 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.067 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.067 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
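Three counter_type values appear across these payloads: cumulative (lifetime totals such as disk.device.read.latency), gauge (point-in-time readings such as disk.device.usage and memory.usage), and delta (change since the previous poll, such as network.incoming.bytes.delta, which is 0 above because no traffic arrived between polls). An illustrative sketch of how a delta relates to successive cumulative readings; the numbers are made up, not taken from the log:

```python
# Illustrative only: a delta sample is the difference between two successive
# cumulative readings, clamped at 0 if the counter reset (e.g. instance reboot).
def cumulative_to_delta(prev_volume: int, curr_volume: int) -> int:
    delta = curr_volume - prev_volume
    return delta if delta >= 0 else 0

assert cumulative_to_delta(1000, 1000) == 0   # no change between polls -> 0
assert cumulative_to_delta(1000, 1500) == 500
```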
Payload={'message_id': 'a973f2d5-efc2-4c40-b2e3-65b85b0c7fac', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:12:13.067932', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'de39e440-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.169980449, 'message_signature': '8b81a525058f24e8a58d6ff24198f87115c621e539c9bc27fdb70830fa3b0822'}]}, 'timestamp': '2025-12-05 10:12:13.068524', '_unique_id': '05d819c8161f42229f89efe4c0d47390'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:12:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.069 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.071 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.071 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
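The 'priority': 'SAMPLE' and 'event_type': 'telemetry.polling' fields correspond to oslo.messaging's notifier API: the agent publishes each batch as a SAMPLE-priority notification on the notifications topic, and the notify driver catches the delivery failure and logs it (the "Could not send notification to notifications." lines above), which is why polling continues despite the dead broker. A minimal sketch of that publishing side, assuming the rabbit:// URL (not present in the log) and an oslo.config setup:

```python
# Hedged sketch of the publishing path; the broker URL is an assumption.
from oslo_config import cfg
import oslo_messaging

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
notifier = oslo_messaging.Notifier(
    transport, publisher_id='ceilometer.polling',
    driver='messagingv2', topics=['notifications'], retry=0)

# SAMPLE priority, as in the payloads above. On failure the driver logs
# "Could not send notification to notifications." instead of raising.
notifier.sample({}, event_type='telemetry.polling',
                payload={'samples': []})
```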
Payload={'message_id': 'ffd51f29-c610-4a2f-b329-2930441caf7a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:12:13.071679', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'de3a7784-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.169980449, 'message_signature': 'be611036ac0244102ec6fadf7958beb5023786dae45d1e498a699dc48f2cc13d'}]}, 'timestamp': '2025-12-05 10:12:13.072208', '_unique_id': '20c32182b12040f687e4a2682f7991dc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.073 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.074 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.074 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
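Every sample above also carries a message_signature, an HMAC-SHA256 over the sample fields keyed with the deployment's telemetry_secret, so consumers can reject tampered samples. A simplified illustration of that idea; ceilometer's actual canonicalization (in ceilometer.publisher.utils) differs in detail, and the secret below is a stand-in, not the deployment's value:

```python
# Simplified, not ceilometer's exact algorithm: HMAC-SHA256 over sorted
# key/value pairs, skipping the signature field itself.
import hashlib
import hmac

def sign_sample(sample: dict, secret: bytes) -> str:
    digest = hmac.new(secret, digestmod=hashlib.sha256)
    for key in sorted(sample):
        if key == 'message_signature':   # never sign the signature field
            continue
        digest.update(str(key).encode('utf-8'))
        digest.update(str(sample[key]).encode('utf-8'))
    return digest.hexdigest()

# Volume taken from the memory.usage sample above; the secret is assumed.
print(sign_sample({'counter_name': 'memory.usage',
                   'counter_volume': 51.7421875}, b'telemetry_secret'))
```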
Payload={'message_id': 'c8ca06d7-dc54-4f0b-ab31-c2a73e54ea98', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:12:13.074710', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'de3aed2c-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.227021315, 'message_signature': 'aeca2d17a8943acf26b69253a41700fba60751f836a187e2b65d553e0897879f'}]}, 'timestamp': '2025-12-05 10:12:13.075197', '_unique_id': 'a773615c2a764abf8c80d09b0ddd6230'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:12:13.076 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 
05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.076 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.077 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.078 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.078 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '201e9c54-d1ae-4fc6-922e-e497b0ebe630', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:12:13.078143', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'de3b7332-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.128464688, 'message_signature': '223fc53e993022c1b306f40e426055cadda22c27baa5c086996e729ebdd96693'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:12:13.078143', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'de3b835e-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.128464688, 'message_signature': '7811d8c95f2f6c538aa9d273d79f45b671f30e8de3a8b7cf418c7c37d4585ef9'}]}, 'timestamp': '2025-12-05 10:12:13.079020', '_unique_id': '1ec50c9e26de41aa8287264cab8376ef'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.080 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.081 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.082 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.082 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'da05913c-b3d1-40a8-a48a-dd28ce3be9ac', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:12:13.082139', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'de3c1292-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.192135678, 'message_signature': 'dfa284394d8851c5d8a88bf32acec0946b784cac7ecb54f54cd357e95c1cc17d'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:12:13.082139', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'de3c2372-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.192135678, 'message_signature': 'b2c2126e2c3174101ce47ec7614cc0fffc9dc6ed45a7638e2d7b88cd7fa5bc84'}]}, 'timestamp': '2025-12-05 10:12:13.083122', '_unique_id': 'e2494e50b6f24690bc314b7cb96ba988'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.084 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.085 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.085 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.085 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.086 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7d4427c2-128d-463c-96f6-1694848f7f6f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:12:13.085593', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'de3c9514-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.128464688, 'message_signature': 'dd99f32ad8c9b89f2d5a1f80a5b88f1354fd56cfe21e348ef60f8802f26032a6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:12:13.085593', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'de3cbdb4-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.128464688, 'message_signature': '15a0e257b7d20fdcc6394c197ff095638e37d817cfe0194d46ec50e3ae0b9ae7'}]}, 'timestamp': '2025-12-05 10:12:13.087084', '_unique_id': '6e0cd3b7c7f64bb49811f716df39e6bf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR 
oslo_messaging.notify.messaging yield Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:12:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.088 12 ERROR oslo_messaging.notify.messaging Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.089 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.089 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a36b6c50-33bf-4382-af57-22c3e9a549cd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:12:13.089461', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'de3d303c-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.169980449, 'message_signature': '70e13bb4058a7f0ebfe6c8db25c85a01edfe3a00f675e7fff179380b4610ce0a'}]}, 'timestamp': '2025-12-05 10:12:13.090106', '_unique_id': '0614379a715b475a9967d6b289093c9d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.091 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.092 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.092 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '10db0b86-f5ba-4570-8009-cb0e40d6829a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:12:13.092834', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'de3db192-d1c2-11f0-8ba6-fa163e982365', 'monotonic_time': 12648.169980449, 'message_signature': '9923777055b0703ce97e5f7bc524e953d695111413232793155c5ba41379292c'}]}, 'timestamp': '2025-12-05 10:12:13.093425', '_unique_id': '5a4815c632f746b689da89df93754603'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:12:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:12:13.094 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:12:13 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "de805df9-2757-4e33-8bfe-65fc6ef40510", "format": "json"}]: dispatch
Dec 5 05:12:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:de805df9-2757-4e33-8bfe-65fc6ef40510, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:12:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:de805df9-2757-4e33-8bfe-65fc6ef40510, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:12:13 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:12:13.108+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'de805df9-2757-4e33-8bfe-65fc6ef40510' of type subvolume
Dec 5 05:12:13 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'de805df9-2757-4e33-8bfe-65fc6ef40510' of type subvolume
Dec 5 05:12:13 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "de805df9-2757-4e33-8bfe-65fc6ef40510", "force": true, "format": "json"}]: dispatch
Dec 5 05:12:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:de805df9-2757-4e33-8bfe-65fc6ef40510, vol_name:cephfs) < ""
Dec 5 05:12:13 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/de805df9-2757-4e33-8bfe-65fc6ef40510'' moved to trashcan
Dec 5 05:12:13 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:12:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:de805df9-2757-4e33-8bfe-65fc6ef40510, vol_name:cephfs) < ""
Dec 5 05:12:13 localhost nova_compute[280228]: 2025-12-05 10:12:13.162 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:12:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v386: 177 pgs: 177 active+clean; 193 MiB data, 888 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 38 KiB/s wr, 68 op/s
Dec 5 05:12:13 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:13.528 2 INFO neutron.agent.securitygroups_rpc [None req-573803be-a4fe-451f-b3d4-e9e26fafbcb6 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m
Dec 5 05:12:14 localhost nova_compute[280228]: 2025-12-05 10:12:14.709 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:12:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "auth_id": "tempest-cephx-id-1227377066", "format": "json"}]: dispatch
Dec 5 05:12:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1227377066, format:json, prefix:fs subvolume deauthorize, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < ""
Dec 5 05:12:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1227377066", "format": "json"} v 0)
Dec 5 05:12:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1227377066", "format": "json"} : dispatch
Dec 5 05:12:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1227377066"} v 0)
Dec 5 05:12:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1227377066"} : dispatch
Dec 5 05:12:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1227377066"}]': finished
Dec 5 05:12:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1227377066, format:json, prefix:fs subvolume deauthorize, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < ""
Dec 5 05:12:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "auth_id": "tempest-cephx-id-1227377066", "format": "json"}]: dispatch
Dec 5 05:12:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1227377066, format:json, prefix:fs subvolume evict, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < ""
Dec 5 05:12:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1227377066, client_metadata.root=/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2/2445c2dd-0554-4d30-92cc-44fd04cf4a33
Dec 5 05:12:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:12:14 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1985756166' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:12:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 5 05:12:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1227377066, format:json, prefix:fs subvolume evict, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < ""
Dec 5 05:12:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:12:14 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1985756166' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:12:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e160 do_prune osdmap full prune enabled
Dec 5 05:12:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e161 e161: 6 total, 6 up, 6 in
Dec 5 05:12:14 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e161: 6 total, 6 up, 6 in
Dec 5 05:12:15 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1227377066", "format": "json"} : dispatch
Dec 5 05:12:15 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1227377066"} : dispatch
Dec 5 05:12:15 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1227377066"}]': finished
Dec 5 05:12:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:12:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:12:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:12:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:12:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:12:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:12:15 localhost ovn_controller[153000]: 2025-12-05T10:12:15Z|00308|binding|INFO|Removing iface tapa5983d60-4e ovn-installed in OVS
Dec 5 05:12:15 localhost ovn_controller[153000]: 2025-12-05T10:12:15Z|00309|binding|INFO|Removing lport a5983d60-4ea5-4e0b-a307-69fc1aa2bab7 ovn-installed in OVS
Dec 5 05:12:15 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:15.280 158820 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port b8c23a09-6105-4242-a7f7-6cb862969680 with type ""#033[00m
Dec 5 05:12:15 localhost nova_compute[280228]: 2025-12-05 10:12:15.281 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:12:15 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:15.282 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-36586aea-918e-4915-9f79-dcb64e06aa29', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36586aea-918e-4915-9f79-dcb64e06aa29', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd75aa877d484a7090a001691a2a520b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2376ed04-205a-4369-b306-aac7f14cb92e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a5983d60-4ea5-4e0b-a307-69fc1aa2bab7) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:12:15 localhost nova_compute[280228]: 2025-12-05 10:12:15.288 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:12:15 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:15.286 158820 INFO neutron.agent.ovn.metadata.agent [-] Port a5983d60-4ea5-4e0b-a307-69fc1aa2bab7 in datapath 36586aea-918e-4915-9f79-dcb64e06aa29 unbound from our chassis#033[00m
Dec 5 05:12:15 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:15.288 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 36586aea-918e-4915-9f79-dcb64e06aa29 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 5 05:12:15 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:15.290 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[db6d9d18-b186-43a4-91a5-bde5e8e3986c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:12:15 localhost dnsmasq[317736]: exiting on receipt of SIGTERM
Dec 5 05:12:15 localhost podman[319626]: 2025-12-05 10:12:15.293763016 +0000 UTC m=+0.065804425 container kill b744f3b3fefb2f9fbd701529ef107dd9bac43bfde247912e8fe432c19d5947f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36586aea-918e-4915-9f79-dcb64e06aa29, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:12:15 localhost systemd[1]: libpod-b744f3b3fefb2f9fbd701529ef107dd9bac43bfde247912e8fe432c19d5947f0.scope: Deactivated successfully.
Dec 5 05:12:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 193 MiB data, 888 MiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 40 KiB/s wr, 71 op/s
Dec 5 05:12:15 localhost podman[319639]: 2025-12-05 10:12:15.368488825 +0000 UTC m=+0.061319809 container died b744f3b3fefb2f9fbd701529ef107dd9bac43bfde247912e8fe432c19d5947f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36586aea-918e-4915-9f79-dcb64e06aa29, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:12:15 localhost systemd[1]: tmp-crun.pYkeCt.mount: Deactivated successfully.
Dec 5 05:12:15 localhost podman[319639]: 2025-12-05 10:12:15.405948781 +0000 UTC m=+0.098779685 container cleanup b744f3b3fefb2f9fbd701529ef107dd9bac43bfde247912e8fe432c19d5947f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36586aea-918e-4915-9f79-dcb64e06aa29, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:12:15 localhost systemd[1]: libpod-conmon-b744f3b3fefb2f9fbd701529ef107dd9bac43bfde247912e8fe432c19d5947f0.scope: Deactivated successfully.
Dec 5 05:12:15 localhost podman[319641]: 2025-12-05 10:12:15.442400707 +0000 UTC m=+0.126912566 container remove b744f3b3fefb2f9fbd701529ef107dd9bac43bfde247912e8fe432c19d5947f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36586aea-918e-4915-9f79-dcb64e06aa29, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:12:15 localhost kernel: device tapa5983d60-4e left promiscuous mode
Dec 5 05:12:15 localhost nova_compute[280228]: 2025-12-05 10:12:15.455 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:12:15 localhost nova_compute[280228]: 2025-12-05 10:12:15.472 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:12:15 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:15.503 261902 INFO neutron.agent.dhcp.agent [None req-4874d738-83b9-43ff-8e01-42face7f6adf - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:12:15 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:15.504 261902 INFO neutron.agent.dhcp.agent [None req-4874d738-83b9-43ff-8e01-42face7f6adf - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:12:15 localhost nova_compute[280228]: 2025-12-05 10:12:15.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:12:15 localhost ovn_controller[153000]: 2025-12-05T10:12:15Z|00310|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:12:15 localhost nova_compute[280228]: 2025-12-05 10:12:15.609 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:12:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:12:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:12:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:12:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:12:16 localhost podman[319672]: 2025-12-05 10:12:16.192622248 +0000 UTC m=+0.069553831 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:12:16 localhost podman[319672]: 2025-12-05 10:12:16.203074608 +0000 UTC m=+0.080006221 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 5 05:12:16 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:12:16 localhost podman[319670]: 2025-12-05 10:12:16.220214163 +0000 UTC m=+0.103590253 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:12:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "56639cdb-9f87-40b6-91d1-c10cab7d966b", "format": "json"}]: dispatch
Dec 5 05:12:16 localhost podman[319671]: 2025-12-05 10:12:16.263229589 +0000 UTC m=+0.141939206 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 5 05:12:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:56639cdb-9f87-40b6-91d1-c10cab7d966b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:12:16 localhost podman[319671]: 2025-12-05 10:12:16.267408878 +0000 UTC m=+0.146118505 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Dec 5 05:12:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:56639cdb-9f87-40b6-91d1-c10cab7d966b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:12:16 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '56639cdb-9f87-40b6-91d1-c10cab7d966b' of type subvolume
Dec 5 05:12:16 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:12:16.277+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '56639cdb-9f87-40b6-91d1-c10cab7d966b' of type subvolume
Dec 5 05:12:16 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:12:16 localhost podman[319670]: 2025-12-05 10:12:16.281697855 +0000 UTC m=+0.165073915 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:12:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "56639cdb-9f87-40b6-91d1-c10cab7d966b", "force": true, "format": "json"}]: dispatch
Dec 5 05:12:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:56639cdb-9f87-40b6-91d1-c10cab7d966b, vol_name:cephfs) < ""
Dec 5 05:12:16 localhost systemd[1]: tmp-crun.Gf2EYq.mount: Deactivated successfully.
Dec 5 05:12:16 localhost systemd[1]: var-lib-containers-storage-overlay-6eedfc4133a81f3e77b73f47c6847b022359c68919d9c1afc35cd1b80183906c-merged.mount: Deactivated successfully.
Dec 5 05:12:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b744f3b3fefb2f9fbd701529ef107dd9bac43bfde247912e8fe432c19d5947f0-userdata-shm.mount: Deactivated successfully.
Dec 5 05:12:16 localhost systemd[1]: run-netns-qdhcp\x2d36586aea\x2d918e\x2d4915\x2d9f79\x2ddcb64e06aa29.mount: Deactivated successfully.
Dec 5 05:12:16 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/56639cdb-9f87-40b6-91d1-c10cab7d966b'' moved to trashcan
Dec 5 05:12:16 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:12:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:56639cdb-9f87-40b6-91d1-c10cab7d966b, vol_name:cephfs) < ""
Dec 5 05:12:16 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:12:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e161 do_prune osdmap full prune enabled
Dec 5 05:12:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e162 e162: 6 total, 6 up, 6 in
Dec 5 05:12:17 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e162: 6 total, 6 up, 6 in
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.069674) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929537069739, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1302, "num_deletes": 256, "total_data_size": 1196820, "memory_usage": 1222880, "flush_reason": "Manual Compaction"}
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929537081389, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 1173458, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29676, "largest_seqno": 30977, "table_properties": {"data_size": 1167345, "index_size": 3327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14809, "raw_average_key_size": 21, "raw_value_size": 1154518, "raw_average_value_size": 1702, "num_data_blocks": 140, "num_entries": 678, "num_filter_entries": 678, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929470, "oldest_key_time": 1764929470, "file_creation_time": 1764929537, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 11771 microseconds, and 5331 cpu microseconds.
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.081450) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 1173458 bytes OK
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.081478) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.084189) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.084216) EVENT_LOG_v1 {"time_micros": 1764929537084207, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.084267) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1190702, prev total WAL file size 1190702, number of live WAL files 2.
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.084993) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end)
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(1145KB)], [51(16MB)]
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929537085044, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 18984225, "oldest_snapshot_seqno": -1}
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 12749 keys, 17763467 bytes, temperature: kUnknown
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929537173938, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 17763467, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17689517, "index_size": 40953, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31941, "raw_key_size": 341619, "raw_average_key_size": 26, "raw_value_size": 17471382, "raw_average_value_size": 1370, "num_data_blocks": 1550, "num_entries": 12749, "num_filter_entries": 12749, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764929537, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.174454) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 17763467 bytes
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.176491) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.3 rd, 199.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 17.0 +0.0 blob) out(16.9 +0.0 blob), read-write-amplify(31.3) write-amplify(15.1) OK, records in: 13288, records dropped: 539 output_compression: NoCompression
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.176514) EVENT_LOG_v1 {"time_micros": 1764929537176503, "job": 30, "event": "compaction_finished", "compaction_time_micros": 89016, "compaction_time_cpu_micros": 43478, "output_level": 6, "num_output_files": 1, "total_output_size": 17763467, "num_input_records": 13288, "num_output_records": 12749, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929537177054, "job": 30, "event": "table_file_deletion", "file_number": 53}
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929537179619, "job": 30, "event": "table_file_deletion", "file_number": 51}
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.084877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.179790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.179800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.179803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.179806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:12:17 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:12:17.179810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:12:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v390: 177 pgs: 177 active+clean; 193 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 45 KiB/s wr, 111 op/s
Dec 5 05:12:17 localhost nova_compute[280228]: 2025-12-05 10:12:17.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:12:17 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:17.519 2 INFO neutron.agent.securitygroups_rpc [None req-cd9bae7f-808b-456c-bf8e-f31a82155a0f 859f234eba4c442983333d06bc12b112 0d15dccf4c864d558d055b0c7cd1cccc - - default default] Security group member updated ['6262f27b-ae7f-4862-a034-43ed1f313c2e']#033[00m
Dec 5 05:12:17 localhost nova_compute[280228]: 2025-12-05 10:12:17.593 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:12:17 localhost nova_compute[280228]: 2025-12-05 10:12:17.594 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:12:17 localhost nova_compute[280228]: 2025-12-05 10:12:17.594 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:12:17 localhost nova_compute[280228]: 2025-12-05 10:12:17.594 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 5 05:12:17 localhost nova_compute[280228]: 2025-12-05 10:12:17.595 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 05:12:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:12:18 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.?
172.18.0.106:0/2431482962' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.074 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.163 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.164 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.212 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "auth_id": "Joe", "format": "json"}]: dispatch Dec 5 05:12:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:f85fdc57-8808-499d-89b5-dab3ea53a537, vol_name:cephfs) < "" Dec 5 05:12:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Dec 5 05:12:18 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Dec 5 05:12:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) Dec 5 05:12:18 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Dec 5 05:12:18 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.383 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.384 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11160MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.384 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.385 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:12:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:f85fdc57-8808-499d-89b5-dab3ea53a537, vol_name:cephfs) < "" Dec 5 05:12:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", 
"sub_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "auth_id": "Joe", "format": "json"}]: dispatch Dec 5 05:12:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:f85fdc57-8808-499d-89b5-dab3ea53a537, vol_name:cephfs) < "" Dec 5 05:12:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537/ca912719-4ded-4fbc-bc88-8b50dbf8a797 Dec 5 05:12:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:12:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:f85fdc57-8808-499d-89b5-dab3ea53a537, vol_name:cephfs) < "" Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.457 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.457 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.457 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.494 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:12:18 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:18.876 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:50:08 2001:db8:0:1:f816:3eff:fedc:5008'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=84b58e4d-217c-42eb-90cf-7b7b22ca7084, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8f4f2914-c39b-4b1e-a2e4-5073c675e53f) old=Port_Binding(mac=['fa:16:3e:dc:50:08 2001:db8::f816:3eff:fedc:5008'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedc:5008/64', 'neutron:device_id': 'ovnmeta-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bfb3d96-9ce0-4e33-9462-530d609ec69d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d6c9392a40c4bcc824eba6a30de937f', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:12:18 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:18.878 158820 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8f4f2914-c39b-4b1e-a2e4-5073c675e53f in datapath 0bfb3d96-9ce0-4e33-9462-530d609ec69d updated#033[00m Dec 5 05:12:18 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:18.881 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bfb3d96-9ce0-4e33-9462-530d609ec69d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:12:18 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:18.882 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[fe745941-0f77-4964-a577-414c6ab99160]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:12:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:12:18 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1601322822' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.926 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.932 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.954 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.956 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:12:18 localhost nova_compute[280228]: 2025-12-05 10:12:18.956 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:12:19 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Dec 5 05:12:19 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Dec 5 05:12:19 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Dec 5 05:12:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 193 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 38 KiB/s wr, 94 op/s Dec 5 05:12:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e162 do_prune osdmap full prune enabled Dec 5 05:12:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e163 e163: 6 total, 6 up, 6 in Dec 5 05:12:19 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e163: 6 total, 6 up, 6 in Dec 5 05:12:19 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:19.638 2 INFO neutron.agent.securitygroups_rpc [None req-20e10a9c-3ee9-4c67-b462-5bcd13a7c5f8 
81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:12:19 localhost nova_compute[280228]: 2025-12-05 10:12:19.712 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:19 localhost podman[239519]: time="2025-12-05T10:12:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:12:19 localhost podman[239519]: @ - - [05/Dec/2025:10:12:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:12:19 localhost podman[239519]: @ - - [05/Dec/2025:10:12:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19262 "" "Go-http-client/1.1" Dec 5 05:12:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:12:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e163 do_prune osdmap full prune enabled Dec 5 05:12:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e164 e164: 6 total, 6 up, 6 in Dec 5 05:12:20 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. Dec 5 05:12:20 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e164: 6 total, 6 up, 6 in Dec 5 05:12:20 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:20.909 2 INFO neutron.agent.securitygroups_rpc [None req-25e7fcf2-d6a4-4a73-97e0-30de885c71b5 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']#033[00m Dec 5 05:12:21 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:21.009 2 INFO neutron.agent.securitygroups_rpc [None req-6f7c4e29-23fb-4261-b44a-d1c166858ed2 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m Dec 5 05:12:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 194 MiB data, 911 MiB used, 41 GiB / 42 GiB avail; 107 KiB/s rd, 79 KiB/s wr, 153 op/s Dec 5 05:12:21 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "auth_id": "admin", "tenant_id": "f4c34f38ddb048808ef72391bdda40b5", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:12:21 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, tenant_id:f4c34f38ddb048808ef72391bdda40b5, vol_name:cephfs) < "" Dec 5 05:12:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) Dec 5 05:12:21 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Dec 5 05:12:21 
localhost ceph-mgr[286454]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify Dec 5 05:12:21 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, tenant_id:f4c34f38ddb048808ef72391bdda40b5, vol_name:cephfs) < "" Dec 5 05:12:21 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:12:21.946+0000 7f996f03a640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Dec 5 05:12:21 localhost ceph-mgr[286454]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Dec 5 05:12:22 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Dec 5 05:12:23 localhost nova_compute[280228]: 2025-12-05 10:12:23.241 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v395: 177 pgs: 177 active+clean; 194 MiB data, 929 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 38 KiB/s wr, 52 op/s Dec 5 05:12:23 localhost nova_compute[280228]: 2025-12-05 10:12:23.960 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:12:23 localhost nova_compute[280228]: 2025-12-05 10:12:23.960 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:12:23 localhost nova_compute[280228]: 2025-12-05 10:12:23.961 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:12:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:12:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
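[editor's note] Several of the nova_compute records above show the resource audit shelling out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`, each call returning 0 in roughly half a second. A minimal sketch of the same probe, assuming the `ceph` CLI is on PATH and that its JSON output carries the usual cluster totals under `stats` (an assumption about the deployed Ceph version):

```python
import json
import subprocess

# Minimal sketch of the probe nova's resource tracker runs above. The
# client id and conf path are copied from the log records; the "stats"
# keys are the customary ceph df JSON totals, assumed here.
cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
stats = json.loads(out)["stats"]
print(f"{stats['total_used_bytes'] / 2**20:.0f} MiB used, "
      f"{stats['total_avail_bytes'] / 2**30:.0f} GiB / "
      f"{stats['total_bytes'] / 2**30:.0f} GiB avail")
```

The printed totals should agree with the ceph-mgr pgmap records above (889 MiB used, 41 GiB / 42 GiB avail at this point in the log).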
Dec 5 05:12:24 localhost nova_compute[280228]: 2025-12-05 10:12:24.138 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:12:24 localhost nova_compute[280228]: 2025-12-05 10:12:24.138 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:12:24 localhost nova_compute[280228]: 2025-12-05 10:12:24.138 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:12:24 localhost nova_compute[280228]: 2025-12-05 10:12:24.139 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:12:24 localhost systemd[1]: tmp-crun.fDRA5r.mount: Deactivated successfully. Dec 5 05:12:24 localhost podman[319773]: 2025-12-05 10:12:24.220405483 +0000 UTC m=+0.098993903 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller) Dec 5 05:12:24 localhost podman[319774]: 2025-12-05 10:12:24.318580098 +0000 UTC m=+0.194446285 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 
'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:12:24 localhost podman[319774]: 2025-12-05 10:12:24.325071587 +0000 UTC m=+0.200937794 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:12:24 localhost podman[319773]: 2025-12-05 10:12:24.338660853 +0000 UTC m=+0.217249253 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:12:24 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:12:24 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:12:24 localhost nova_compute[280228]: 2025-12-05 10:12:24.715 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:25 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "auth_id": "david", "tenant_id": "f4c34f38ddb048808ef72391bdda40b5", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:12:25 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, tenant_id:f4c34f38ddb048808ef72391bdda40b5, vol_name:cephfs) < "" Dec 5 05:12:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Dec 5 05:12:25 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 5 05:12:25 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID david with tenant f4c34f38ddb048808ef72391bdda40b5 Dec 5 05:12:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0/8c0406bc-b10e-4b16-b5be-dfcb9c87425a", "osd", "allow rw pool=manila_data namespace=fsvolumens_9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:12:25 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0/8c0406bc-b10e-4b16-b5be-dfcb9c87425a", "osd", "allow rw pool=manila_data namespace=fsvolumens_9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:12:25 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0/8c0406bc-b10e-4b16-b5be-dfcb9c87425a", "osd", "allow rw pool=manila_data namespace=fsvolumens_9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:12:25 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": 
"client.david", "format": "json"} : dispatch Dec 5 05:12:25 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0/8c0406bc-b10e-4b16-b5be-dfcb9c87425a", "osd", "allow rw pool=manila_data namespace=fsvolumens_9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:12:25 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, tenant_id:f4c34f38ddb048808ef72391bdda40b5, vol_name:cephfs) < "" Dec 5 05:12:25 localhost systemd[1]: tmp-crun.wgs5iQ.mount: Deactivated successfully. Dec 5 05:12:25 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:25.215 261902 INFO neutron.agent.linux.ip_lib [None req-0d81356f-d2e9-4455-a214-28eb6d525448 - - - - - -] Device tap56fe1e4a-4e cannot be used as it has no MAC address#033[00m Dec 5 05:12:25 localhost nova_compute[280228]: 2025-12-05 10:12:25.238 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:25 localhost kernel: device tap56fe1e4a-4e entered promiscuous mode Dec 5 05:12:25 localhost NetworkManager[5960]: [1764929545.2456] manager: (tap56fe1e4a-4e): new Generic device (/org/freedesktop/NetworkManager/Devices/52) Dec 5 05:12:25 localhost nova_compute[280228]: 2025-12-05 10:12:25.246 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:25 localhost ovn_controller[153000]: 2025-12-05T10:12:25Z|00311|binding|INFO|Claiming lport 56fe1e4a-4e6b-47aa-b78f-5610450884d1 for this chassis. Dec 5 05:12:25 localhost ovn_controller[153000]: 2025-12-05T10:12:25Z|00312|binding|INFO|56fe1e4a-4e6b-47aa-b78f-5610450884d1: Claiming unknown Dec 5 05:12:25 localhost systemd-udevd[319831]: Network interface NamePolicy= disabled on kernel command line. 
Dec 5 05:12:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:25.257 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-f90b5114-36d3-42d4-a5d4-f8518886af23', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f90b5114-36d3-42d4-a5d4-f8518886af23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f551b37-84d0-4fe1-872d-7d48d9fc152d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=56fe1e4a-4e6b-47aa-b78f-5610450884d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:12:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:25.258 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 56fe1e4a-4e6b-47aa-b78f-5610450884d1 in datapath f90b5114-36d3-42d4-a5d4-f8518886af23 bound to our chassis#033[00m Dec 5 05:12:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:25.259 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f90b5114-36d3-42d4-a5d4-f8518886af23 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:12:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:25.260 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[e84c30a5-b15a-4e44-a2d4-3d3d95791eb6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:12:25 localhost ovn_controller[153000]: 2025-12-05T10:12:25Z|00313|binding|INFO|Setting lport 56fe1e4a-4e6b-47aa-b78f-5610450884d1 ovn-installed in OVS Dec 5 05:12:25 localhost ovn_controller[153000]: 2025-12-05T10:12:25Z|00314|binding|INFO|Setting lport 56fe1e4a-4e6b-47aa-b78f-5610450884d1 up in Southbound Dec 5 05:12:25 localhost nova_compute[280228]: 2025-12-05 10:12:25.268 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:25 localhost nova_compute[280228]: 2025-12-05 10:12:25.298 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v396: 177 pgs: 177 active+clean; 194 MiB data, 929 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 29 KiB/s wr, 41 op/s Dec 5 05:12:25 localhost nova_compute[280228]: 2025-12-05 10:12:25.326 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:25 localhost nova_compute[280228]: 2025-12-05 10:12:25.415 280232 DEBUG 
nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:12:25 localhost nova_compute[280228]: 2025-12-05 10:12:25.431 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:12:25 localhost nova_compute[280228]: 2025-12-05 10:12:25.432 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:12:25 localhost nova_compute[280228]: 2025-12-05 10:12:25.432 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:12:25 localhost nova_compute[280228]: 2025-12-05 10:12:25.432 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:12:25 localhost nova_compute[280228]: 2025-12-05 10:12:25.433 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:12:25 localhost nova_compute[280228]: 2025-12-05 10:12:25.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:12:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:12:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e164 do_prune osdmap full prune enabled Dec 5 05:12:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e165 e165: 6 total, 6 up, 6 in Dec 5 05:12:25 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e165: 6 total, 6 up, 6 in Dec 5 05:12:25 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:25.959 2 INFO neutron.agent.securitygroups_rpc [None req-f44d7ee3-716d-4a7c-a97c-712507a87563 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m Dec 5 05:12:26 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0/8c0406bc-b10e-4b16-b5be-dfcb9c87425a", "osd", "allow rw pool=manila_data namespace=fsvolumens_9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:12:26 localhost podman[319884]: Dec 5 05:12:26 localhost podman[319884]: 2025-12-05 10:12:26.184044225 +0000 UTC m=+0.077144813 container create 149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f90b5114-36d3-42d4-a5d4-f8518886af23, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 05:12:26 localhost systemd[1]: Started libpod-conmon-149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038.scope. Dec 5 05:12:26 localhost systemd[1]: tmp-crun.ghzkvK.mount: Deactivated successfully. Dec 5 05:12:26 localhost podman[319884]: 2025-12-05 10:12:26.148578049 +0000 UTC m=+0.041678617 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:12:26 localhost systemd[1]: Started libcrun container. 
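[editor's note] The container created here is neutron's per-network DHCP server: the agent wraps one dnsmasq per network in a podman container named `neutron-dnsmasq-qdhcp-<network_id>` and keeps its configuration under `/var/lib/neutron/dhcp/<network_id>/`, as the dnsmasq records that follow show when they re-read `addn_hosts`, `host` and `opts` on every port change. A small sketch that inspects that layout, assuming it is readable on the host:

```python
from pathlib import Path

# Minimal sketch of the per-network dnsmasq layout visible in the records
# here. The network id is taken from the log; reading the files directly
# assumes host access to /var/lib/neutron/dhcp.
network_id = "f90b5114-36d3-42d4-a5d4-f8518886af23"  # from the log
conf_dir = Path("/var/lib/neutron/dhcp") / network_id

for name in ("addn_hosts", "host", "opts"):
    path = conf_dir / name
    if path.exists():
        entries = [l for l in path.read_text().splitlines() if l.strip()]
        print(f"{path}: {len(entries)} non-empty lines")
    else:
        print(f"{path}: missing")
```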
Dec 5 05:12:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff32a23db4107e14aefa9c14034dae606a1a0ec3eb674a5678ef2941bb24111/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:12:26 localhost podman[319884]: 2025-12-05 10:12:26.270314146 +0000 UTC m=+0.163414734 container init 149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f90b5114-36d3-42d4-a5d4-f8518886af23, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 5 05:12:26 localhost podman[319884]: 2025-12-05 10:12:26.280417265 +0000 UTC m=+0.173517853 container start 149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f90b5114-36d3-42d4-a5d4-f8518886af23, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 05:12:26 localhost dnsmasq[319904]: started, version 2.85 cachesize 150 Dec 5 05:12:26 localhost dnsmasq[319904]: DNS service limited to local subnets Dec 5 05:12:26 localhost dnsmasq[319904]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:12:26 localhost dnsmasq[319904]: warning: no upstream servers configured Dec 5 05:12:26 localhost dnsmasq-dhcp[319904]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 5 05:12:26 localhost dnsmasq[319904]: read /var/lib/neutron/dhcp/f90b5114-36d3-42d4-a5d4-f8518886af23/addn_hosts - 0 addresses Dec 5 05:12:26 localhost dnsmasq-dhcp[319904]: read /var/lib/neutron/dhcp/f90b5114-36d3-42d4-a5d4-f8518886af23/host Dec 5 05:12:26 localhost dnsmasq-dhcp[319904]: read /var/lib/neutron/dhcp/f90b5114-36d3-42d4-a5d4-f8518886af23/opts Dec 5 05:12:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:26.345 261902 INFO neutron.agent.dhcp.agent [None req-0d81356f-d2e9-4455-a214-28eb6d525448 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:25Z, description=, device_id=ac0b690a-b14f-41f4-8170-c5316ac42025, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f09cc69b-48d3-45f5-954d-3b40c37ccf6e, ip_allocation=immediate, mac_address=fa:16:3e:78:23:1c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:21Z, description=, dns_domain=, id=f90b5114-36d3-42d4-a5d4-f8518886af23, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-228533714, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, 
provider:physical_network=None, provider:segmentation_id=59346, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2422, status=ACTIVE, subnets=['e998e1e8-d1ee-4be8-ab93-f3c4d5fbd72a'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:12:23Z, vlan_transparent=None, network_id=f90b5114-36d3-42d4-a5d4-f8518886af23, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2435, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:12:25Z on network f90b5114-36d3-42d4-a5d4-f8518886af23
Dec 5 05:12:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:26.411 261902 INFO neutron.agent.dhcp.agent [None req-a1e54083-9f41-447f-ad46-2103fd6d013f - - - - - -] DHCP configuration for ports {'d3eb6953-d7d9-47dc-90f7-ed6d7f167b73'} is completed
Dec 5 05:12:26 localhost dnsmasq[319904]: read /var/lib/neutron/dhcp/f90b5114-36d3-42d4-a5d4-f8518886af23/addn_hosts - 1 addresses
Dec 5 05:12:26 localhost dnsmasq-dhcp[319904]: read /var/lib/neutron/dhcp/f90b5114-36d3-42d4-a5d4-f8518886af23/host
Dec 5 05:12:26 localhost dnsmasq-dhcp[319904]: read /var/lib/neutron/dhcp/f90b5114-36d3-42d4-a5d4-f8518886af23/opts
Dec 5 05:12:26 localhost podman[319923]: 2025-12-05 10:12:26.481007397 +0000 UTC m=+0.030419412 container kill 149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f90b5114-36d3-42d4-a5d4-f8518886af23, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 5 05:12:26 localhost nova_compute[280228]: 2025-12-05 10:12:26.503 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:12:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:26.660 261902 INFO neutron.agent.dhcp.agent [None req-e9117b32-b3d3-4bbc-84c0-6b0c46fdcdf7 - - - - - -] DHCP configuration for ports {'f09cc69b-48d3-45f5-954d-3b40c37ccf6e'} is completed
Dec 5 05:12:27 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:27.115 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:25Z, description=, device_id=ac0b690a-b14f-41f4-8170-c5316ac42025, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f09cc69b-48d3-45f5-954d-3b40c37ccf6e, ip_allocation=immediate, mac_address=fa:16:3e:78:23:1c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:21Z, description=, dns_domain=, id=f90b5114-36d3-42d4-a5d4-f8518886af23, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-228533714, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59346, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2422, status=ACTIVE, subnets=['e998e1e8-d1ee-4be8-ab93-f3c4d5fbd72a'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:12:23Z, vlan_transparent=None, network_id=f90b5114-36d3-42d4-a5d4-f8518886af23, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2435, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:12:25Z on network f90b5114-36d3-42d4-a5d4-f8518886af23
Dec 5 05:12:27 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:27.118 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:12:27 localhost nova_compute[280228]: 2025-12-05 10:12:27.119 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:27 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:27.120 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 5 05:12:27 localhost openstack_network_exporter[241668]: ERROR 10:12:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:12:27 localhost openstack_network_exporter[241668]: ERROR 10:12:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:12:27 localhost openstack_network_exporter[241668]: ERROR 10:12:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:12:27 localhost openstack_network_exporter[241668]: ERROR 10:12:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:12:27 localhost openstack_network_exporter[241668]:
Dec 5 05:12:27 localhost openstack_network_exporter[241668]: ERROR 10:12:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:12:27 localhost openstack_network_exporter[241668]:
Dec 5 05:12:27 localhost dnsmasq[319904]: read /var/lib/neutron/dhcp/f90b5114-36d3-42d4-a5d4-f8518886af23/addn_hosts - 1 addresses
Dec 5 05:12:27 localhost dnsmasq-dhcp[319904]: read /var/lib/neutron/dhcp/f90b5114-36d3-42d4-a5d4-f8518886af23/host
Dec 5 05:12:27 localhost podman[319963]: 2025-12-05 10:12:27.319126869 +0000 UTC m=+0.053101967 container kill 149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f90b5114-36d3-42d4-a5d4-f8518886af23, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 5 05:12:27 localhost dnsmasq-dhcp[319904]: read /var/lib/neutron/dhcp/f90b5114-36d3-42d4-a5d4-f8518886af23/opts
Dec 5 05:12:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 50 KiB/s wr, 46 op/s
Dec 5 05:12:27 localhost nova_compute[280228]: 2025-12-05 10:12:27.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:12:27 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:27.676 261902 INFO neutron.agent.dhcp.agent [None req-cff777aa-a379-4666-b676-c01faa22c83e - - - - - -] DHCP configuration for ports {'f09cc69b-48d3-45f5-954d-3b40c37ccf6e'} is completed
Dec 5 05:12:28 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:12:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, vol_name:cephfs) < ""
Dec 5 05:12:28 localhost nova_compute[280228]: 2025-12-05 10:12:28.282 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:28 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/183dfc32-49d7-4c92-9c61-4b9f674605ac/.meta.tmp'
Dec 5 05:12:28 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/183dfc32-49d7-4c92-9c61-4b9f674605ac/.meta.tmp' to config b'/volumes/_nogroup/183dfc32-49d7-4c92-9c61-4b9f674605ac/.meta'
Dec 5 05:12:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, vol_name:cephfs) < ""
Dec 5 05:12:28 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "format": "json"}]: dispatch
Dec 5 05:12:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, vol_name:cephfs) < ""
Dec 5 05:12:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:12:28 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:12:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, vol_name:cephfs) < ""
Dec 5 05:12:28 localhost nova_compute[280228]: 2025-12-05 10:12:28.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:12:28 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:28.812 2 INFO neutron.agent.securitygroups_rpc [None req-f95cf61e-ac32-455c-9429-d352c925f648 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 5 05:12:29 localhost dnsmasq[319904]: read /var/lib/neutron/dhcp/f90b5114-36d3-42d4-a5d4-f8518886af23/addn_hosts - 0 addresses
Dec 5 05:12:29 localhost dnsmasq-dhcp[319904]: read /var/lib/neutron/dhcp/f90b5114-36d3-42d4-a5d4-f8518886af23/host
Dec 5 05:12:29 localhost dnsmasq-dhcp[319904]: read /var/lib/neutron/dhcp/f90b5114-36d3-42d4-a5d4-f8518886af23/opts
Dec 5 05:12:29 localhost podman[320001]: 2025-12-05 10:12:29.148349206 +0000 UTC m=+0.058510123 container kill 149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f90b5114-36d3-42d4-a5d4-f8518886af23, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 5 05:12:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 47 KiB/s wr, 43 op/s
Dec 5 05:12:29 localhost ovn_controller[153000]: 2025-12-05T10:12:29Z|00315|binding|INFO|Releasing lport 56fe1e4a-4e6b-47aa-b78f-5610450884d1 from this chassis (sb_readonly=0)
Dec 5 05:12:29 localhost ovn_controller[153000]: 2025-12-05T10:12:29Z|00316|binding|INFO|Setting lport 56fe1e4a-4e6b-47aa-b78f-5610450884d1 down in Southbound
Dec 5 05:12:29 localhost kernel: device tap56fe1e4a-4e left promiscuous mode
Dec 5 05:12:29 localhost nova_compute[280228]: 2025-12-05 10:12:29.334 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:29.341 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-f90b5114-36d3-42d4-a5d4-f8518886af23', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f90b5114-36d3-42d4-a5d4-f8518886af23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f551b37-84d0-4fe1-872d-7d48d9fc152d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=56fe1e4a-4e6b-47aa-b78f-5610450884d1) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:12:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:29.343 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 56fe1e4a-4e6b-47aa-b78f-5610450884d1 in datapath f90b5114-36d3-42d4-a5d4-f8518886af23 unbound from our chassis
Dec 5 05:12:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:29.345 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f90b5114-36d3-42d4-a5d4-f8518886af23 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:12:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:29.346 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[d538660c-cd69-44b2-85de-a6a1280384c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:12:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e165 do_prune osdmap full prune enabled
Dec 5 05:12:29 localhost nova_compute[280228]: 2025-12-05 10:12:29.398 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e166 e166: 6 total, 6 up, 6 in
Dec 5 05:12:29 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e166: 6 total, 6 up, 6 in
Dec 5 05:12:29 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:29.677 2 INFO neutron.agent.securitygroups_rpc [None req-0ed3eef7-102a-4eda-8931-e0e743290eeb 81e951dffd754a2fa243820acddeb12f 0d6c9392a40c4bcc824eba6a30de937f - - default default] Security group member updated ['a21b1b69-d5db-4b48-946c-2c5a22eb225c']
Dec 5 05:12:29 localhost nova_compute[280228]: 2025-12-05 10:12:29.716 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e166 do_prune osdmap full prune enabled
Dec 5 05:12:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e167 e167: 6 total, 6 up, 6 in
Dec 5 05:12:30 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e167: 6 total, 6 up, 6 in
Dec 5 05:12:30 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:30.494 2 INFO neutron.agent.securitygroups_rpc [None req-f71c19cb-1c49-4f2f-a17f-755852cba13d 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['12716e84-7d0a-49ef-b4ae-c42660f35fe6']
Dec 5 05:12:30 localhost dnsmasq[319904]: exiting on receipt of SIGTERM
Dec 5 05:12:30 localhost podman[320039]: 2025-12-05 10:12:30.594119243 +0000 UTC m=+0.064531417 container kill 149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f90b5114-36d3-42d4-a5d4-f8518886af23, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 5 05:12:30 localhost systemd[1]: libpod-149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038.scope: Deactivated successfully.
Dec 5 05:12:30 localhost podman[320051]: 2025-12-05 10:12:30.673302747 +0000 UTC m=+0.064101484 container died 149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f90b5114-36d3-42d4-a5d4-f8518886af23, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 05:12:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038-userdata-shm.mount: Deactivated successfully.
Dec 5 05:12:30 localhost podman[320051]: 2025-12-05 10:12:30.710182316 +0000 UTC m=+0.100980983 container cleanup 149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f90b5114-36d3-42d4-a5d4-f8518886af23, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:12:30 localhost systemd[1]: libpod-conmon-149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038.scope: Deactivated successfully.
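Note on the sequence above: each time port data for network f90b5114-36d3-42d4-a5d4-f8518886af23 changes, the DHCP agent rewrites the addn_hosts/host/opts files and signals the dnsmasq container so it re-reads them in place; podman records the signal as a "container kill" event even though the process keeps running, and only the final "exiting on receipt of SIGTERM" at 05:12:30 is a real teardown. A minimal sketch of those two signalling steps, assuming the container name from the log and that the agent's kill-script wrapper ultimately shells out to podman (a hypothetical helper, not the agent's actual code):

```python
import subprocess

# Container name taken from the log entries above.
CONTAINER = "neutron-dnsmasq-qdhcp-f90b5114-36d3-42d4-a5d4-f8518886af23"

def reload_dnsmasq(container: str = CONTAINER) -> None:
    """SIGHUP makes dnsmasq re-read its hosts/opts files without a
    restart; podman logs this as a 'container kill' event."""
    subprocess.run(["podman", "kill", "--signal", "SIGHUP", container],
                   check=True)

def stop_dnsmasq(container: str = CONTAINER) -> None:
    """SIGTERM is the actual teardown ('exiting on receipt of SIGTERM')."""
    subprocess.run(["podman", "kill", "--signal", "SIGTERM", container],
                   check=True)
```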
Dec 5 05:12:30 localhost podman[320060]: 2025-12-05 10:12:30.771089351 +0000 UTC m=+0.145478925 container remove 149cbaad39763e95a0d80ddd45a3aad6c6fe1ed3bb06e7190a5dfb185fb5d038 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f90b5114-36d3-42d4-a5d4-f8518886af23, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:12:30 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:30.800 261902 INFO neutron.agent.dhcp.agent [None req-aede4f2a-47dd-4dda-b49a-8c22b06992d1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:12:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 5 05:12:30 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:30.826 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:12:31 localhost ovn_controller[153000]: 2025-12-05T10:12:31Z|00317|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:12:31 localhost nova_compute[280228]: 2025-12-05 10:12:31.081 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 42 KiB/s wr, 63 op/s
Dec 5 05:12:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e167 do_prune osdmap full prune enabled
Dec 5 05:12:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e168 e168: 6 total, 6 up, 6 in
Dec 5 05:12:31 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e168: 6 total, 6 up, 6 in
Dec 5 05:12:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "auth_id": "david", "tenant_id": "a1984fed702d4461879e97dd7c6fc401", "access_level": "rw", "format": "json"}]: dispatch
Dec 5 05:12:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, tenant_id:a1984fed702d4461879e97dd7c6fc401, vol_name:cephfs) < ""
Dec 5 05:12:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec 5 05:12:31 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 5 05:12:31 localhost ceph-mgr[286454]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Dec 5 05:12:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, tenant_id:a1984fed702d4461879e97dd7c6fc401, vol_name:cephfs) < ""
Dec 5 05:12:31 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:12:31.580+0000 7f996f03a640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Dec 5 05:12:31 localhost ceph-mgr[286454]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Dec 5 05:12:31 localhost systemd[1]: var-lib-containers-storage-overlay-aff32a23db4107e14aefa9c14034dae606a1a0ec3eb674a5678ef2941bb24111-merged.mount: Deactivated successfully.
Dec 5 05:12:31 localhost systemd[1]: run-netns-qdhcp\x2df90b5114\x2d36d3\x2d42d4\x2da5d4\x2df8518886af23.mount: Deactivated successfully.
Dec 5 05:12:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e168 do_prune osdmap full prune enabled
Dec 5 05:12:32 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 5 05:12:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e169 e169: 6 total, 6 up, 6 in
Dec 5 05:12:32 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e169: 6 total, 6 up, 6 in
Dec 5 05:12:33 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:33.122 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 5 05:12:33 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:33.229 2 INFO neutron.agent.securitygroups_rpc [None req-0beb9e78-8188-4d82-8b36-33bf8ed2e36d 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['12716e84-7d0a-49ef-b4ae-c42660f35fe6', 'd55adec3-95c0-449e-a33d-049a875e32be']
Dec 5 05:12:33 localhost nova_compute[280228]: 2025-12-05 10:12:33.284 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 23 KiB/s wr, 89 op/s
Dec 5 05:12:34 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:34.337 2 INFO neutron.agent.securitygroups_rpc [None req-78d46b76-9a12-401f-a336-42fa87108c51 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['d55adec3-95c0-449e-a33d-049a875e32be']
Dec 5 05:12:34 localhost nova_compute[280228]: 2025-12-05 10:12:34.719 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:12:35 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d8bc5d0f-8805-4ca6-8ce8-76816531c4d6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d8bc5d0f-8805-4ca6-8ce8-76816531c4d6, vol_name:cephfs) < ""
Dec 5 05:12:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:12:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d8bc5d0f-8805-4ca6-8ce8-76816531c4d6/.meta.tmp'
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d8bc5d0f-8805-4ca6-8ce8-76816531c4d6/.meta.tmp' to config b'/volumes/_nogroup/d8bc5d0f-8805-4ca6-8ce8-76816531c4d6/.meta'
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d8bc5d0f-8805-4ca6-8ce8-76816531c4d6, vol_name:cephfs) < ""
Dec 5 05:12:35 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d8bc5d0f-8805-4ca6-8ce8-76816531c4d6", "format": "json"}]: dispatch
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d8bc5d0f-8805-4ca6-8ce8-76816531c4d6, vol_name:cephfs) < ""
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d8bc5d0f-8805-4ca6-8ce8-76816531c4d6, vol_name:cephfs) < ""
Dec 5 05:12:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:12:35 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:12:35 localhost podman[320084]: 2025-12-05 10:12:35.21388806 +0000 UTC m=+0.082471166 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 5 05:12:35 localhost podman[320084]: 2025-12-05 10:12:35.224556347 +0000 UTC m=+0.093139463 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container)
Dec 5 05:12:35 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "auth_id": "david", "format": "json"}]: dispatch
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, vol_name:cephfs) < ""
Dec 5 05:12:35 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '183dfc32-49d7-4c92-9c61-4b9f674605ac'
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, vol_name:cephfs) < ""
Dec 5 05:12:35 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "auth_id": "david", "format": "json"}]: dispatch
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, vol_name:cephfs) < ""
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/183dfc32-49d7-4c92-9c61-4b9f674605ac/50784130-a6ed-458b-a113-5ad377ba5a4b
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 5 05:12:35 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, vol_name:cephfs) < ""
Dec 5 05:12:35 localhost podman[320083]: 2025-12-05 10:12:35.274893858 +0000 UTC m=+0.149458188 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 5 05:12:35 localhost podman[320083]: 2025-12-05 10:12:35.28866704 +0000 UTC m=+0.163231390 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 5 05:12:35 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
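Note on the ceph-mgr audit trail above: entity='client.openstack' (the Manila CephFS driver) walks a subvolume through create, getpath, and authorize; the authorize step is rejected with "auth ID: david is already in use" because that auth ID is bound to a different tenant, after which the driver cleans up with deauthorize and evict. A sketch of the same sequence through the ceph CLI, using the names, size, and mode from the log; the flag spellings mirror the mgr parameters shown above and an admin keyring is assumed, so treat it as illustrative rather than the driver's actual code:

```python
import subprocess

VOL = "cephfs"
SUB = "183dfc32-49d7-4c92-9c61-4b9f674605ac"  # sub_name from the log

def ceph(*args: str) -> str:
    """Run one ceph CLI command with JSON output and return stdout."""
    return subprocess.run(["ceph", *args, "--format", "json"],
                          check=True, capture_output=True, text=True).stdout

ceph("fs", "subvolume", "create", VOL, SUB,
     "--size", "1073741824", "--namespace-isolated", "--mode", "0755")
path = ceph("fs", "subvolume", "getpath", VOL, SUB).strip()

# 'authorize' fails (EPERM) when the auth ID already belongs to another
# tenant -- the error logged at 05:12:31 above.
try:
    ceph("fs", "subvolume", "authorize", VOL, SUB, "david",
         "--tenant_id", "a1984fed702d4461879e97dd7c6fc401",
         "--access_level", "rw")
except subprocess.CalledProcessError:
    # Cleanup mirrors the 05:12:35 entries: drop any grant, then evict
    # clients still mounted under the subvolume path.
    ceph("fs", "subvolume", "deauthorize", VOL, SUB, "david")
    ceph("fs", "subvolume", "evict", VOL, SUB, "david")
```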
Dec 5 05:12:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 15 KiB/s wr, 60 op/s Dec 5 05:12:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:12:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e169 do_prune osdmap full prune enabled Dec 5 05:12:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e170 e170: 6 total, 6 up, 6 in Dec 5 05:12:35 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e170: 6 total, 6 up, 6 in Dec 5 05:12:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 194 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 16 KiB/s wr, 40 op/s Dec 5 05:12:37 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e170 do_prune osdmap full prune enabled Dec 5 05:12:37 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e171 e171: 6 total, 6 up, 6 in Dec 5 05:12:37 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e171: 6 total, 6 up, 6 in Dec 5 05:12:38 localhost nova_compute[280228]: 2025-12-05 10:12:38.290 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "auth_id": "david", "format": "json"}]: dispatch Dec 5 05:12:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, vol_name:cephfs) < "" Dec 5 05:12:38 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Dec 5 05:12:38 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 5 05:12:38 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) Dec 5 05:12:38 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Dec 5 05:12:38 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:38.443 2 INFO neutron.agent.securitygroups_rpc [None req-204baa73-ce5c-45b4-97d9-f3e220eb0f4a 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['72125fb1-732c-46f4-bbef-3f4bc55bdbb5']#033[00m Dec 5 05:12:38 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Dec 5 05:12:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, vol_name:cephfs) < "" Dec 5 
05:12:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "auth_id": "david", "format": "json"}]: dispatch Dec 5 05:12:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, vol_name:cephfs) < "" Dec 5 05:12:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0/8c0406bc-b10e-4b16-b5be-dfcb9c87425a Dec 5 05:12:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:12:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, vol_name:cephfs) < "" Dec 5 05:12:38 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 5 05:12:38 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Dec 5 05:12:38 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Dec 5 05:12:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 194 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 14 KiB/s wr, 33 op/s Dec 5 05:12:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e171 do_prune osdmap full prune enabled Dec 5 05:12:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e172 e172: 6 total, 6 up, 6 in Dec 5 05:12:39 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e172: 6 total, 6 up, 6 in Dec 5 05:12:39 localhost nova_compute[280228]: 2025-12-05 10:12:39.721 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:12:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 194 MiB data, 948 MiB used, 41 GiB / 42 GiB avail; 106 KiB/s rd, 45 KiB/s wr, 149 op/s Dec 5 05:12:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e172 do_prune osdmap full prune enabled Dec 5 05:12:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e173 e173: 6 total, 6 up, 6 in Dec 5 05:12:41 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e173: 6 total, 6 up, 6 in Dec 5 05:12:42 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:42.870 2 INFO neutron.agent.securitygroups_rpc [None req-bf47f0b2-e468-4ec8-b957-b0b2af4ee729 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['6d484cfc-d88e-489c-af08-dd8717f1f0ef', '72125fb1-732c-46f4-bbef-3f4bc55bdbb5', 'a44de420-0955-49a8-bcc6-65991cbbb4d6']#033[00m Dec 5 
05:12:43 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "format": "json"}]: dispatch Dec 5 05:12:43 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:12:43 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:12:43 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:12:43.032+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '183dfc32-49d7-4c92-9c61-4b9f674605ac' of type subvolume Dec 5 05:12:43 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '183dfc32-49d7-4c92-9c61-4b9f674605ac' of type subvolume Dec 5 05:12:43 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "183dfc32-49d7-4c92-9c61-4b9f674605ac", "force": true, "format": "json"}]: dispatch Dec 5 05:12:43 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, vol_name:cephfs) < "" Dec 5 05:12:43 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/183dfc32-49d7-4c92-9c61-4b9f674605ac'' moved to trashcan Dec 5 05:12:43 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:12:43 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:183dfc32-49d7-4c92-9c61-4b9f674605ac, vol_name:cephfs) < "" Dec 5 05:12:43 localhost nova_compute[280228]: 2025-12-05 10:12:43.291 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v414: 177 pgs: 177 active+clean; 194 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 29 KiB/s wr, 115 op/s Dec 5 05:12:43 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:43.447 2 INFO neutron.agent.securitygroups_rpc [None req-23ad9585-8dde-473a-8067-24fa3b53a337 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['6d484cfc-d88e-489c-af08-dd8717f1f0ef', 'a44de420-0955-49a8-bcc6-65991cbbb4d6']#033[00m Dec 5 05:12:43 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e173 do_prune osdmap full prune enabled Dec 5 05:12:43 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e174 e174: 6 total, 6 up, 6 in Dec 5 05:12:43 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e174: 6 total, 6 up, 6 in Dec 5 05:12:44 localhost nova_compute[280228]: 2025-12-05 10:12:44.723 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:45 
localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:12:45 Dec 5 05:12:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:12:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:12:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['volumes', 'images', 'vms', 'manila_data', 'manila_metadata', 'backups', '.mgr'] Dec 5 05:12:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:12:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:12:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:12:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:12:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:12:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:12:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:12:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 194 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 29 KiB/s wr, 115 op/s Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014869268216080402 of space, bias 1.0, pg target 0.2968897220477387 quantized to 32 (current 32) Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002712673611111111 quantized to 32 (current 32) Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.453674623115578e-06 of space, bias 1.0, pg target 0.00048828125 quantized to 32 (current 32) Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:12:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00017184810045598362 of space, bias 4.0, pg target 0.13679108796296297 quantized 
to 16 (current 16) Dec 5 05:12:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:12:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:12:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:12:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:12:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:12:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:12:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:12:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:12:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:12:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:12:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:12:46 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "format": "json"}]: dispatch Dec 5 05:12:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:12:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:12:46 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:12:46.292+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7ec11635-5c27-465d-8a70-06bc2f1e99f2' of type subvolume Dec 5 05:12:46 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7ec11635-5c27-465d-8a70-06bc2f1e99f2' of type subvolume Dec 5 05:12:46 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7ec11635-5c27-465d-8a70-06bc2f1e99f2", "force": true, "format": "json"}]: dispatch Dec 5 05:12:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < "" Dec 5 05:12:46 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7ec11635-5c27-465d-8a70-06bc2f1e99f2'' moved to trashcan Dec 5 05:12:46 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:12:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7ec11635-5c27-465d-8a70-06bc2f1e99f2, vol_name:cephfs) < "" Dec 5 05:12:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd 
e174 do_prune osdmap full prune enabled Dec 5 05:12:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e175 e175: 6 total, 6 up, 6 in Dec 5 05:12:46 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e175: 6 total, 6 up, 6 in Dec 5 05:12:46 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:46.997 2 INFO neutron.agent.securitygroups_rpc [None req-3ec9d7a5-d1c6-4e3e-bf7f-f71f0310d08a 14a89074968e40cbb69c8c73a9492d34 66efd68a4ed34b1a976a072e82fd9b38 - - default default] Security group member updated ['4fcc1bd9-e5ed-4327-9c0c-7b6f78b91f24']#033[00m Dec 5 05:12:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:12:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:12:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:12:47 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d8bc5d0f-8805-4ca6-8ce8-76816531c4d6", "format": "json"}]: dispatch Dec 5 05:12:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d8bc5d0f-8805-4ca6-8ce8-76816531c4d6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:12:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d8bc5d0f-8805-4ca6-8ce8-76816531c4d6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:12:47 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:12:47.152+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd8bc5d0f-8805-4ca6-8ce8-76816531c4d6' of type subvolume Dec 5 05:12:47 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd8bc5d0f-8805-4ca6-8ce8-76816531c4d6' of type subvolume Dec 5 05:12:47 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d8bc5d0f-8805-4ca6-8ce8-76816531c4d6", "force": true, "format": "json"}]: dispatch Dec 5 05:12:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d8bc5d0f-8805-4ca6-8ce8-76816531c4d6, vol_name:cephfs) < "" Dec 5 05:12:47 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d8bc5d0f-8805-4ca6-8ce8-76816531c4d6'' moved to trashcan Dec 5 05:12:47 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:12:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d8bc5d0f-8805-4ca6-8ce8-76816531c4d6, vol_name:cephfs) < "" Dec 5 05:12:47 localhost podman[320125]: 2025-12-05 10:12:47.218059832 +0000 UTC m=+0.098891219 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, 
name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:12:47 localhost podman[320125]: 2025-12-05 10:12:47.231653888 +0000 UTC m=+0.112485265 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:12:47 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 05:12:47 localhost systemd[1]: tmp-crun.rcg6JW.mount: Deactivated successfully. 
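
The audit entries above trace the CephFS share-deletion path driven by client.openstack: an "fs clone status" probe that returns EOPNOTSUPP (95) because the target is a plain subvolume rather than a clone, then "fs subvolume rm ... --force", which moves the subvolume to the trashcan for the async purge job ("queuing job for volume 'cephfs'"). The doubled quoting in 'b'/volumes/_nogroup/...'' is the mgr volumes module logging a Python bytes path, not log corruption. A minimal sketch of the same sequence through the ceph CLI, assuming a keyring permitted to use the mgr volumes module:

    import subprocess

    def ceph(*args):
        # Run a ceph CLI command with JSON output; return (rc, stdout).
        proc = subprocess.run(["ceph", *args, "--format", "json"],
                              capture_output=True, text=True)
        return proc.returncode, proc.stdout

    sub = "7ec11635-5c27-465d-8a70-06bc2f1e99f2"  # subvolume name from the audit log
    rc, out = ceph("fs", "clone", "status", "cephfs", sub)
    if rc != 0:
        # rc 95 (EOPNOTSUPP) only means the subvolume was never a clone, so
        # the cleanup falls through to removal, exactly as the log shows.
        ceph("fs", "subvolume", "rm", "cephfs", sub, "--force")
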
Dec 5 05:12:47 localhost podman[320126]: 2025-12-05 10:12:47.323017285 +0000 UTC m=+0.201399637 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 05:12:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 194 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 76 KiB/s rd, 22 KiB/s wr, 102 op/s Dec 5 05:12:47 localhost podman[320126]: 2025-12-05 10:12:47.359928336 +0000 UTC m=+0.238310678 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 5 05:12:47 localhost podman[320127]: 2025-12-05 10:12:47.36265889 +0000 UTC m=+0.234388098 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:12:47 localhost podman[320127]: 2025-12-05 10:12:47.380520006 +0000 UTC m=+0.252249244 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Dec 5 05:12:47 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:12:47 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:12:48 localhost nova_compute[280228]: 2025-12-05 10:12:48.330 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:48 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:48.341 2 INFO neutron.agent.securitygroups_rpc [None req-23e8adc4-f858-48c0-a5ac-96732445e499 14a89074968e40cbb69c8c73a9492d34 66efd68a4ed34b1a976a072e82fd9b38 - - default default] Security group member updated ['4fcc1bd9-e5ed-4327-9c0c-7b6f78b91f24']#033[00m Dec 5 05:12:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 194 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 17 KiB/s wr, 79 op/s Dec 5 05:12:49 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "format": "json"}]: dispatch Dec 5 05:12:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f85fdc57-8808-499d-89b5-dab3ea53a537, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:12:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f85fdc57-8808-499d-89b5-dab3ea53a537, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:12:49 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:12:49.609+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f85fdc57-8808-499d-89b5-dab3ea53a537' of type subvolume Dec 5 05:12:49 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f85fdc57-8808-499d-89b5-dab3ea53a537' of type subvolume Dec 5 05:12:49 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f85fdc57-8808-499d-89b5-dab3ea53a537", "force": true, "format": "json"}]: dispatch Dec 5 05:12:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f85fdc57-8808-499d-89b5-dab3ea53a537, vol_name:cephfs) < "" Dec 5 05:12:49 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/f85fdc57-8808-499d-89b5-dab3ea53a537'' moved to trashcan Dec 5 05:12:49 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:12:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f85fdc57-8808-499d-89b5-dab3ea53a537, vol_name:cephfs) < "" Dec 5 05:12:49 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:49.718 2 INFO neutron.agent.securitygroups_rpc [None req-31943e55-a76e-4108-a6e8-23e89683ccac 14a89074968e40cbb69c8c73a9492d34 66efd68a4ed34b1a976a072e82fd9b38 - - default default] Security group member updated ['4fcc1bd9-e5ed-4327-9c0c-7b6f78b91f24']#033[00m Dec 5 05:12:49 localhost nova_compute[280228]: 2025-12-05 10:12:49.725 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:49 localhost podman[239519]: time="2025-12-05T10:12:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:12:49 localhost podman[239519]: @ - - [05/Dec/2025:10:12:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:12:49 localhost podman[239519]: @ - - [05/Dec/2025:10:12:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19274 "" "Go-http-client/1.1" Dec 5 05:12:50 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:50.020 2 INFO neutron.agent.securitygroups_rpc [None req-6d62a797-239e-4719-a132-43bb7e7d705c 6261facab26b4ea2801a990bb51e1745 dba761eb9482439aa79c2d9ffe5c0dfa - - default default] Security group member updated ['9c5d500d-a686-46a9-8ad0-737ee529f53d']#033[00m Dec 5 05:12:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 5 05:12:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e175 do_prune osdmap full prune enabled Dec 5 05:12:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e176 e176: 6 total, 6 up, 6 in Dec 5 05:12:50 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e176: 6 total, 6 up, 6 in Dec 5 05:12:51 localhost neutron_sriov_agent[254996]: 2025-12-05 10:12:51.090 2 INFO neutron.agent.securitygroups_rpc [None req-fc354d76-bc0c-445a-824b-9fa964ac8ee3 14a89074968e40cbb69c8c73a9492d34 66efd68a4ed34b1a976a072e82fd9b38 - - default default] Security group member updated ['4fcc1bd9-e5ed-4327-9c0c-7b6f78b91f24']#033[00m Dec 5 05:12:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v421: 177 pgs: 177 active+clean; 194 MiB data, 957 MiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 48 KiB/s wr, 118 op/s Dec 5 05:12:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e176 do_prune osdmap full prune enabled Dec 5 05:12:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e177 e177: 6 total, 6 up, 6 in Dec 5 05:12:51 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e177: 6 total, 6 up, 6 in Dec 5 05:12:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e177 do_prune osdmap full prune enabled Dec 5 05:12:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e178 e178: 6 total, 6 up, 6 in Dec 5 05:12:52 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e178: 6 total, 6 up, 6 in 
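
Interleaved with the mgr traffic, ceph-mon bumps the osdmap epoch ("do_prune osdmap full prune enabled", e175 through e178) and summarizes each map as "osdmap eN: 6 total, 6 up, 6 in". A small parser for those summaries, grounded only in the line format visible above, makes it easy to confirm the churn is routine pruning rather than OSDs flapping:

    import re

    # Matches the cluster-log summaries emitted above, e.g.
    #   "osdmap e178: 6 total, 6 up, 6 in"
    OSDMAP_RE = re.compile(
        r"osdmap e(?P<epoch>\d+): (?P<total>\d+) total, "
        r"(?P<up>\d+) up, (?P<in_>\d+) in")

    def osdmap_events(lines):
        for line in lines:
            m = OSDMAP_RE.search(line)
            if m:
                yield {k: int(v) for k, v in m.groupdict().items()}

    # Any event with up < total or in_ < total flags degraded OSDs; in this
    # capture every epoch reports 6/6/6, so the epoch churn is benign.
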
Dec 5 05:12:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "auth_id": "admin", "format": "json"}]: dispatch Dec 5 05:12:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, vol_name:cephfs) < "" Dec 5 05:12:52 localhost ceph-mgr[286454]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist Dec 5 05:12:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, vol_name:cephfs) < "" Dec 5 05:12:52 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:12:52.948+0000 7f996f03a640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist Dec 5 05:12:52 localhost ceph-mgr[286454]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist Dec 5 05:12:53 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "format": "json"}]: dispatch Dec 5 05:12:53 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:12:53 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:12:53 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:12:53.045+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9036423c-a4fb-4bd9-97cc-8e58d185d4d0' of type subvolume Dec 5 05:12:53 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9036423c-a4fb-4bd9-97cc-8e58d185d4d0' of type subvolume Dec 5 05:12:53 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9036423c-a4fb-4bd9-97cc-8e58d185d4d0", "force": true, "format": "json"}]: dispatch Dec 5 05:12:53 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, vol_name:cephfs) < "" Dec 5 05:12:53 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9036423c-a4fb-4bd9-97cc-8e58d185d4d0'' moved to trashcan Dec 5 05:12:53 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:12:53 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9036423c-a4fb-4bd9-97cc-8e58d185d4d0, vol_name:cephfs) < "" Dec 5 05:12:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] 
: pgmap v424: 177 pgs: 177 active+clean; 194 MiB data, 958 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 38 KiB/s wr, 51 op/s Dec 5 05:12:53 localhost nova_compute[280228]: 2025-12-05 10:12:53.359 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:53 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:53.847 261902 INFO neutron.agent.linux.ip_lib [None req-5ce6dd29-bdf7-486d-8e13-86f1b49bb77e - - - - - -] Device tap30c52cca-3e cannot be used as it has no MAC address#033[00m Dec 5 05:12:53 localhost nova_compute[280228]: 2025-12-05 10:12:53.875 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:53 localhost kernel: device tap30c52cca-3e entered promiscuous mode Dec 5 05:12:53 localhost nova_compute[280228]: 2025-12-05 10:12:53.880 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:53 localhost NetworkManager[5960]: [1764929573.8815] manager: (tap30c52cca-3e): new Generic device (/org/freedesktop/NetworkManager/Devices/53) Dec 5 05:12:53 localhost ovn_controller[153000]: 2025-12-05T10:12:53Z|00318|binding|INFO|Claiming lport 30c52cca-3eb1-429c-b9d7-c0d9ea12afab for this chassis. Dec 5 05:12:53 localhost ovn_controller[153000]: 2025-12-05T10:12:53Z|00319|binding|INFO|30c52cca-3eb1-429c-b9d7-c0d9ea12afab: Claiming unknown Dec 5 05:12:53 localhost systemd-udevd[320196]: Network interface NamePolicy= disabled on kernel command line. Dec 5 05:12:53 localhost journal[228791]: ethtool ioctl error on tap30c52cca-3e: No such device Dec 5 05:12:53 localhost ovn_controller[153000]: 2025-12-05T10:12:53Z|00320|binding|INFO|Setting lport 30c52cca-3eb1-429c-b9d7-c0d9ea12afab ovn-installed in OVS Dec 5 05:12:53 localhost nova_compute[280228]: 2025-12-05 10:12:53.912 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:53 localhost journal[228791]: ethtool ioctl error on tap30c52cca-3e: No such device Dec 5 05:12:53 localhost journal[228791]: ethtool ioctl error on tap30c52cca-3e: No such device Dec 5 05:12:53 localhost journal[228791]: ethtool ioctl error on tap30c52cca-3e: No such device Dec 5 05:12:53 localhost journal[228791]: ethtool ioctl error on tap30c52cca-3e: No such device Dec 5 05:12:53 localhost journal[228791]: ethtool ioctl error on tap30c52cca-3e: No such device Dec 5 05:12:53 localhost journal[228791]: ethtool ioctl error on tap30c52cca-3e: No such device Dec 5 05:12:53 localhost journal[228791]: ethtool ioctl error on tap30c52cca-3e: No such device Dec 5 05:12:53 localhost nova_compute[280228]: 2025-12-05 10:12:53.953 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:53 localhost nova_compute[280228]: 2025-12-05 10:12:53.975 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:54.517 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-83d53f2e-0ab1-46e4-96eb-f037249fb0f6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83d53f2e-0ab1-46e4-96eb-f037249fb0f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3eaffe03-865a-40fb-8965-3f8439694159, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=30c52cca-3eb1-429c-b9d7-c0d9ea12afab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:12:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:54.520 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 30c52cca-3eb1-429c-b9d7-c0d9ea12afab in datapath 83d53f2e-0ab1-46e4-96eb-f037249fb0f6 bound to our chassis#033[00m Dec 5 05:12:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:54.522 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 83d53f2e-0ab1-46e4-96eb-f037249fb0f6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:12:54 localhost ovn_metadata_agent[158815]: 2025-12-05 10:12:54.523 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9faa7c-d188-44f6-8f01-04a281492bd3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:12:54 localhost ovn_controller[153000]: 2025-12-05T10:12:54Z|00321|binding|INFO|Setting lport 30c52cca-3eb1-429c-b9d7-c0d9ea12afab up in Southbound Dec 5 05:12:54 localhost nova_compute[280228]: 2025-12-05 10:12:54.726 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:54 localhost podman[320268]: Dec 5 05:12:54 localhost podman[320268]: 2025-12-05 10:12:54.769585614 +0000 UTC m=+0.094186065 container create 4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83d53f2e-0ab1-46e4-96eb-f037249fb0f6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:12:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:12:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
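
The tap30c52cca-3e sequence above is a standard OVN port plug on this chassis: the kernel tap device appears in promiscuous mode, ovn-controller claims the lport, sets ovn-installed on the OVS interface, and finally marks the Port_Binding up in the southbound database, at which point the metadata agent sees the chassis change and decides whether to provision a namespace (here it tears down instead, since the DHCP port has no MAC or IPs yet). The transient "ethtool ioctl" errors come from querying the tap before it is fully configured and are typically harmless. One way to inspect the resulting binding, a sketch assuming ovn-sbctl can reach the southbound DB from this host (as it can inside the ovn_controller container):

    import json
    import subprocess

    def port_binding(lport):
        # Fetch one Port_Binding row from the OVN southbound DB.
        out = subprocess.run(
            ["ovn-sbctl", "--format=json", "find", "Port_Binding",
             "logical_port=" + lport],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    binding = port_binding("30c52cca-3eb1-429c-b9d7-c0d9ea12afab")
    # Once the claim sequence finishes, the row shows this chassis in
    # "chassis" and up=[true], matching the Southbound update logged above.
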
Dec 5 05:12:54 localhost podman[320268]: 2025-12-05 10:12:54.711934308 +0000 UTC m=+0.036534809 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:12:54 localhost systemd[1]: Started libpod-conmon-4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1.scope. Dec 5 05:12:54 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1c9dbba2-5de3-4dd4-833c-092810bc5276", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:12:54 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1c9dbba2-5de3-4dd4-833c-092810bc5276, vol_name:cephfs) < "" Dec 5 05:12:54 localhost systemd[1]: Started libcrun container. Dec 5 05:12:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5000aeb28f302d7fab8313f0cbb340a1a8057c6efc78afef3ecb78b8b0d71129/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:12:54 localhost podman[320268]: 2025-12-05 10:12:54.857466914 +0000 UTC m=+0.182067375 container init 4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83d53f2e-0ab1-46e4-96eb-f037249fb0f6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:12:54 localhost dnsmasq[320308]: started, version 2.85 cachesize 150 Dec 5 05:12:54 localhost dnsmasq[320308]: DNS service limited to local subnets Dec 5 05:12:54 localhost dnsmasq[320308]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:12:54 localhost dnsmasq[320308]: warning: no upstream servers configured Dec 5 05:12:54 localhost dnsmasq-dhcp[320308]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 5 05:12:54 localhost dnsmasq[320308]: read /var/lib/neutron/dhcp/83d53f2e-0ab1-46e4-96eb-f037249fb0f6/addn_hosts - 0 addresses Dec 5 05:12:54 localhost dnsmasq-dhcp[320308]: read /var/lib/neutron/dhcp/83d53f2e-0ab1-46e4-96eb-f037249fb0f6/host Dec 5 05:12:54 localhost dnsmasq-dhcp[320308]: read /var/lib/neutron/dhcp/83d53f2e-0ab1-46e4-96eb-f037249fb0f6/opts Dec 5 05:12:54 localhost podman[320282]: 2025-12-05 10:12:54.907020372 +0000 UTC m=+0.090736980 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:12:54 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1c9dbba2-5de3-4dd4-833c-092810bc5276/.meta.tmp' Dec 5 05:12:54 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1c9dbba2-5de3-4dd4-833c-092810bc5276/.meta.tmp' to config b'/volumes/_nogroup/1c9dbba2-5de3-4dd4-833c-092810bc5276/.meta' Dec 5 05:12:54 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1c9dbba2-5de3-4dd4-833c-092810bc5276, vol_name:cephfs) < "" Dec 5 05:12:54 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1c9dbba2-5de3-4dd4-833c-092810bc5276", "format": "json"}]: dispatch Dec 5 05:12:54 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1c9dbba2-5de3-4dd4-833c-092810bc5276, vol_name:cephfs) < "" Dec 5 05:12:54 localhost podman[320268]: 2025-12-05 10:12:54.928161669 +0000 UTC m=+0.252762090 container start 4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83d53f2e-0ab1-46e4-96eb-f037249fb0f6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 5 05:12:54 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1c9dbba2-5de3-4dd4-833c-092810bc5276, vol_name:cephfs) < "" Dec 5 05:12:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:12:54 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:12:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e178 do_prune osdmap full prune enabled Dec 5 05:12:54 localhost podman[320282]: 2025-12-05 10:12:54.963699877 +0000 UTC m=+0.147416485 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS) Dec 5 05:12:54 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e179 e179: 6 total, 6 up, 6 in Dec 5 05:12:54 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e179: 6 total, 6 up, 6 in Dec 5 05:12:54 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:12:55 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:55.003 261902 INFO neutron.agent.dhcp.agent [None req-5ce6dd29-bdf7-486d-8e13-86f1b49bb77e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:53Z, description=, device_id=e99b4337-1c94-4de2-ad86-a1f72f48d701, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a1a64115-23e2-4766-b79e-05c7fa388b98, ip_allocation=immediate, mac_address=fa:16:3e:12:7a:15, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:52Z, description=, dns_domain=, id=83d53f2e-0ab1-46e4-96eb-f037249fb0f6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-477931268, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42242, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2527, status=ACTIVE, subnets=['04a26887-75dd-435a-8bed-8b4d02bac07b'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:12:53Z, vlan_transparent=None, network_id=83d53f2e-0ab1-46e4-96eb-f037249fb0f6, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2532, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:12:53Z on network 83d53f2e-0ab1-46e4-96eb-f037249fb0f6#033[00m Dec 5 05:12:55 localhost podman[320283]: 2025-12-05 10:12:55.024673604 +0000 UTC m=+0.207713360 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:12:55 localhost podman[320283]: 2025-12-05 10:12:55.06277571 +0000 UTC m=+0.245815426 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:12:55 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
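
The dnsmasq instance for network 83d53f2e-0ab1-46e4-96eb-f037249fb0f6 re-reads its addn_hosts/host/opts files whenever the DHCP agent rewrites them; the "container kill" entries just below, each followed by a fresh "read .../addn_hosts" line, are the agent delivering SIGHUP (dnsmasq's documented reload signal) into the container. A manual equivalent, using the container name from the log:

    import subprocess

    # dnsmasq reloads /var/lib/neutron/dhcp/<net>/{addn_hosts,host,opts} on
    # SIGHUP, which is what the "container kill" journal entries deliver.
    subprocess.run(
        ["podman", "kill", "--signal", "HUP",
         "neutron-dnsmasq-qdhcp-83d53f2e-0ab1-46e4-96eb-f037249fb0f6"],
        check=True)
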
Dec 5 05:12:55 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:55.138 261902 INFO neutron.agent.dhcp.agent [None req-b553d9ef-3101-41cf-821c-dd15fa792b3d - - - - - -] DHCP configuration for ports {'7cb4bcd3-685e-423e-956c-45f8ec04e683'} is completed#033[00m Dec 5 05:12:55 localhost dnsmasq[320308]: read /var/lib/neutron/dhcp/83d53f2e-0ab1-46e4-96eb-f037249fb0f6/addn_hosts - 1 addresses Dec 5 05:12:55 localhost dnsmasq-dhcp[320308]: read /var/lib/neutron/dhcp/83d53f2e-0ab1-46e4-96eb-f037249fb0f6/host Dec 5 05:12:55 localhost podman[320351]: 2025-12-05 10:12:55.192564895 +0000 UTC m=+0.059442722 container kill 4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83d53f2e-0ab1-46e4-96eb-f037249fb0f6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 05:12:55 localhost dnsmasq-dhcp[320308]: read /var/lib/neutron/dhcp/83d53f2e-0ab1-46e4-96eb-f037249fb0f6/opts Dec 5 05:12:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 177 active+clean; 194 MiB data, 958 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 51 KiB/s wr, 68 op/s Dec 5 05:12:55 localhost systemd[1]: tmp-crun.vOEFlI.mount: Deactivated successfully. Dec 5 05:12:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:12:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e179 do_prune osdmap full prune enabled Dec 5 05:12:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e180 e180: 6 total, 6 up, 6 in Dec 5 05:12:55 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e180: 6 total, 6 up, 6 in Dec 5 05:12:57 localhost openstack_network_exporter[241668]: ERROR 10:12:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:12:57 localhost openstack_network_exporter[241668]: ERROR 10:12:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:12:57 localhost openstack_network_exporter[241668]: ERROR 10:12:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:12:57 localhost openstack_network_exporter[241668]: ERROR 10:12:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:12:57 localhost openstack_network_exporter[241668]: Dec 5 05:12:57 localhost openstack_network_exporter[241668]: ERROR 10:12:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:12:57 localhost openstack_network_exporter[241668]: Dec 5 05:12:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 177 active+clean; 194 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 30 KiB/s wr, 70 op/s Dec 5 05:12:57 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:57.555 261902 INFO neutron.agent.dhcp.agent [None req-f08fce5c-3b46-401d-b8fd-9d24032f4def - - - - - -] DHCP configuration for ports {'a1a64115-23e2-4766-b79e-05c7fa388b98'} is completed#033[00m Dec 5 05:12:58 localhost 
neutron_dhcp_agent[261898]: 2025-12-05 10:12:58.332 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:12:53Z, description=, device_id=e99b4337-1c94-4de2-ad86-a1f72f48d701, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a1a64115-23e2-4766-b79e-05c7fa388b98, ip_allocation=immediate, mac_address=fa:16:3e:12:7a:15, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:52Z, description=, dns_domain=, id=83d53f2e-0ab1-46e4-96eb-f037249fb0f6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-477931268, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42242, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2527, status=ACTIVE, subnets=['04a26887-75dd-435a-8bed-8b4d02bac07b'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:12:53Z, vlan_transparent=None, network_id=83d53f2e-0ab1-46e4-96eb-f037249fb0f6, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2532, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:12:53Z on network 83d53f2e-0ab1-46e4-96eb-f037249fb0f6#033[00m Dec 5 05:12:58 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e180 do_prune osdmap full prune enabled Dec 5 05:12:58 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e181 e181: 6 total, 6 up, 6 in Dec 5 05:12:58 localhost nova_compute[280228]: 2025-12-05 10:12:58.423 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:58 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e181: 6 total, 6 up, 6 in Dec 5 05:12:58 localhost dnsmasq[320308]: read /var/lib/neutron/dhcp/83d53f2e-0ab1-46e4-96eb-f037249fb0f6/addn_hosts - 1 addresses Dec 5 05:12:58 localhost podman[320390]: 2025-12-05 10:12:58.510671857 +0000 UTC m=+0.065083423 container kill 4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83d53f2e-0ab1-46e4-96eb-f037249fb0f6, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 5 05:12:58 localhost dnsmasq-dhcp[320308]: read /var/lib/neutron/dhcp/83d53f2e-0ab1-46e4-96eb-f037249fb0f6/host Dec 5 05:12:58 localhost dnsmasq-dhcp[320308]: read /var/lib/neutron/dhcp/83d53f2e-0ab1-46e4-96eb-f037249fb0f6/opts Dec 5 05:12:59 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "size": 
1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:12:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, vol_name:cephfs) < "" Dec 5 05:12:59 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:12:59.084 261902 INFO neutron.agent.dhcp.agent [None req-0573315b-68fa-4632-83a3-518a8c0456c1 - - - - - -] DHCP configuration for ports {'a1a64115-23e2-4766-b79e-05c7fa388b98'} is completed#033[00m Dec 5 05:12:59 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57dd3147-466a-4e5d-b79a-77b753d04f4b/.meta.tmp' Dec 5 05:12:59 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57dd3147-466a-4e5d-b79a-77b753d04f4b/.meta.tmp' to config b'/volumes/_nogroup/57dd3147-466a-4e5d-b79a-77b753d04f4b/.meta' Dec 5 05:12:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, vol_name:cephfs) < "" Dec 5 05:12:59 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "format": "json"}]: dispatch Dec 5 05:12:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, vol_name:cephfs) < "" Dec 5 05:12:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, vol_name:cephfs) < "" Dec 5 05:12:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:12:59 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:12:59 localhost nova_compute[280228]: 2025-12-05 10:12:59.211 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:12:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 194 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 27 KiB/s wr, 63 op/s Dec 5 05:12:59 localhost nova_compute[280228]: 2025-12-05 10:12:59.728 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:13:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e181 do_prune osdmap full prune enabled Dec 5 05:13:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e182 e182: 6 total, 6 up, 6 in Dec 5 05:13:00 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e182: 6 total, 6 up, 6 in Dec 5 05:13:01 
localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 39 KiB/s wr, 96 op/s Dec 5 05:13:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 31 KiB/s wr, 81 op/s Dec 5 05:13:03 localhost nova_compute[280228]: 2025-12-05 10:13:03.469 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:03.919 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:13:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:03.919 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:13:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:03.920 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:13:04 localhost nova_compute[280228]: 2025-12-05 10:13:04.730 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:04 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "snap_name": "c5eca2b7-83a5-4542-85a4-aa8cf97f1b78", "format": "json"}]: dispatch Dec 5 05:13:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c5eca2b7-83a5-4542-85a4-aa8cf97f1b78, sub_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, vol_name:cephfs) < "" Dec 5 05:13:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c5eca2b7-83a5-4542-85a4-aa8cf97f1b78, sub_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, vol_name:cephfs) < "" Dec 5 05:13:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 8.9 KiB/s wr, 28 op/s Dec 5 05:13:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:13:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e182 do_prune osdmap full prune enabled Dec 5 05:13:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e183 e183: 6 total, 6 up, 6 in Dec 5 05:13:05 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e183: 6 total, 6 up, 6 in Dec 5 05:13:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
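
Shares are provisioned above in a fixed sequence: "fs subvolume create" with a 1 GiB quota, an isolated RADOS namespace, and mode 0755; "fs subvolume getpath" to hand the mount path back to the caller; and later "fs subvolume snapshot create" against the new subvolume. The same three steps through the CLI, a sketch reusing the names dispatched in the audit log:

    import subprocess

    VOL, SUB = "cephfs", "57dd3147-466a-4e5d-b79a-77b753d04f4b"
    SNAP = "c5eca2b7-83a5-4542-85a4-aa8cf97f1b78"

    # 1 GiB subvolume in its own RADOS namespace, as dispatched above.
    subprocess.run(["ceph", "fs", "subvolume", "create", VOL, SUB,
                    "--size", "1073741824", "--namespace-isolated",
                    "--mode", "0755"], check=True)
    # Returns a path under /volumes/_nogroup/<sub>/ for the client to mount.
    path = subprocess.run(["ceph", "fs", "subvolume", "getpath", VOL, SUB],
                          capture_output=True, text=True, check=True).stdout
    subprocess.run(["ceph", "fs", "subvolume", "snapshot", "create",
                    VOL, SUB, SNAP], check=True)
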
Dec 5 05:13:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:13:06 localhost podman[320413]: 2025-12-05 10:13:06.20200912 +0000 UTC m=+0.092191434 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, release=1755695350, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
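
The openstack_network_exporter errors at 05:12:57 ("no control socket files found for the ovs db server / ovn-northd", "please specify an existing datapath") likely reflect the topology rather than a fault: on an EDPM compute node only ovn-controller and the local OVS daemons run, ovn-northd lives on the control plane, and with a kernel datapath there is nothing for the dpif-netdev PMD queries to report. A quick check of which daemons actually expose appctl control sockets here, assuming the host paths mounted into the exporter per the config_data above:

    import glob

    # The exporter drives daemons through their appctl control sockets; its
    # config_data maps these host directories into the container.
    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern))
    # Expect ovs-vswitchd/ovsdb-server sockets under /var/run/openvswitch and
    # an ovn-controller socket under the ovn directory; no ovn-northd socket
    # exists on a compute node, consistent with the errors above.
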
Dec 5 05:13:06 localhost podman[320413]: 2025-12-05 10:13:06.212663285 +0000 UTC m=+0.102845619 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, distribution-scope=public, name=ubi9-minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 05:13:06 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
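[annotation] Each podman event line above embeds the container's config_data label as a Python dict literal. A hypothetical extraction helper; the brace-matching scan is needed because the dict nests further braces, and the sample line here is a trimmed stand-in for the real entries above:

    # Pull the config_data dict literal out of a captured podman event line.
    import ast

    line = ("... container exec_died d2ba0be8 (image=..., "
            "config_data={'image': 'quay.io/...', 'net': 'host', "
            "'healthcheck': {'test': '/openstack/healthcheck'}}, config_id=edpm, ...)")

    start = line.index("config_data=") + len("config_data=")
    depth, end = 0, start
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:       # matching close brace of the outer dict
                end = i + 1
                break

    config = ast.literal_eval(line[start:end])
    print(config["healthcheck"]["test"])   # -> /openstack/healthcheck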
Dec 5 05:13:06 localhost podman[320412]: 2025-12-05 10:13:06.166947836 +0000 UTC m=+0.062442643 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd) Dec 5 05:13:06 localhost podman[320412]: 2025-12-05 10:13:06.302572039 +0000 UTC m=+0.198066786 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=multipathd) Dec 5 05:13:06 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:13:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 22 KiB/s wr, 29 op/s Dec 5 05:13:07 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:07.339 261902 INFO neutron.agent.linux.ip_lib [None req-98696842-ccc7-4968-b952-55303928cbb4 - - - - - -] Device tapeac32ec1-84 cannot be used as it has no MAC address#033[00m Dec 5 05:13:07 localhost nova_compute[280228]: 2025-12-05 10:13:07.365 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:07 localhost kernel: device tapeac32ec1-84 entered promiscuous mode Dec 5 05:13:07 localhost NetworkManager[5960]: [1764929587.3740] manager: (tapeac32ec1-84): new Generic device (/org/freedesktop/NetworkManager/Devices/54) Dec 5 05:13:07 localhost nova_compute[280228]: 2025-12-05 10:13:07.373 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:07 localhost ovn_controller[153000]: 2025-12-05T10:13:07Z|00322|binding|INFO|Claiming lport eac32ec1-842e-49cc-a28e-3a1959ae92a7 for this chassis. Dec 5 05:13:07 localhost ovn_controller[153000]: 2025-12-05T10:13:07Z|00323|binding|INFO|eac32ec1-842e-49cc-a28e-3a1959ae92a7: Claiming unknown Dec 5 05:13:07 localhost systemd-udevd[320462]: Network interface NamePolicy= disabled on kernel command line. Dec 5 05:13:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:07.390 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-2bfd3442-9afd-4211-8b18-aa6d09646799', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bfd3442-9afd-4211-8b18-aa6d09646799', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc2806be-6029-46c5-8fbe-8ef6f0f90cb2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=eac32ec1-842e-49cc-a28e-3a1959ae92a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:13:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:07.392 158820 INFO neutron.agent.ovn.metadata.agent [-] Port eac32ec1-842e-49cc-a28e-3a1959ae92a7 in datapath 2bfd3442-9afd-4211-8b18-aa6d09646799 bound to our chassis#033[00m Dec 5 05:13:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:07.395 158820 DEBUG 
neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2bfd3442-9afd-4211-8b18-aa6d09646799 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:13:07 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:07.396 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee3251a-19f3-4f28-8a5f-152798224d0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:13:07 localhost journal[228791]: ethtool ioctl error on tapeac32ec1-84: No such device Dec 5 05:13:07 localhost journal[228791]: ethtool ioctl error on tapeac32ec1-84: No such device Dec 5 05:13:07 localhost journal[228791]: ethtool ioctl error on tapeac32ec1-84: No such device Dec 5 05:13:07 localhost nova_compute[280228]: 2025-12-05 10:13:07.408 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:07 localhost ovn_controller[153000]: 2025-12-05T10:13:07Z|00324|binding|INFO|Setting lport eac32ec1-842e-49cc-a28e-3a1959ae92a7 ovn-installed in OVS Dec 5 05:13:07 localhost ovn_controller[153000]: 2025-12-05T10:13:07Z|00325|binding|INFO|Setting lport eac32ec1-842e-49cc-a28e-3a1959ae92a7 up in Southbound Dec 5 05:13:07 localhost nova_compute[280228]: 2025-12-05 10:13:07.410 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:07 localhost journal[228791]: ethtool ioctl error on tapeac32ec1-84: No such device Dec 5 05:13:07 localhost journal[228791]: ethtool ioctl error on tapeac32ec1-84: No such device Dec 5 05:13:07 localhost journal[228791]: ethtool ioctl error on tapeac32ec1-84: No such device Dec 5 05:13:07 localhost journal[228791]: ethtool ioctl error on tapeac32ec1-84: No such device Dec 5 05:13:07 localhost journal[228791]: ethtool ioctl error on tapeac32ec1-84: No such device Dec 5 05:13:07 localhost nova_compute[280228]: 2025-12-05 10:13:07.452 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:07 localhost nova_compute[280228]: 2025-12-05 10:13:07.474 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:08 localhost podman[320533]: Dec 5 05:13:08 localhost podman[320533]: 2025-12-05 10:13:08.304769541 +0000 UTC m=+0.084370094 container create 52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bfd3442-9afd-4211-8b18-aa6d09646799, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:13:08 localhost systemd[1]: Started libpod-conmon-52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf.scope. 
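[annotation] The ovn_controller sequence above (Claiming lport, then ovn-installed, then up in Southbound) can be cross-checked against the SB database. A sketch using the ovn-sbctl CLI, assuming it is run somewhere with Southbound access; the logical port UUID is copied from the log:

    # Query the Port_Binding row for the lport claimed in the log above.
    import subprocess

    lport = "eac32ec1-842e-49cc-a28e-3a1959ae92a7"
    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up", "find",
         "Port_Binding", f"logical_port={lport}"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)   # chassis should name this node once the claim completes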
Dec 5 05:13:08 localhost podman[320533]: 2025-12-05 10:13:08.263507119 +0000 UTC m=+0.043107712 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:13:08 localhost systemd[1]: Started libcrun container. Dec 5 05:13:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf3b21ca4403adf2108940b553ec1b91860e530bc18d9cd853ff9e2e988f8118/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:13:08 localhost podman[320533]: 2025-12-05 10:13:08.379016685 +0000 UTC m=+0.158617248 container init 52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bfd3442-9afd-4211-8b18-aa6d09646799, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 5 05:13:08 localhost podman[320533]: 2025-12-05 10:13:08.389508006 +0000 UTC m=+0.169108599 container start 52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bfd3442-9afd-4211-8b18-aa6d09646799, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 05:13:08 localhost dnsmasq[320552]: started, version 2.85 cachesize 150 Dec 5 05:13:08 localhost dnsmasq[320552]: DNS service limited to local subnets Dec 5 05:13:08 localhost dnsmasq[320552]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:13:08 localhost dnsmasq[320552]: warning: no upstream servers configured Dec 5 05:13:08 localhost dnsmasq-dhcp[320552]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d Dec 5 05:13:08 localhost dnsmasq[320552]: read /var/lib/neutron/dhcp/2bfd3442-9afd-4211-8b18-aa6d09646799/addn_hosts - 0 addresses Dec 5 05:13:08 localhost dnsmasq-dhcp[320552]: read /var/lib/neutron/dhcp/2bfd3442-9afd-4211-8b18-aa6d09646799/host Dec 5 05:13:08 localhost dnsmasq-dhcp[320552]: read /var/lib/neutron/dhcp/2bfd3442-9afd-4211-8b18-aa6d09646799/opts Dec 5 05:13:08 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:08.437 261902 INFO neutron.agent.dhcp.agent [None req-98696842-ccc7-4968-b952-55303928cbb4 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:07Z, description=, device_id=e99b4337-1c94-4de2-ad86-a1f72f48d701, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=57bbfebb-42fa-4fbc-9278-9a7adf24ddb6, ip_allocation=immediate, mac_address=fa:16:3e:9f:75:e6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:59Z, description=, dns_domain=, 
id=2bfd3442-9afd-4211-8b18-aa6d09646799, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1147166198, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2028, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2561, status=ACTIVE, subnets=['14b8af68-d4a3-4f41-923e-8ab25e1bdc4d'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:13:05Z, vlan_transparent=None, network_id=2bfd3442-9afd-4211-8b18-aa6d09646799, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2597, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:13:07Z on network 2bfd3442-9afd-4211-8b18-aa6d09646799#033[00m Dec 5 05:13:08 localhost nova_compute[280228]: 2025-12-05 10:13:08.512 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:08 localhost dnsmasq[320552]: read /var/lib/neutron/dhcp/2bfd3442-9afd-4211-8b18-aa6d09646799/addn_hosts - 1 addresses Dec 5 05:13:08 localhost dnsmasq-dhcp[320552]: read /var/lib/neutron/dhcp/2bfd3442-9afd-4211-8b18-aa6d09646799/host Dec 5 05:13:08 localhost dnsmasq-dhcp[320552]: read /var/lib/neutron/dhcp/2bfd3442-9afd-4211-8b18-aa6d09646799/opts Dec 5 05:13:08 localhost podman[320586]: 2025-12-05 10:13:08.621351524 +0000 UTC m=+0.058467711 container kill 52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bfd3442-9afd-4211-8b18-aa6d09646799, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:13:08 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:08.817 261902 INFO neutron.agent.dhcp.agent [None req-e3a7525f-cb40-4fb0-b494-7c3a1b007300 - - - - - -] DHCP configuration for ports {'aae4be35-3688-42da-a1e3-1458ecb4a767'} is completed#033[00m Dec 5 05:13:08 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:08.974 261902 INFO neutron.agent.dhcp.agent [None req-cbce9806-7d81-42a5-8d2c-5a88aa432c82 - - - - - -] DHCP configuration for ports {'57bbfebb-42fa-4fbc-9278-9a7adf24ddb6'} is completed#033[00m Dec 5 05:13:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:09.118 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:13:09 localhost systemd[1]: tmp-crun.CdwfdV.mount: Deactivated successfully. 
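[annotation] The "container kill" event here is the DHCP agent signalling dnsmasq after rewriting the addn_hosts/host/opts files; dnsmasq re-reads those files on SIGHUP rather than restarting. A sketch reproducing that reload by hand, with the container name taken from the log (sending HUP to a live service should be done deliberately):

    # Ask the dnsmasq container to re-read its hosts/opts files via SIGHUP.
    import subprocess

    name = "neutron-dnsmasq-qdhcp-2bfd3442-9afd-4211-8b18-aa6d09646799"
    subprocess.run(["podman", "kill", "--signal", "HUP", name], check=True)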
Dec 5 05:13:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 21 KiB/s wr, 28 op/s Dec 5 05:13:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:13:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:13:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:13:09 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:13:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:13:09 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:13:09 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev d308a83f-cfc6-4272-ae40-351e8ff81849 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:13:09 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev d308a83f-cfc6-4272-ae40-351e8ff81849 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:13:09 localhost ceph-mgr[286454]: [progress INFO root] Completed event d308a83f-cfc6-4272-ae40-351e8ff81849 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:13:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:13:09 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:13:09 localhost nova_compute[280228]: 2025-12-05 10:13:09.759 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:09 localhost sshd[320678]: main: sshd: ssh-rsa algorithm is disabled Dec 5 05:13:10 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:13:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:13:10 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:13:10 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:13:10 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:13:10 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:13:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:13:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 
active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s rd, 11 KiB/s wr, 4 op/s Dec 5 05:13:12 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:12.045 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:07Z, description=, device_id=e99b4337-1c94-4de2-ad86-a1f72f48d701, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=57bbfebb-42fa-4fbc-9278-9a7adf24ddb6, ip_allocation=immediate, mac_address=fa:16:3e:9f:75:e6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:12:59Z, description=, dns_domain=, id=2bfd3442-9afd-4211-8b18-aa6d09646799, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1147166198, port_security_enabled=True, project_id=0d15dccf4c864d558d055b0c7cd1cccc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2028, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2561, status=ACTIVE, subnets=['14b8af68-d4a3-4f41-923e-8ab25e1bdc4d'], tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:13:05Z, vlan_transparent=None, network_id=2bfd3442-9afd-4211-8b18-aa6d09646799, port_security_enabled=False, project_id=0d15dccf4c864d558d055b0c7cd1cccc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2597, status=DOWN, tags=[], tenant_id=0d15dccf4c864d558d055b0c7cd1cccc, updated_at=2025-12-05T10:13:07Z on network 2bfd3442-9afd-4211-8b18-aa6d09646799#033[00m Dec 5 05:13:12 localhost dnsmasq[320552]: read /var/lib/neutron/dhcp/2bfd3442-9afd-4211-8b18-aa6d09646799/addn_hosts - 1 addresses Dec 5 05:13:12 localhost dnsmasq-dhcp[320552]: read /var/lib/neutron/dhcp/2bfd3442-9afd-4211-8b18-aa6d09646799/host Dec 5 05:13:12 localhost dnsmasq-dhcp[320552]: read /var/lib/neutron/dhcp/2bfd3442-9afd-4211-8b18-aa6d09646799/opts Dec 5 05:13:12 localhost podman[320696]: 2025-12-05 10:13:12.197260142 +0000 UTC m=+0.037874221 container kill 52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bfd3442-9afd-4211-8b18-aa6d09646799, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true) Dec 5 05:13:12 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:12.700 261902 INFO neutron.agent.dhcp.agent [None req-7b947e93-eff9-446b-a452-d748724edffa - - - - - -] DHCP configuration for ports {'57bbfebb-42fa-4fbc-9278-9a7adf24ddb6'} is completed#033[00m Dec 5 05:13:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s wr, 1 op/s Dec 5 05:13:13 localhost nova_compute[280228]: 2025-12-05 10:13:13.555 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "snap_name": "c5eca2b7-83a5-4542-85a4-aa8cf97f1b78_0261bf7d-9639-41c3-83b1-85aebad662f3", "force": true, "format": "json"}]: dispatch Dec 5 05:13:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5eca2b7-83a5-4542-85a4-aa8cf97f1b78_0261bf7d-9639-41c3-83b1-85aebad662f3, sub_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, vol_name:cephfs) < "" Dec 5 05:13:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57dd3147-466a-4e5d-b79a-77b753d04f4b/.meta.tmp' Dec 5 05:13:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57dd3147-466a-4e5d-b79a-77b753d04f4b/.meta.tmp' to config b'/volumes/_nogroup/57dd3147-466a-4e5d-b79a-77b753d04f4b/.meta' Dec 5 05:13:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5eca2b7-83a5-4542-85a4-aa8cf97f1b78_0261bf7d-9639-41c3-83b1-85aebad662f3, sub_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, vol_name:cephfs) < "" Dec 5 05:13:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "snap_name": "c5eca2b7-83a5-4542-85a4-aa8cf97f1b78", "force": true, "format": "json"}]: dispatch Dec 5 05:13:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5eca2b7-83a5-4542-85a4-aa8cf97f1b78, sub_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, vol_name:cephfs) < "" Dec 5 05:13:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57dd3147-466a-4e5d-b79a-77b753d04f4b/.meta.tmp' Dec 5 05:13:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57dd3147-466a-4e5d-b79a-77b753d04f4b/.meta.tmp' to config b'/volumes/_nogroup/57dd3147-466a-4e5d-b79a-77b753d04f4b/.meta' Dec 5 05:13:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5eca2b7-83a5-4542-85a4-aa8cf97f1b78, sub_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, vol_name:cephfs) < "" Dec 5 05:13:14 localhost nova_compute[280228]: 2025-12-05 10:13:14.793 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:13:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:13:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
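[annotation] The metadata_manager pair above ("wrote ... .meta.tmp" followed by "Renamed ... to ... .meta") is the standard write-temp-then-rename pattern for crash-safe config updates. A self-contained sketch of the same idea; the path and payload are illustrative, not ceph-mgr's actual code:

    # Atomic file update: readers see either the old or the new contents, never a mix.
    import os

    def atomic_write(path: str, data: bytes) -> None:
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())      # make sure the bytes hit disk first
        os.replace(tmp, path)         # atomic rename on POSIX filesystems

    atomic_write("/tmp/demo.meta", b"[GLOBAL]\nversion = 2\n")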
Dec 5 05:13:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:13:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:13:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:13:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v440: 177 pgs: 177 active+clean; 195 MiB data, 959 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s wr, 1 op/s Dec 5 05:13:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:13:16 localhost nova_compute[280228]: 2025-12-05 10:13:16.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:13:16 localhost neutron_sriov_agent[254996]: 2025-12-05 10:13:16.687 2 INFO neutron.agent.securitygroups_rpc [None req-ecee759d-30fd-48d6-8d6a-f8b6c3142cb3 9d972c557bac460788722cbb72a5063b 1a3d7fc340f84c5699757971056327c6 - - default default] Security group member updated ['8a2cc3b9-107e-4895-9989-8e73163dac8e']#033[00m Dec 5 05:13:16 localhost neutron_sriov_agent[254996]: 2025-12-05 10:13:16.863 2 INFO neutron.agent.securitygroups_rpc [None req-ecee759d-30fd-48d6-8d6a-f8b6c3142cb3 9d972c557bac460788722cbb72a5063b 1a3d7fc340f84c5699757971056327c6 - - default default] Security group member updated ['8a2cc3b9-107e-4895-9989-8e73163dac8e']#033[00m Dec 5 05:13:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 195 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 624 B/s rd, 19 KiB/s wr, 4 op/s Dec 5 05:13:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:13:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:13:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:13:18 localhost systemd[1]: tmp-crun.NnayZ2.mount: Deactivated successfully. 
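[annotation] The recurring "Running periodic task ComputeManager._poll_rescued_instances" entries come from oslo.service's periodic-task machinery. A minimal sketch of how such a task is declared and driven, assuming oslo.service and oslo.config are installed; the task body is a placeholder:

    # Declare a periodic task the way nova's ComputeManager does.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=10, run_immediately=True)
        def _poll_something(self, context):
            print("periodic task ran")

    mgr = DemoManager()
    mgr.run_periodic_tasks(None)   # normally driven by a looping call each tick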
Dec 5 05:13:18 localhost podman[320718]: 2025-12-05 10:13:18.186388956 +0000 UTC m=+0.065548479 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:13:18 localhost podman[320717]: 2025-12-05 10:13:18.205068977 +0000 UTC m=+0.083449955 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 5 05:13:18 localhost podman[320718]: 2025-12-05 10:13:18.223568444 +0000 UTC m=+0.102727957 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:13:18 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
Dec 5 05:13:18 localhost podman[320717]: 2025-12-05 10:13:18.240690308 +0000 UTC m=+0.119071306 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent) Dec 5 05:13:18 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
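[annotation] Each healthcheck pass shows up as a podman health_status event followed by exec_died and the transient unit deactivating. A sketch that lists recent health events for one container via the podman events CLI; the event and container names are taken from this log, and the time window is arbitrary:

    # List recent health_status events for the ovn_metadata_agent container.
    import subprocess

    proc = subprocess.run(
        ["podman", "events", "--since", "10m", "--stream=false",
         "--filter", "container=ovn_metadata_agent",
         "--filter", "event=health_status"],
        capture_output=True, text=True,
    )
    print(proc.stdout)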
Dec 5 05:13:18 localhost podman[320716]: 2025-12-05 10:13:18.296295541 +0000 UTC m=+0.176772893 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:13:18 localhost podman[320716]: 2025-12-05 10:13:18.308605127 +0000 UTC m=+0.189082489 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 05:13:18 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
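[annotation] The resource-tracker audit that begins below shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size the shared Ceph storage. A sketch that runs the same command and reads the cluster totals; the JSON key names (stats.total_avail_bytes) follow ceph's df schema as I understand it, and the --id/--conf values are copied from the log:

    # Run the same command nova uses and report cluster-wide capacity.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(f"avail: {stats['total_avail_bytes'] / 2**30:.1f} GiB "
          f"of {stats['total_bytes'] / 2**30:.1f} GiB")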
Dec 5 05:13:18 localhost nova_compute[280228]: 2025-12-05 10:13:18.502 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:13:18 localhost nova_compute[280228]: 2025-12-05 10:13:18.593 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "format": "json"}]: dispatch Dec 5 05:13:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:13:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:13:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:13:18.852+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57dd3147-466a-4e5d-b79a-77b753d04f4b' of type subvolume Dec 5 05:13:18 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57dd3147-466a-4e5d-b79a-77b753d04f4b' of type subvolume Dec 5 05:13:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57dd3147-466a-4e5d-b79a-77b753d04f4b", "force": true, "format": "json"}]: dispatch Dec 5 05:13:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, vol_name:cephfs) < "" Dec 5 05:13:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/57dd3147-466a-4e5d-b79a-77b753d04f4b'' moved to trashcan Dec 5 05:13:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:13:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57dd3147-466a-4e5d-b79a-77b753d04f4b, vol_name:cephfs) < "" Dec 5 05:13:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 195 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 9.4 KiB/s wr, 3 op/s Dec 5 05:13:19 localhost nova_compute[280228]: 2025-12-05 10:13:19.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:13:19 localhost nova_compute[280228]: 2025-12-05 10:13:19.529 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:13:19 localhost nova_compute[280228]: 2025-12-05 10:13:19.529 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:13:19 localhost nova_compute[280228]: 2025-12-05 10:13:19.529 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:13:19 localhost nova_compute[280228]: 2025-12-05 10:13:19.530 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:13:19 localhost nova_compute[280228]: 2025-12-05 10:13:19.530 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:13:19 localhost neutron_sriov_agent[254996]: 2025-12-05 10:13:19.557 2 INFO neutron.agent.securitygroups_rpc [None req-0d4ab53e-befa-4191-ae98-022935d1a9c9 9d972c557bac460788722cbb72a5063b 1a3d7fc340f84c5699757971056327c6 - - default default] Security group member updated ['8a2cc3b9-107e-4895-9989-8e73163dac8e']#033[00m Dec 5 05:13:19 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:13:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, vol_name:cephfs) < "" Dec 5 05:13:19 localhost nova_compute[280228]: 2025-12-05 10:13:19.795 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/030e48d9-b1fa-45e4-b8cc-7454a5654e2b/.meta.tmp' Dec 5 05:13:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/030e48d9-b1fa-45e4-b8cc-7454a5654e2b/.meta.tmp' to config b'/volumes/_nogroup/030e48d9-b1fa-45e4-b8cc-7454a5654e2b/.meta' Dec 5 05:13:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, vol_name:cephfs) < "" Dec 5 05:13:19 localhost 
ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "format": "json"}]: dispatch Dec 5 05:13:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, vol_name:cephfs) < "" Dec 5 05:13:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, vol_name:cephfs) < "" Dec 5 05:13:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:13:19 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:13:19 localhost podman[239519]: time="2025-12-05T10:13:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:13:19 localhost podman[239519]: @ - - [05/Dec/2025:10:13:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159747 "" "Go-http-client/1.1" Dec 5 05:13:19 localhost podman[239519]: @ - - [05/Dec/2025:10:13:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20215 "" "Go-http-client/1.1" Dec 5 05:13:19 localhost nova_compute[280228]: 2025-12-05 10:13:19.957 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:13:20 localhost neutron_sriov_agent[254996]: 2025-12-05 10:13:20.020 2 INFO neutron.agent.securitygroups_rpc [None req-b270bd1b-3b7d-41c8-aca6-410a9dbe297f 9d972c557bac460788722cbb72a5063b 1a3d7fc340f84c5699757971056327c6 - - default default] Security group member updated ['8a2cc3b9-107e-4895-9989-8e73163dac8e']#033[00m Dec 5 05:13:20 localhost nova_compute[280228]: 2025-12-05 10:13:20.029 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:13:20 localhost nova_compute[280228]: 2025-12-05 10:13:20.030 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:13:20 localhost nova_compute[280228]: 2025-12-05 10:13:20.243 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:20 localhost nova_compute[280228]: 2025-12-05 10:13:20.269 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:13:20 localhost nova_compute[280228]: 2025-12-05 10:13:20.270 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11157MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:13:20 localhost nova_compute[280228]: 2025-12-05 10:13:20.270 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:13:20 localhost nova_compute[280228]: 2025-12-05 10:13:20.270 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:13:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:13:20 localhost nova_compute[280228]: 2025-12-05 10:13:20.924 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed 
on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:13:20 localhost nova_compute[280228]: 2025-12-05 10:13:20.925 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:13:20 localhost nova_compute[280228]: 2025-12-05 10:13:20.925 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:13:20 localhost nova_compute[280228]: 2025-12-05 10:13:20.977 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:13:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v443: 177 pgs: 177 active+clean; 195 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s rd, 19 KiB/s wr, 6 op/s Dec 5 05:13:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:13:21 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2587116746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:13:21 localhost nova_compute[280228]: 2025-12-05 10:13:21.444 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:13:21 localhost nova_compute[280228]: 2025-12-05 10:13:21.452 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:13:21 localhost nova_compute[280228]: 2025-12-05 10:13:21.470 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:13:21 localhost nova_compute[280228]: 2025-12-05 10:13:21.473 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for 
np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:13:21 localhost nova_compute[280228]: 2025-12-05 10:13:21.473 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:13:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e183 do_prune osdmap full prune enabled Dec 5 05:13:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e184 e184: 6 total, 6 up, 6 in Dec 5 05:13:21 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e184: 6 total, 6 up, 6 in Dec 5 05:13:22 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1c9dbba2-5de3-4dd4-833c-092810bc5276", "format": "json"}]: dispatch Dec 5 05:13:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1c9dbba2-5de3-4dd4-833c-092810bc5276, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:13:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1c9dbba2-5de3-4dd4-833c-092810bc5276, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:13:22 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:13:22.199+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1c9dbba2-5de3-4dd4-833c-092810bc5276' of type subvolume Dec 5 05:13:22 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1c9dbba2-5de3-4dd4-833c-092810bc5276' of type subvolume Dec 5 05:13:22 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1c9dbba2-5de3-4dd4-833c-092810bc5276", "force": true, "format": "json"}]: dispatch Dec 5 05:13:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1c9dbba2-5de3-4dd4-833c-092810bc5276, vol_name:cephfs) < "" Dec 5 05:13:22 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1c9dbba2-5de3-4dd4-833c-092810bc5276'' moved to trashcan Dec 5 05:13:22 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:13:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1c9dbba2-5de3-4dd4-833c-092810bc5276, vol_name:cephfs) < "" Dec 5 05:13:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 195 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 2.4 KiB/s rd, 22 KiB/s wr, 8 op/s Dec 5 05:13:23 localhost nova_compute[280228]: 2025-12-05 10:13:23.621 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:24 localhost ceph-mgr[286454]: 
log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "snap_name": "49ab100e-1829-4d48-9ef2-7980a4e6fb8c", "format": "json"}]: dispatch Dec 5 05:13:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:49ab100e-1829-4d48-9ef2-7980a4e6fb8c, sub_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, vol_name:cephfs) < "" Dec 5 05:13:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:49ab100e-1829-4d48-9ef2-7980a4e6fb8c, sub_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, vol_name:cephfs) < "" Dec 5 05:13:24 localhost nova_compute[280228]: 2025-12-05 10:13:24.475 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:13:24 localhost nova_compute[280228]: 2025-12-05 10:13:24.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:13:24 localhost nova_compute[280228]: 2025-12-05 10:13:24.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:13:24 localhost nova_compute[280228]: 2025-12-05 10:13:24.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:13:24 localhost nova_compute[280228]: 2025-12-05 10:13:24.615 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:13:24 localhost nova_compute[280228]: 2025-12-05 10:13:24.616 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:13:24 localhost nova_compute[280228]: 2025-12-05 10:13:24.616 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:13:24 localhost nova_compute[280228]: 2025-12-05 10:13:24.616 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:13:24 localhost nova_compute[280228]: 2025-12-05 10:13:24.796 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:13:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:13:25 localhost podman[320819]: 2025-12-05 10:13:25.186908937 +0000 UTC m=+0.068070396 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 5 05:13:25 localhost systemd[1]: tmp-crun.iQCdYS.mount: Deactivated successfully. Dec 5 05:13:25 localhost podman[320819]: 2025-12-05 10:13:25.264733449 +0000 UTC m=+0.145894908 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 05:13:25 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
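[aside] The resource-tracker records at 10:13:20-21 above show nova's raw view (total_vcpus=8, used_vcpus=1, phys_ram=15738MB) while placement applies the logged reservations and allocation ratios; effective capacity is (total - reserved) * allocation_ratio. A minimal Python sketch using only values copied from the inventory line (the helper name is illustrative):

    # Inventory as logged by nova.scheduler.client.report at 10:13:21.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
    }

    def capacity(rec):
        # Placement's effective capacity: (total - reserved) * allocation_ratio.
        return (rec["total"] - rec["reserved"]) * rec["allocation_ratio"]

    for rc, rec in inventory.items():
        print(rc, capacity(rec))
    # -> VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0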
Dec 5 05:13:25 localhost podman[320820]: 2025-12-05 10:13:25.26706007 +0000 UTC m=+0.143142253 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:13:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v446: 177 pgs: 177 active+clean; 195 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 2.4 KiB/s rd, 22 KiB/s wr, 8 op/s Dec 5 05:13:25 localhost podman[320820]: 2025-12-05 10:13:25.351764184 +0000 UTC m=+0.227846327 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:13:25 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
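[aside] The ceph-mgr volumes sequence at 10:13:22 above (repeated at 10:13:31) shows `fs clone status` returning EOPNOTSUPP (95) because the target is a plain subvolume rather than a clone, after which the caller falls back to a forced `fs subvolume rm`, which moves the path into the volume's trashcan for asynchronous purge. A sketch of the same calls from Python, reusing the client id and conf path that the `ceph df` invocation above uses (the subprocess wrapper is illustrative, not nova's code):

    import subprocess

    def ceph(*args):
        # Same credentials/conf as the 'ceph df --id openstack' call logged above.
        return subprocess.run(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             "--format", "json", *args],
            capture_output=True, text=True)

    sub = "1c9dbba2-5de3-4dd4-833c-092810bc5276"
    # Returns rc 95 (EOPNOTSUPP) for a plain subvolume, as in the log...
    ceph("fs", "clone", "status", "cephfs", sub)
    # ...so the caller removes it instead; --force tolerates a missing subvolume.
    ceph("fs", "subvolume", "rm", "cephfs", sub, "--force")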
Dec 5 05:13:25 localhost nova_compute[280228]: 2025-12-05 10:13:25.616 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:13:25 localhost nova_compute[280228]: 2025-12-05 10:13:25.641 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:13:25 localhost nova_compute[280228]: 2025-12-05 10:13:25.641 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:13:25 localhost nova_compute[280228]: 2025-12-05 10:13:25.642 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:13:25 localhost nova_compute[280228]: 2025-12-05 10:13:25.642 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:13:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:13:27 localhost openstack_network_exporter[241668]: ERROR 10:13:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:13:27 localhost openstack_network_exporter[241668]: ERROR 10:13:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:13:27 localhost openstack_network_exporter[241668]: ERROR 10:13:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:13:27 localhost openstack_network_exporter[241668]: ERROR 10:13:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:13:27 localhost openstack_network_exporter[241668]: Dec 5 05:13:27 localhost openstack_network_exporter[241668]: ERROR 10:13:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:13:27 localhost openstack_network_exporter[241668]: Dec 5 05:13:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 195 MiB data, 962 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 29 KiB/s wr, 69 op/s Dec 5 05:13:27 localhost nova_compute[280228]: 2025-12-05 10:13:27.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:13:27 localhost nova_compute[280228]: 2025-12-05 10:13:27.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:13:27 localhost nova_compute[280228]: 2025-12-05 10:13:27.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:13:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:28.460 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:13:28 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:28.461 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:13:28 localhost nova_compute[280228]: 2025-12-05 10:13:28.497 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:28 localhost nova_compute[280228]: 2025-12-05 10:13:28.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:13:28 localhost nova_compute[280228]: 2025-12-05 10:13:28.622 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:28 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "snap_name": "49ab100e-1829-4d48-9ef2-7980a4e6fb8c_2141957f-76e9-4664-9cf3-8ecf8547c27d", "force": true, "format": "json"}]: dispatch Dec 5 05:13:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:49ab100e-1829-4d48-9ef2-7980a4e6fb8c_2141957f-76e9-4664-9cf3-8ecf8547c27d, sub_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, vol_name:cephfs) < "" Dec 5 05:13:28 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/030e48d9-b1fa-45e4-b8cc-7454a5654e2b/.meta.tmp' Dec 5 05:13:28 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/030e48d9-b1fa-45e4-b8cc-7454a5654e2b/.meta.tmp' to config b'/volumes/_nogroup/030e48d9-b1fa-45e4-b8cc-7454a5654e2b/.meta' Dec 5 05:13:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:49ab100e-1829-4d48-9ef2-7980a4e6fb8c_2141957f-76e9-4664-9cf3-8ecf8547c27d, sub_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, vol_name:cephfs) < "" Dec 5 05:13:28 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "snap_name": "49ab100e-1829-4d48-9ef2-7980a4e6fb8c", "force": true, "format": "json"}]: dispatch Dec 5 05:13:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:49ab100e-1829-4d48-9ef2-7980a4e6fb8c, sub_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, vol_name:cephfs) < "" Dec 5 05:13:28 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/030e48d9-b1fa-45e4-b8cc-7454a5654e2b/.meta.tmp' Dec 5 05:13:28 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/030e48d9-b1fa-45e4-b8cc-7454a5654e2b/.meta.tmp' to config b'/volumes/_nogroup/030e48d9-b1fa-45e4-b8cc-7454a5654e2b/.meta' Dec 5 05:13:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:49ab100e-1829-4d48-9ef2-7980a4e6fb8c, sub_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, vol_name:cephfs) < "" Dec 5 05:13:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 
active+clean; 195 MiB data, 962 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 29 KiB/s wr, 69 op/s Dec 5 05:13:29 localhost nova_compute[280228]: 2025-12-05 10:13:29.799 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:30 localhost nova_compute[280228]: 2025-12-05 10:13:30.054 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:13:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e184 do_prune osdmap full prune enabled Dec 5 05:13:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e185 e185: 6 total, 6 up, 6 in Dec 5 05:13:30 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e185: 6 total, 6 up, 6 in Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0. Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:30.902008) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55 Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929610902072, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1460, "num_deletes": 259, "total_data_size": 1338156, "memory_usage": 1381432, "flush_reason": "Manual Compaction"} Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929610915680, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 1049338, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30978, "largest_seqno": 32437, "table_properties": {"data_size": 1043570, "index_size": 2919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15643, "raw_average_key_size": 22, "raw_value_size": 1030874, "raw_average_value_size": 1466, "num_data_blocks": 126, "num_entries": 703, "num_filter_entries": 703, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929538, "oldest_key_time": 1764929538, "file_creation_time": 1764929610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": 
"CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}} Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 13728 microseconds, and 4733 cpu microseconds. Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:30.915741) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 1049338 bytes OK Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:30.915771) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:30.917721) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:30.917747) EVENT_LOG_v1 {"time_micros": 1764929610917739, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:30.917771) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 1331373, prev total WAL file size 1331373, number of live WAL files 2. Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:30.918498) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303037' seq:72057594037927935, type:22 .. 
'6D6772737461740034323538' seq:0, type:0; will stop at (end) Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(1024KB)], [54(16MB)] Dec 5 05:13:30 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929610918600, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 18812805, "oldest_snapshot_seqno": -1} Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 12948 keys, 16958682 bytes, temperature: kUnknown Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929611025107, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 16958682, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16886406, "index_size": 38831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32389, "raw_key_size": 346707, "raw_average_key_size": 26, "raw_value_size": 16667541, "raw_average_value_size": 1287, "num_data_blocks": 1457, "num_entries": 12948, "num_filter_entries": 12948, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764929610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}} Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
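[aside] The amplification figures in the compaction summary that follows are derivable from byte counts already logged for job 32: the flushed L0 input (table #56) is 1049338 bytes, input_data_size is 18812805 (so the pre-existing L6 input, table #54, is the difference), and the output (table #57) is 16958682 bytes. A quick check in Python:

    MB = 1024 * 1024
    in_l0 = 1049338              # flushed L0 table #56
    in_l6 = 18812805 - 1049338   # L6 input #54 = input_data_size - L0 input
    out   = 16958682             # compacted L6 output table #57

    # Matches the summary's "in(1.0, 16.9) ... out(16.2)" figures, in MB.
    print(round(in_l0 / MB, 1), round(in_l6 / MB, 1), round(out / MB, 1))
    # write-amplify = out / L0-in; read-write-amplify = (all reads + writes) / L0-in
    print(round(out / in_l0, 1))                    # -> 16.2
    print(round((in_l0 + in_l6 + out) / in_l0, 1))  # -> 34.1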
Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:31.025539) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16958682 bytes Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:31.027307) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.5 rd, 159.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 16.9 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(34.1) write-amplify(16.2) OK, records in: 13452, records dropped: 504 output_compression: NoCompression Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:31.027339) EVENT_LOG_v1 {"time_micros": 1764929611027323, "job": 32, "event": "compaction_finished", "compaction_time_micros": 106607, "compaction_time_cpu_micros": 57686, "output_level": 6, "num_output_files": 1, "total_output_size": 16958682, "num_input_records": 13452, "num_output_records": 12948, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929611027619, "job": 32, "event": "table_file_deletion", "file_number": 56} Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929611030538, "job": 32, "event": "table_file_deletion", "file_number": 54} Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:30.918366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:31.030656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:31.030665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:31.030668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:31.030671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:13:31 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:13:31.030674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:13:31 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:31.191 261902 INFO neutron.agent.linux.ip_lib [None req-1dd7bb9d-136d-49bf-a226-6f797ab1a204 - - - - - -] Device tap279b155b-92 cannot be used as it has no MAC address#033[00m Dec 5 05:13:31 localhost nova_compute[280228]: 2025-12-05 10:13:31.216 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:31 localhost kernel: device tap279b155b-92 entered promiscuous mode Dec 5 05:13:31 localhost ovn_controller[153000]: 2025-12-05T10:13:31Z|00326|binding|INFO|Claiming lport 279b155b-925e-4e21-8183-881a24ef45a3 for this chassis. Dec 5 05:13:31 localhost ovn_controller[153000]: 2025-12-05T10:13:31Z|00327|binding|INFO|279b155b-925e-4e21-8183-881a24ef45a3: Claiming unknown Dec 5 05:13:31 localhost nova_compute[280228]: 2025-12-05 10:13:31.224 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:31 localhost NetworkManager[5960]: [1764929611.2287] manager: (tap279b155b-92): new Generic device (/org/freedesktop/NetworkManager/Devices/55) Dec 5 05:13:31 localhost systemd-udevd[320878]: Network interface NamePolicy= disabled on kernel command line. Dec 5 05:13:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:31.233 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-3385630a-82a0-4bc0-bc27-e47f58dae6aa', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3385630a-82a0-4bc0-bc27-e47f58dae6aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a3d7fc340f84c5699757971056327c6', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ae37cf0-1299-46cf-b395-923c28994d4e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=279b155b-925e-4e21-8183-881a24ef45a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:13:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:31.235 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 279b155b-925e-4e21-8183-881a24ef45a3 in datapath 3385630a-82a0-4bc0-bc27-e47f58dae6aa bound to our chassis#033[00m Dec 5 05:13:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:31.237 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3385630a-82a0-4bc0-bc27-e47f58dae6aa or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:13:31 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:31.237 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[07440b8b-602a-403b-95f8-455f0b01d3b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:13:31 localhost journal[228791]: ethtool ioctl error on tap279b155b-92: No such device Dec 5 05:13:31 localhost ovn_controller[153000]: 2025-12-05T10:13:31Z|00328|binding|INFO|Setting lport 279b155b-925e-4e21-8183-881a24ef45a3 ovn-installed in OVS Dec 5 05:13:31 localhost ovn_controller[153000]: 
2025-12-05T10:13:31Z|00329|binding|INFO|Setting lport 279b155b-925e-4e21-8183-881a24ef45a3 up in Southbound Dec 5 05:13:31 localhost journal[228791]: ethtool ioctl error on tap279b155b-92: No such device Dec 5 05:13:31 localhost nova_compute[280228]: 2025-12-05 10:13:31.269 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:31 localhost journal[228791]: ethtool ioctl error on tap279b155b-92: No such device Dec 5 05:13:31 localhost journal[228791]: ethtool ioctl error on tap279b155b-92: No such device Dec 5 05:13:31 localhost journal[228791]: ethtool ioctl error on tap279b155b-92: No such device Dec 5 05:13:31 localhost journal[228791]: ethtool ioctl error on tap279b155b-92: No such device Dec 5 05:13:31 localhost journal[228791]: ethtool ioctl error on tap279b155b-92: No such device Dec 5 05:13:31 localhost journal[228791]: ethtool ioctl error on tap279b155b-92: No such device Dec 5 05:13:31 localhost nova_compute[280228]: 2025-12-05 10:13:31.306 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:31 localhost nova_compute[280228]: 2025-12-05 10:13:31.339 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v450: 177 pgs: 177 active+clean; 195 MiB data, 964 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 31 KiB/s wr, 90 op/s Dec 5 05:13:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "format": "json"}]: dispatch Dec 5 05:13:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:13:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:13:31 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:13:31.885+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '030e48d9-b1fa-45e4-b8cc-7454a5654e2b' of type subvolume Dec 5 05:13:31 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '030e48d9-b1fa-45e4-b8cc-7454a5654e2b' of type subvolume Dec 5 05:13:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "030e48d9-b1fa-45e4-b8cc-7454a5654e2b", "force": true, "format": "json"}]: dispatch Dec 5 05:13:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, vol_name:cephfs) < "" Dec 5 05:13:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/030e48d9-b1fa-45e4-b8cc-7454a5654e2b'' moved to trashcan Dec 5 05:13:31 localhost ceph-mgr[286454]: [volumes 
INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:13:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:030e48d9-b1fa-45e4-b8cc-7454a5654e2b, vol_name:cephfs) < "" Dec 5 05:13:32 localhost podman[320949]: Dec 5 05:13:32 localhost podman[320949]: 2025-12-05 10:13:32.173067228 +0000 UTC m=+0.072078307 container create a95d48e64afc37496df50bd6948b48b2dd54a2849670927560eb8fe0f508f788 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3385630a-82a0-4bc0-bc27-e47f58dae6aa, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:13:32 localhost systemd[1]: Started libpod-conmon-a95d48e64afc37496df50bd6948b48b2dd54a2849670927560eb8fe0f508f788.scope. Dec 5 05:13:32 localhost podman[320949]: 2025-12-05 10:13:32.132905779 +0000 UTC m=+0.031916908 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:13:32 localhost systemd[1]: Started libcrun container. Dec 5 05:13:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8b42875ddd93f4fcbae402820bb8fca039c278482d6b3d2236daac0e780843/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:13:32 localhost podman[320949]: 2025-12-05 10:13:32.249308143 +0000 UTC m=+0.148319202 container init a95d48e64afc37496df50bd6948b48b2dd54a2849670927560eb8fe0f508f788 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3385630a-82a0-4bc0-bc27-e47f58dae6aa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:13:32 localhost podman[320949]: 2025-12-05 10:13:32.257855475 +0000 UTC m=+0.156866534 container start a95d48e64afc37496df50bd6948b48b2dd54a2849670927560eb8fe0f508f788 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3385630a-82a0-4bc0-bc27-e47f58dae6aa, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true) Dec 5 05:13:32 localhost dnsmasq[320967]: started, version 2.85 cachesize 150 Dec 5 05:13:32 localhost dnsmasq[320967]: DNS service limited to local subnets Dec 5 05:13:32 localhost dnsmasq[320967]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:13:32 localhost dnsmasq[320967]: warning: no upstream servers configured Dec 5 05:13:32 localhost dnsmasq-dhcp[320967]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 5 05:13:32 localhost dnsmasq[320967]: read 
/var/lib/neutron/dhcp/3385630a-82a0-4bc0-bc27-e47f58dae6aa/addn_hosts - 0 addresses Dec 5 05:13:32 localhost dnsmasq-dhcp[320967]: read /var/lib/neutron/dhcp/3385630a-82a0-4bc0-bc27-e47f58dae6aa/host Dec 5 05:13:32 localhost dnsmasq-dhcp[320967]: read /var/lib/neutron/dhcp/3385630a-82a0-4bc0-bc27-e47f58dae6aa/opts Dec 5 05:13:32 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:32.474 261902 INFO neutron.agent.dhcp.agent [None req-d7d578b8-579e-404b-989b-1a4446124469 - - - - - -] DHCP configuration for ports {'17822ed0-bdf9-41a2-b861-1defc9fb8b9a'} is completed#033[00m Dec 5 05:13:32 localhost dnsmasq[320967]: exiting on receipt of SIGTERM Dec 5 05:13:32 localhost podman[320983]: 2025-12-05 10:13:32.649558297 +0000 UTC m=+0.062209285 container kill a95d48e64afc37496df50bd6948b48b2dd54a2849670927560eb8fe0f508f788 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3385630a-82a0-4bc0-bc27-e47f58dae6aa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 5 05:13:32 localhost systemd[1]: libpod-a95d48e64afc37496df50bd6948b48b2dd54a2849670927560eb8fe0f508f788.scope: Deactivated successfully. Dec 5 05:13:32 localhost podman[320997]: 2025-12-05 10:13:32.717136137 +0000 UTC m=+0.055771078 container died a95d48e64afc37496df50bd6948b48b2dd54a2849670927560eb8fe0f508f788 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3385630a-82a0-4bc0-bc27-e47f58dae6aa, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 5 05:13:32 localhost podman[320997]: 2025-12-05 10:13:32.75348982 +0000 UTC m=+0.092124771 container cleanup a95d48e64afc37496df50bd6948b48b2dd54a2849670927560eb8fe0f508f788 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3385630a-82a0-4bc0-bc27-e47f58dae6aa, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:13:32 localhost systemd[1]: libpod-conmon-a95d48e64afc37496df50bd6948b48b2dd54a2849670927560eb8fe0f508f788.scope: Deactivated successfully. 
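[aside] The files the short-lived dnsmasq above reads are the per-network inputs neutron's DHCP agent rewrites on every port change under /var/lib/neutron/dhcp/<network-id>/: addn_hosts feeds --addn-hosts, host feeds --dhcp-hostsfile, opts feeds --dhcp-optsfile. A sketch that dumps them for the network in the log (only the UUID is taken from the log; the base path is the agent's conventional default):

    from pathlib import Path

    net = "3385630a-82a0-4bc0-bc27-e47f58dae6aa"
    base = Path("/var/lib/neutron/dhcp") / net
    for name in ("addn_hosts", "host", "opts"):
        f = base / name
        # Each file may legitimately be empty, e.g. "0 addresses" in the log.
        print(f, "->", f.read_text() if f.exists() else "<absent>")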
Dec 5 05:13:32 localhost podman[320999]: 2025-12-05 10:13:32.799587111 +0000 UTC m=+0.127915838 container remove a95d48e64afc37496df50bd6948b48b2dd54a2849670927560eb8fe0f508f788 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3385630a-82a0-4bc0-bc27-e47f58dae6aa, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 5 05:13:32 localhost ovn_controller[153000]: 2025-12-05T10:13:32Z|00330|binding|INFO|Releasing lport 279b155b-925e-4e21-8183-881a24ef45a3 from this chassis (sb_readonly=0) Dec 5 05:13:32 localhost ovn_controller[153000]: 2025-12-05T10:13:32Z|00331|binding|INFO|Setting lport 279b155b-925e-4e21-8183-881a24ef45a3 down in Southbound Dec 5 05:13:32 localhost nova_compute[280228]: 2025-12-05 10:13:32.811 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:32 localhost kernel: device tap279b155b-92 left promiscuous mode Dec 5 05:13:32 localhost nova_compute[280228]: 2025-12-05 10:13:32.841 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:32.866 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-3385630a-82a0-4bc0-bc27-e47f58dae6aa', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3385630a-82a0-4bc0-bc27-e47f58dae6aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a3d7fc340f84c5699757971056327c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ae37cf0-1299-46cf-b395-923c28994d4e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=279b155b-925e-4e21-8183-881a24ef45a3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:13:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:32.868 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 279b155b-925e-4e21-8183-881a24ef45a3 in datapath 3385630a-82a0-4bc0-bc27-e47f58dae6aa unbound from our chassis#033[00m Dec 5 05:13:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:32.870 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3385630a-82a0-4bc0-bc27-e47f58dae6aa or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:13:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:32.871 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[ced32a43-0ed1-4ffa-88df-fa2676a26b6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:13:33 localhost systemd[1]: var-lib-containers-storage-overlay-dc8b42875ddd93f4fcbae402820bb8fca039c278482d6b3d2236daac0e780843-merged.mount: Deactivated successfully. Dec 5 05:13:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a95d48e64afc37496df50bd6948b48b2dd54a2849670927560eb8fe0f508f788-userdata-shm.mount: Deactivated successfully. Dec 5 05:13:33 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:33.224 261902 INFO neutron.agent.dhcp.agent [None req-1f7ccf73-488e-4b09-a61e-84644273a1d1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:13:33 localhost systemd[1]: run-netns-qdhcp\x2d3385630a\x2d82a0\x2d4bc0\x2dbc27\x2de47f58dae6aa.mount: Deactivated successfully. Dec 5 05:13:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 29 KiB/s wr, 85 op/s Dec 5 05:13:33 localhost nova_compute[280228]: 2025-12-05 10:13:33.659 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:34 localhost nova_compute[280228]: 2025-12-05 10:13:34.803 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:13:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v452: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 29 KiB/s wr, 85 op/s Dec 5 05:13:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:13:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e185 do_prune osdmap full prune enabled Dec 5 05:13:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e186 e186: 6 total, 6 up, 6 in Dec 5 05:13:36 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e186: 6 total, 6 up, 6 in Dec 5 05:13:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:13:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:13:37 localhost systemd[1]: tmp-crun.HdGiUv.mount: Deactivated successfully. 
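[aside] The Port_Binding churn at 10:13:31-32 above (claim, up, release, down for lport 279b155b-925e-4e21-8183-881a24ef45a3) can be inspected directly in the Southbound DB that these containers mount at /run/ovn. A sketch using standard ovn-sbctl; the socket filename is the usual default and is an assumption here:

    import subprocess

    lport = "279b155b-925e-4e21-8183-881a24ef45a3"
    # Prints the row the metadata agent's PortBindingUpdatedEvent matched,
    # including chassis, up, and external_ids.
    print(subprocess.check_output(
        ["ovn-sbctl", "--db", "unix:/run/ovn/ovnsb_db.sock",
         "find", "Port_Binding", f"logical_port={lport}"],
        text=True))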
Dec 5 05:13:37 localhost podman[321029]: 2025-12-05 10:13:37.216506478 +0000 UTC m=+0.100836349 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:13:37 localhost systemd[1]: tmp-crun.UDVWhk.mount: Deactivated successfully.
Dec 5 05:13:37 localhost podman[321030]: 2025-12-05 10:13:37.260932318 +0000 UTC m=+0.141690670 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git)
Dec 5 05:13:37 localhost podman[321030]: 2025-12-05 10:13:37.27308165 +0000 UTC m=+0.153840012 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Dec 5 05:13:37 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:13:37 localhost podman[321029]: 2025-12-05 10:13:37.32858999 +0000 UTC m=+0.212919811 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 5 05:13:37 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:13:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 32 KiB/s wr, 29 op/s
Dec 5 05:13:37 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:37.540 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:13:37 localhost ovn_controller[153000]: 2025-12-05T10:13:37Z|00332|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:13:37 localhost nova_compute[280228]: 2025-12-05 10:13:37.877 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:38.463 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 5 05:13:38 localhost nova_compute[280228]: 2025-12-05 10:13:38.704 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:38 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:38.849 261902 INFO neutron.agent.linux.ip_lib [None req-f021bb28-add6-42c1-b1e4-1147c75eb336 - - - - - -] Device tapfed803bf-31 cannot be used as it has no MAC address
Dec 5 05:13:38 localhost nova_compute[280228]: 2025-12-05 10:13:38.865 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:38 localhost kernel: device tapfed803bf-31 entered promiscuous mode
Dec 5 05:13:38 localhost NetworkManager[5960]: [1764929618.8714] manager: (tapfed803bf-31): new Generic device (/org/freedesktop/NetworkManager/Devices/56)
Dec 5 05:13:38 localhost ovn_controller[153000]: 2025-12-05T10:13:38Z|00333|binding|INFO|Claiming lport fed803bf-310c-4e10-92a4-0abe69409477 for this chassis.
Dec 5 05:13:38 localhost ovn_controller[153000]: 2025-12-05T10:13:38Z|00334|binding|INFO|fed803bf-310c-4e10-92a4-0abe69409477: Claiming unknown
Dec 5 05:13:38 localhost nova_compute[280228]: 2025-12-05 10:13:38.871 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:38 localhost systemd-udevd[321079]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:13:38 localhost nova_compute[280228]: 2025-12-05 10:13:38.885 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:38.885 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-5bc3eae2-e440-44d6-9e59-d821a26fc86b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bc3eae2-e440-44d6-9e59-d821a26fc86b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09003135-ef32-48fd-9729-1b91ee2900fb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fed803bf-310c-4e10-92a4-0abe69409477) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:13:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:38.886 158820 INFO neutron.agent.ovn.metadata.agent [-] Port fed803bf-310c-4e10-92a4-0abe69409477 in datapath 5bc3eae2-e440-44d6-9e59-d821a26fc86b bound to our chassis
Dec 5 05:13:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:38.888 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5bc3eae2-e440-44d6-9e59-d821a26fc86b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:13:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:38.889 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[99636e5e-03d2-4bb8-b79a-129231bb5d88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:13:38 localhost journal[228791]: ethtool ioctl error on tapfed803bf-31: No such device
Dec 5 05:13:38 localhost journal[228791]: ethtool ioctl error on tapfed803bf-31: No such device
Dec 5 05:13:38 localhost ovn_controller[153000]: 2025-12-05T10:13:38Z|00335|binding|INFO|Setting lport fed803bf-310c-4e10-92a4-0abe69409477 ovn-installed in OVS
Dec 5 05:13:38 localhost ovn_controller[153000]: 2025-12-05T10:13:38Z|00336|binding|INFO|Setting lport fed803bf-310c-4e10-92a4-0abe69409477 up in Southbound
Dec 5 05:13:38 localhost nova_compute[280228]: 2025-12-05 10:13:38.903 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:38 localhost journal[228791]: ethtool ioctl error on tapfed803bf-31: No such device
Dec 5 05:13:38 localhost journal[228791]: ethtool ioctl error on tapfed803bf-31: No such device
Dec 5 05:13:38 localhost journal[228791]: ethtool ioctl error on tapfed803bf-31: No such device
Dec 5 05:13:38 localhost journal[228791]: ethtool ioctl error on tapfed803bf-31: No such device
Dec 5 05:13:38 localhost journal[228791]: ethtool ioctl error on tapfed803bf-31: No such device
Dec 5 05:13:38 localhost journal[228791]: ethtool ioctl error on tapfed803bf-31: No such device
Dec 5 05:13:38 localhost nova_compute[280228]: 2025-12-05 10:13:38.936 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:38 localhost nova_compute[280228]: 2025-12-05 10:13:38.966 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 30 KiB/s wr, 27 op/s
Dec 5 05:13:39 localhost nova_compute[280228]: 2025-12-05 10:13:39.816 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:39 localhost podman[321149]:
Dec 5 05:13:39 localhost podman[321149]: 2025-12-05 10:13:39.840330924 +0000 UTC m=+0.086171559 container create fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bc3eae2-e440-44d6-9e59-d821a26fc86b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 5 05:13:39 localhost systemd[1]: Started libpod-conmon-fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4.scope.
Dec 5 05:13:39 localhost systemd[1]: Started libcrun container.
Dec 5 05:13:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1615f6214a88925063869f56637817cf28613a82a3992da3bacacf4c29fe02f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:13:39 localhost podman[321149]: 2025-12-05 10:13:39.806846618 +0000 UTC m=+0.052687283 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:13:39 localhost podman[321149]: 2025-12-05 10:13:39.91204595 +0000 UTC m=+0.157886595 container init fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bc3eae2-e440-44d6-9e59-d821a26fc86b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 5 05:13:39 localhost podman[321149]: 2025-12-05 10:13:39.921091846 +0000 UTC m=+0.166932491 container start fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bc3eae2-e440-44d6-9e59-d821a26fc86b, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 5 05:13:39 localhost dnsmasq[321167]: started, version 2.85 cachesize 150
Dec 5 05:13:39 localhost dnsmasq[321167]: DNS service limited to local subnets
Dec 5 05:13:39 localhost dnsmasq[321167]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:13:39 localhost dnsmasq[321167]: warning: no upstream servers configured
Dec 5 05:13:39 localhost dnsmasq-dhcp[321167]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:13:39 localhost dnsmasq[321167]: read /var/lib/neutron/dhcp/5bc3eae2-e440-44d6-9e59-d821a26fc86b/addn_hosts - 0 addresses
Dec 5 05:13:39 localhost dnsmasq-dhcp[321167]: read /var/lib/neutron/dhcp/5bc3eae2-e440-44d6-9e59-d821a26fc86b/host
Dec 5 05:13:39 localhost dnsmasq-dhcp[321167]: read /var/lib/neutron/dhcp/5bc3eae2-e440-44d6-9e59-d821a26fc86b/opts
Dec 5 05:13:40 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:40.134 261902 INFO neutron.agent.dhcp.agent [None req-4f56f219-9544-4675-8bed-0afe9a6c0b0f - - - - - -] DHCP configuration for ports {'c3d7d2a0-bafb-49e5-9fdc-1e05fcc120be'} is completed
Dec 5 05:13:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:13:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v456: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 16 KiB/s wr, 2 op/s
Dec 5 05:13:42 localhost neutron_sriov_agent[254996]: 2025-12-05 10:13:42.250 2 INFO neutron.agent.securitygroups_rpc [None req-87213855-0084-4785-a016-e26749e5f546 44355f1bf7d041b79ae4db9ce4fe218d ecb85ff3c88d49d6b771a6e34a36ee4c - - default default] Security group member updated ['74a3c9d8-4ec2-42ae-8d2d-0e9d9384fe30']
Dec 5 05:13:42 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:42.501 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:41Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0f1cf428-600d-4573-bfaf-e7d3b92c232d, ip_allocation=immediate, mac_address=fa:16:3e:0c:e0:94, name=tempest-RoutersTest-816465122, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:13:36Z, description=, dns_domain=, id=5bc3eae2-e440-44d6-9e59-d821a26fc86b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1301432588, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5711, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2688, status=ACTIVE, subnets=['8eb08474-b4d2-4ac4-a435-e49fe1d0a764'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:37Z, vlan_transparent=None, network_id=5bc3eae2-e440-44d6-9e59-d821a26fc86b, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['74a3c9d8-4ec2-42ae-8d2d-0e9d9384fe30'], standard_attr_id=2724, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:42Z on network 5bc3eae2-e440-44d6-9e59-d821a26fc86b
Dec 5 05:13:42 localhost dnsmasq[321167]: read /var/lib/neutron/dhcp/5bc3eae2-e440-44d6-9e59-d821a26fc86b/addn_hosts - 1 addresses
Dec 5 05:13:42 localhost dnsmasq-dhcp[321167]: read /var/lib/neutron/dhcp/5bc3eae2-e440-44d6-9e59-d821a26fc86b/host
Dec 5 05:13:42 localhost dnsmasq-dhcp[321167]: read /var/lib/neutron/dhcp/5bc3eae2-e440-44d6-9e59-d821a26fc86b/opts
Dec 5 05:13:42 localhost podman[321185]: 2025-12-05 10:13:42.704451898 +0000 UTC m=+0.049776085 container kill fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bc3eae2-e440-44d6-9e59-d821a26fc86b, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 5 05:13:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:43.009 261902 INFO neutron.agent.dhcp.agent [None req-12667228-e666-4054-a050-5d4ae487b594 - - - - - -] DHCP configuration for ports {'0f1cf428-600d-4573-bfaf-e7d3b92c232d'} is completed
Dec 5 05:13:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 16 KiB/s wr, 1 op/s
Dec 5 05:13:43 localhost nova_compute[280228]: 2025-12-05 10:13:43.738 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:44 localhost ovn_controller[153000]: 2025-12-05T10:13:44Z|00337|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:13:44 localhost nova_compute[280228]: 2025-12-05 10:13:44.810 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:44 localhost nova_compute[280228]: 2025-12-05 10:13:44.818 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:13:45
Dec 5 05:13:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 5 05:13:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap
Dec 5 05:13:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['images', 'manila_data', 'volumes', 'manila_metadata', '.mgr', 'backups', 'vms']
Dec 5 05:13:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes
Dec 5 05:13:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:13:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:13:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:13:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:13:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:13:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:13:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 195 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 16 KiB/s wr, 1 op/s
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014852910385259631 of space, bias 1.0, pg target 0.29656311069235064 quantized to 32 (current 32)
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:13:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00025345550088404987 of space, bias 4.0, pg target 0.2017505787037037 quantized to 16 (current 16)
Dec 5 05:13:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 5 05:13:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:13:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 5 05:13:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:13:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:13:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:13:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:13:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:13:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:13:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:13:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:13:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e186 do_prune osdmap full prune enabled
Dec 5 05:13:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e187 e187: 6 total, 6 up, 6 in
Dec 5 05:13:45 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e187: 6 total, 6 up, 6 in
Dec 5 05:13:46 localhost podman[321222]: 2025-12-05 10:13:46.086913072 +0000 UTC m=+0.065903748 container kill 52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bfd3442-9afd-4211-8b18-aa6d09646799, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 5 05:13:46 localhost dnsmasq[320552]: read /var/lib/neutron/dhcp/2bfd3442-9afd-4211-8b18-aa6d09646799/addn_hosts - 0 addresses
Dec 5 05:13:46 localhost dnsmasq-dhcp[320552]: read /var/lib/neutron/dhcp/2bfd3442-9afd-4211-8b18-aa6d09646799/host
Dec 5 05:13:46 localhost dnsmasq-dhcp[320552]: read /var/lib/neutron/dhcp/2bfd3442-9afd-4211-8b18-aa6d09646799/opts
Dec 5 05:13:46 localhost nova_compute[280228]: 2025-12-05 10:13:46.279 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:46 localhost ovn_controller[153000]: 2025-12-05T10:13:46Z|00338|binding|INFO|Releasing lport eac32ec1-842e-49cc-a28e-3a1959ae92a7 from this chassis (sb_readonly=0)
Dec 5 05:13:46 localhost ovn_controller[153000]: 2025-12-05T10:13:46Z|00339|binding|INFO|Setting lport eac32ec1-842e-49cc-a28e-3a1959ae92a7 down in Southbound
Dec 5 05:13:46 localhost kernel: device tapeac32ec1-84 left promiscuous mode
Dec 5 05:13:46 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:46.292 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-2bfd3442-9afd-4211-8b18-aa6d09646799', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bfd3442-9afd-4211-8b18-aa6d09646799', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc2806be-6029-46c5-8fbe-8ef6f0f90cb2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=eac32ec1-842e-49cc-a28e-3a1959ae92a7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:13:46 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:46.294 158820 INFO neutron.agent.ovn.metadata.agent [-] Port eac32ec1-842e-49cc-a28e-3a1959ae92a7 in datapath 2bfd3442-9afd-4211-8b18-aa6d09646799 unbound from our chassis
Dec 5 05:13:46 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:46.297 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2bfd3442-9afd-4211-8b18-aa6d09646799 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:13:46 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:46.297 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[b2878402-0967-4817-955b-aed32db6b2f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:13:46 localhost nova_compute[280228]: 2025-12-05 10:13:46.298 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:47 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:47.104 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:13:41Z, description=, device_id=c20dac33-283f-45e5-9561-6a1c7919610c, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0f1cf428-600d-4573-bfaf-e7d3b92c232d, ip_allocation=immediate, mac_address=fa:16:3e:0c:e0:94, name=tempest-RoutersTest-816465122, network_id=5bc3eae2-e440-44d6-9e59-d821a26fc86b, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['74a3c9d8-4ec2-42ae-8d2d-0e9d9384fe30'], standard_attr_id=2724, status=ACTIVE, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:13:44Z on network 5bc3eae2-e440-44d6-9e59-d821a26fc86b
Dec 5 05:13:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v460: 177 pgs: 177 active+clean; 195 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 8.4 KiB/s rd, 9.1 KiB/s wr, 12 op/s
Dec 5 05:13:47 localhost podman[321277]: 2025-12-05 10:13:47.548588346 +0000 UTC m=+0.050898889 container kill 52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bfd3442-9afd-4211-8b18-aa6d09646799, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:13:47 localhost dnsmasq[320552]: exiting on receipt of SIGTERM
Dec 5 05:13:47 localhost systemd[1]: libpod-52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf.scope: Deactivated successfully.
Dec 5 05:13:47 localhost dnsmasq[321167]: read /var/lib/neutron/dhcp/5bc3eae2-e440-44d6-9e59-d821a26fc86b/addn_hosts - 1 addresses
Dec 5 05:13:47 localhost dnsmasq-dhcp[321167]: read /var/lib/neutron/dhcp/5bc3eae2-e440-44d6-9e59-d821a26fc86b/host
Dec 5 05:13:47 localhost dnsmasq-dhcp[321167]: read /var/lib/neutron/dhcp/5bc3eae2-e440-44d6-9e59-d821a26fc86b/opts
Dec 5 05:13:47 localhost podman[321284]: 2025-12-05 10:13:47.583775033 +0000 UTC m=+0.070463129 container kill fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bc3eae2-e440-44d6-9e59-d821a26fc86b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:13:47 localhost podman[321303]: 2025-12-05 10:13:47.62317151 +0000 UTC m=+0.055218032 container died 52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bfd3442-9afd-4211-8b18-aa6d09646799, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 5 05:13:47 localhost systemd[1]: tmp-crun.Ru853E.mount: Deactivated successfully.
Dec 5 05:13:47 localhost podman[321303]: 2025-12-05 10:13:47.657834261 +0000 UTC m=+0.089880713 container cleanup 52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bfd3442-9afd-4211-8b18-aa6d09646799, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:13:47 localhost systemd[1]: libpod-conmon-52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf.scope: Deactivated successfully.
Dec 5 05:13:47 localhost podman[321305]: 2025-12-05 10:13:47.710505174 +0000 UTC m=+0.132115726 container remove 52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2bfd3442-9afd-4211-8b18-aa6d09646799, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 5 05:13:47 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:47.896 261902 INFO neutron.agent.dhcp.agent [None req-d295e067-ecda-4ae2-b764-11d1c00dba3f - - - - - -] DHCP configuration for ports {'0f1cf428-600d-4573-bfaf-e7d3b92c232d'} is completed
Dec 5 05:13:47 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:13:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:13:48 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta.tmp'
Dec 5 05:13:48 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta.tmp' to config b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta'
Dec 5 05:13:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:13:48 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "format": "json"}]: dispatch
Dec 5 05:13:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:13:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:13:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:13:48 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:13:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:48.065 261902 INFO neutron.agent.dhcp.agent [None req-59df7925-637a-4c12-ae1f-65f0ca93972a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:13:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:48.066 261902 INFO neutron.agent.dhcp.agent [None req-59df7925-637a-4c12-ae1f-65f0ca93972a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:13:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:13:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:13:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:13:48 localhost podman[321341]: 2025-12-05 10:13:48.520739021 +0000 UTC m=+0.153413608 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:13:48 localhost podman[321341]: 2025-12-05 10:13:48.531393227 +0000 UTC m=+0.164067804 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 5 05:13:48 localhost systemd[1]: var-lib-containers-storage-overlay-bf3b21ca4403adf2108940b553ec1b91860e530bc18d9cd853ff9e2e988f8118-merged.mount: Deactivated successfully.
Dec 5 05:13:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-52e04fb621d6ab3f95139e4813a007abd9f8176e817238c155f0ec7984c04fdf-userdata-shm.mount: Deactivated successfully.
Dec 5 05:13:48 localhost systemd[1]: run-netns-qdhcp\x2d2bfd3442\x2d9afd\x2d4211\x2d8b18\x2daa6d09646799.mount: Deactivated successfully.
Dec 5 05:13:48 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:13:48 localhost podman[321343]: 2025-12-05 10:13:48.484410529 +0000 UTC m=+0.106445361 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true)
Dec 5 05:13:48 localhost podman[321343]: 2025-12-05 10:13:48.614435009 +0000 UTC m=+0.236469821 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 5 05:13:48 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:13:48 localhost podman[321342]: 2025-12-05 10:13:48.620458025 +0000 UTC m=+0.250578004 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:13:48 localhost podman[321342]: 2025-12-05 10:13:48.700843055 +0000 UTC m=+0.330963054 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 5 05:13:48 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:13:48 localhost nova_compute[280228]: 2025-12-05 10:13:48.758 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:48.793 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:13:49 localhost neutron_sriov_agent[254996]: 2025-12-05 10:13:49.284 2 INFO neutron.agent.securitygroups_rpc [None req-71635bc4-7947-4b62-9354-b7ffa3a8d0a7 44355f1bf7d041b79ae4db9ce4fe218d ecb85ff3c88d49d6b771a6e34a36ee4c - - default default] Security group member updated ['74a3c9d8-4ec2-42ae-8d2d-0e9d9384fe30']
Dec 5 05:13:49 localhost ovn_controller[153000]: 2025-12-05T10:13:49Z|00340|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:13:49 localhost nova_compute[280228]: 2025-12-05 10:13:49.343 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 195 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 8.4 KiB/s rd, 9.1 KiB/s wr, 12 op/s
Dec 5 05:13:49 localhost dnsmasq[321167]: read /var/lib/neutron/dhcp/5bc3eae2-e440-44d6-9e59-d821a26fc86b/addn_hosts - 0 addresses
Dec 5 05:13:49 localhost dnsmasq-dhcp[321167]: read /var/lib/neutron/dhcp/5bc3eae2-e440-44d6-9e59-d821a26fc86b/host
Dec 5 05:13:49 localhost podman[321417]: 2025-12-05 10:13:49.610779636 +0000 UTC m=+0.065048643 container kill fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bc3eae2-e440-44d6-9e59-d821a26fc86b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 5 05:13:49 localhost dnsmasq-dhcp[321167]: read /var/lib/neutron/dhcp/5bc3eae2-e440-44d6-9e59-d821a26fc86b/opts
Dec 5 05:13:49 localhost nova_compute[280228]: 2025-12-05 10:13:49.846 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:49 localhost podman[239519]: time="2025-12-05T10:13:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:13:49 localhost podman[239519]: @ - - [05/Dec/2025:10:13:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159752 "" "Go-http-client/1.1"
Dec 5 05:13:49 localhost podman[239519]: @ - - [05/Dec/2025:10:13:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20217 "" "Go-http-client/1.1"
Dec 5 05:13:50 localhost nova_compute[280228]: 2025-12-05 10:13:50.737 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:50 localhost ovn_controller[153000]: 2025-12-05T10:13:50Z|00341|binding|INFO|Releasing lport fed803bf-310c-4e10-92a4-0abe69409477 from this chassis (sb_readonly=0)
Dec 5 05:13:50 localhost ovn_controller[153000]: 2025-12-05T10:13:50Z|00342|binding|INFO|Setting lport fed803bf-310c-4e10-92a4-0abe69409477 down in Southbound
Dec 5 05:13:50 localhost kernel: device tapfed803bf-31 left promiscuous mode
Dec 5 05:13:50 localhost nova_compute[280228]: 2025-12-05 10:13:50.766 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:13:51 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:51.044 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-5bc3eae2-e440-44d6-9e59-d821a26fc86b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5bc3eae2-e440-44d6-9e59-d821a26fc86b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09003135-ef32-48fd-9729-1b91ee2900fb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fed803bf-310c-4e10-92a4-0abe69409477) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:13:51 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:51.046 158820 INFO neutron.agent.ovn.metadata.agent [-] Port fed803bf-310c-4e10-92a4-0abe69409477 in datapath 5bc3eae2-e440-44d6-9e59-d821a26fc86b unbound from our chassis
Dec 5 05:13:51 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:51.048 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5bc3eae2-e440-44d6-9e59-d821a26fc86b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:13:51 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:51.049 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[68b5e9f9-e302-416c-8fc8-f6617df49961]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:13:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v462: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 12 KiB/s wr, 35 op/s
Dec 5 05:13:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "snap_name": "1adf68f6-3438-4b98-a816-a549a3420ad9", "format": "json"}]: dispatch
Dec 5 05:13:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1adf68f6-3438-4b98-a816-a549a3420ad9, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:13:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1adf68f6-3438-4b98-a816-a549a3420ad9, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:13:53 localhost dnsmasq[320308]: read /var/lib/neutron/dhcp/83d53f2e-0ab1-46e4-96eb-f037249fb0f6/addn_hosts - 0 addresses
Dec 5 05:13:53 localhost dnsmasq-dhcp[320308]: read /var/lib/neutron/dhcp/83d53f2e-0ab1-46e4-96eb-f037249fb0f6/host
Dec 5 05:13:53 localhost dnsmasq-dhcp[320308]: read /var/lib/neutron/dhcp/83d53f2e-0ab1-46e4-96eb-f037249fb0f6/opts
Dec 5 05:13:53 localhost podman[321458]: 2025-12-05 10:13:53.281481874 +0000 UTC m=+0.064141525 container kill 4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83d53f2e-0ab1-46e4-96eb-f037249fb0f6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:13:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v463: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 12 KiB/s wr, 35 op/s
Dec 5 05:13:53 localhost ovn_controller[153000]: 2025-12-05T10:13:53Z|00343|binding|INFO|Releasing lport 30c52cca-3eb1-429c-b9d7-c0d9ea12afab from this chassis (sb_readonly=0)
Dec 5 05:13:53 localhost kernel: device tap30c52cca-3e left promiscuous mode
Dec 5 05:13:53 localhost ovn_controller[153000]: 2025-12-05T10:13:53Z|00344|binding|INFO|Setting lport 30c52cca-3eb1-429c-b9d7-c0d9ea12afab down in Southbound
Dec 5 05:13:53 localhost nova_compute[280228]: 2025-12-05 10:13:53.516 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:53 localhost nova_compute[280228]: 2025-12-05 10:13:53.543 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:53 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:53.744 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-83d53f2e-0ab1-46e4-96eb-f037249fb0f6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83d53f2e-0ab1-46e4-96eb-f037249fb0f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d15dccf4c864d558d055b0c7cd1cccc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3eaffe03-865a-40fb-8965-3f8439694159, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=30c52cca-3eb1-429c-b9d7-c0d9ea12afab) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:13:53 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:53.745 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 30c52cca-3eb1-429c-b9d7-c0d9ea12afab in datapath 83d53f2e-0ab1-46e4-96eb-f037249fb0f6 unbound from our chassis
Dec 5 05:13:53 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:53.746 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 83d53f2e-0ab1-46e4-96eb-f037249fb0f6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:13:53 localhost ovn_metadata_agent[158815]: 2025-12-05 10:13:53.746 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[56e1cede-7140-4e92-a010-0cce1e0aa6b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:13:53 localhost nova_compute[280228]: 2025-12-05 10:13:53.761 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:13:54 localhost dnsmasq[321167]: exiting on receipt of SIGTERM
Dec 5 05:13:54 localhost podman[321497]: 2025-12-05 10:13:54.043355051 +0000 UTC m=+0.060329267 container kill fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bc3eae2-e440-44d6-9e59-d821a26fc86b, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 5 05:13:54 localhost systemd[1]: libpod-fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4.scope: Deactivated successfully.
Dec 5 05:13:54 localhost podman[321511]: 2025-12-05 10:13:54.118579805 +0000 UTC m=+0.059406010 container died fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bc3eae2-e440-44d6-9e59-d821a26fc86b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:13:54 localhost podman[321511]: 2025-12-05 10:13:54.155150164 +0000 UTC m=+0.095976329 container cleanup fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bc3eae2-e440-44d6-9e59-d821a26fc86b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 5 05:13:54 localhost systemd[1]: libpod-conmon-fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4.scope: Deactivated successfully.
Dec 5 05:13:54 localhost podman[321512]: 2025-12-05 10:13:54.194607542 +0000 UTC m=+0.129135854 container remove fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5bc3eae2-e440-44d6-9e59-d821a26fc86b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 5 05:13:54 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:54.239 261902 INFO neutron.agent.dhcp.agent [None req-727052b8-fc3d-4176-a6f9-f685754ee76f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 5 05:13:54 localhost systemd[1]: var-lib-containers-storage-overlay-a1615f6214a88925063869f56637817cf28613a82a3992da3bacacf4c29fe02f-merged.mount: Deactivated successfully.
Dec 5 05:13:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa020ba9f1f961d189e1caa96c4bf0332c16f14360c27a05956ff1b576c042c4-userdata-shm.mount: Deactivated successfully.
Dec 5 05:13:54 localhost systemd[1]: run-netns-qdhcp\x2d5bc3eae2\x2de440\x2d44d6\x2d9e59\x2dd821a26fc86b.mount: Deactivated successfully.
Dec 5 05:13:54 localhost podman[321555]: 2025-12-05 10:13:54.424945156 +0000 UTC m=+0.060623598 container kill 4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83d53f2e-0ab1-46e4-96eb-f037249fb0f6, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 5 05:13:54 localhost dnsmasq[320308]: exiting on receipt of SIGTERM
Dec 5 05:13:54 localhost systemd[1]: libpod-4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1.scope: Deactivated successfully.
Dec 5 05:13:54 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:54.489 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:13:54 localhost podman[321570]: 2025-12-05 10:13:54.502378066 +0000 UTC m=+0.055892863 container died 4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83d53f2e-0ab1-46e4-96eb-f037249fb0f6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 5 05:13:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1-userdata-shm.mount: Deactivated successfully.
Dec 5 05:13:54 localhost podman[321570]: 2025-12-05 10:13:54.546230119 +0000 UTC m=+0.099744876 container remove 4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-83d53f2e-0ab1-46e4-96eb-f037249fb0f6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 5 05:13:54 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:54.592 261902 INFO neutron.agent.dhcp.agent [None req-a21e3082-a867-4cf6-afb1-82dbb412fff8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:13:54 localhost systemd[1]: libpod-conmon-4c89c6ab1ca2ff24f354d80dd9904d5984bf447d6c1d0391831574555944e9f1.scope: Deactivated successfully.
Dec 5 05:13:54 localhost ovn_controller[153000]: 2025-12-05T10:13:54Z|00345|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:13:54 localhost nova_compute[280228]: 2025-12-05 10:13:54.881 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:13:54 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:13:54.917 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:13:55 localhost systemd[1]: var-lib-containers-storage-overlay-5000aeb28f302d7fab8313f0cbb340a1a8057c6efc78afef3ecb78b8b0d71129-merged.mount: Deactivated successfully.
Dec 5 05:13:55 localhost systemd[1]: run-netns-qdhcp\x2d83d53f2e\x2d0ab1\x2d46e4\x2d96eb\x2df037249fb0f6.mount: Deactivated successfully.
Dec 5 05:13:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 12 KiB/s wr, 35 op/s
Dec 5 05:13:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e187 do_prune osdmap full prune enabled
Dec 5 05:13:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e188 e188: 6 total, 6 up, 6 in
Dec 5 05:13:55 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e188: 6 total, 6 up, 6 in
Dec 5 05:13:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:13:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:13:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 05:13:56 localhost podman[321596]: 2025-12-05 10:13:56.208783032 +0000 UTC m=+0.086644534 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:13:56 localhost podman[321597]: 2025-12-05 10:13:56.285160161 +0000 UTC m=+0.160159426 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 05:13:56 localhost podman[321596]: 2025-12-05 10:13:56.293600399 +0000 UTC m=+0.171461901 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 5 05:13:56 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:13:56 localhost podman[321597]: 2025-12-05 10:13:56.321660349 +0000 UTC m=+0.196659584 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 05:13:56 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 05:13:56 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "snap_name": "6db860ce-42d9-4edb-b0ff-07bd3a36139e", "format": "json"}]: dispatch
Dec 5 05:13:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6db860ce-42d9-4edb-b0ff-07bd3a36139e, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:13:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6db860ce-42d9-4edb-b0ff-07bd3a36139e, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:13:57 localhost openstack_network_exporter[241668]: ERROR 10:13:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:13:57 localhost openstack_network_exporter[241668]: ERROR 10:13:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:13:57 localhost openstack_network_exporter[241668]: ERROR 10:13:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:13:57 localhost openstack_network_exporter[241668]: ERROR 10:13:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:13:57 localhost openstack_network_exporter[241668]:
Dec 5 05:13:57 localhost openstack_network_exporter[241668]: ERROR 10:13:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:13:57 localhost openstack_network_exporter[241668]:
Dec 5 05:13:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 12 KiB/s wr, 33 op/s
Dec 5 05:13:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e188 do_prune osdmap full prune enabled
Dec 5 05:13:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e189 e189: 6 total, 6 up, 6 in
Dec 5 05:13:57 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e189: 6 total, 6 up, 6 in
Dec 5 05:13:58 localhost nova_compute[280228]: 2025-12-05 10:13:58.764 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:13:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 195 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 8.6 KiB/s rd, 7.1 KiB/s wr, 13 op/s
Dec 5 05:13:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e189 do_prune osdmap full prune enabled
Dec 5 05:13:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e190 e190: 6 total, 6 up, 6 in
Dec 5 05:13:59 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e190: 6 total, 6 up, 6 in
Dec 5 05:13:59 localhost nova_compute[280228]: 2025-12-05 10:13:59.884 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:14:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e190 do_prune osdmap full prune enabled
Dec 5 05:14:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e191 e191: 6 total, 6 up, 6 in
Dec 5 05:14:00 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e191: 6 total, 6 up, 6 in
Dec 5 05:14:00 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "snap_name": "6db860ce-42d9-4edb-b0ff-07bd3a36139e_3724db05-5f8c-43ab-b696-20bf24668fa6", "force": true, "format": "json"}]: dispatch
Dec 5 05:14:00 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6db860ce-42d9-4edb-b0ff-07bd3a36139e_3724db05-5f8c-43ab-b696-20bf24668fa6, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:14:00 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta.tmp'
Dec 5 05:14:00 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta.tmp' to config b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta'
Dec 5 05:14:00 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6db860ce-42d9-4edb-b0ff-07bd3a36139e_3724db05-5f8c-43ab-b696-20bf24668fa6, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:14:00 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "snap_name": "6db860ce-42d9-4edb-b0ff-07bd3a36139e", "force": true, "format": "json"}]: dispatch
Dec 5 05:14:00 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6db860ce-42d9-4edb-b0ff-07bd3a36139e, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:14:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:14:00 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta.tmp'
Dec 5 05:14:00 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta.tmp' to config b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta'
Dec 5 05:14:00 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6db860ce-42d9-4edb-b0ff-07bd3a36139e, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:14:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v471: 177 pgs: 177 active+clean; 196 MiB data, 995 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 38 KiB/s wr, 78 op/s
Dec 5 05:14:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:14:01 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/542252184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:14:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:14:01 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/542252184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:14:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 196 MiB data, 996 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 28 KiB/s wr, 62 op/s
Dec 5 05:14:03 localhost nova_compute[280228]: 2025-12-05 10:14:03.769 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:14:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:03.920 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:14:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:03.920 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:14:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:03.921 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:14:04 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "snap_name": "1adf68f6-3438-4b98-a816-a549a3420ad9_7a19f9f1-6dc0-4639-89b3-f05e8a347b46", "force": true, "format": "json"}]: dispatch
Dec 5 05:14:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1adf68f6-3438-4b98-a816-a549a3420ad9_7a19f9f1-6dc0-4639-89b3-f05e8a347b46, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:14:04 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta.tmp'
Dec 5 05:14:04 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta.tmp' to config b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta'
Dec 5 05:14:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1adf68f6-3438-4b98-a816-a549a3420ad9_7a19f9f1-6dc0-4639-89b3-f05e8a347b46, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:14:04 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "snap_name": "1adf68f6-3438-4b98-a816-a549a3420ad9", "force": true, "format": "json"}]: dispatch
Dec 5 05:14:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1adf68f6-3438-4b98-a816-a549a3420ad9, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:14:04 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta.tmp'
Dec 5 05:14:04 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta.tmp' to config b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b/.meta'
Dec 5 05:14:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1adf68f6-3438-4b98-a816-a549a3420ad9, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:14:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e191 do_prune osdmap full prune enabled
Dec 5 05:14:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e192 e192: 6 total, 6 up, 6 in
Dec 5 05:14:04 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e192: 6 total, 6 up, 6 in
Dec 5 05:14:04 localhost nova_compute[280228]: 2025-12-05 10:14:04.920 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:14:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 196 MiB data, 996 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 28 KiB/s wr, 62 op/s
Dec 5 05:14:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e192 do_prune osdmap full prune enabled
Dec 5 05:14:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e193 e193: 6 total, 6 up, 6 in
Dec 5 05:14:05 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e193: 6 total, 6 up, 6 in
Dec 5 05:14:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:14:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e193 do_prune osdmap full prune enabled
Dec 5 05:14:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e194 e194: 6 total, 6 up, 6 in
Dec 5 05:14:05 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e194: 6 total, 6 up, 6 in
Dec 5 05:14:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 196 MiB data, 996 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 25 KiB/s wr, 43 op/s
Dec 5 05:14:07 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "format": "json"}]: dispatch
Dec 5 05:14:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3c225fb1-348e-4898-b9d5-58a36c40826b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:14:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3c225fb1-348e-4898-b9d5-58a36c40826b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:14:07 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:14:07.507+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3c225fb1-348e-4898-b9d5-58a36c40826b' of type subvolume
Dec 5 05:14:07 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3c225fb1-348e-4898-b9d5-58a36c40826b' of type subvolume
Dec 5 05:14:07 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3c225fb1-348e-4898-b9d5-58a36c40826b", "force": true, "format": "json"}]: dispatch
Dec 5 05:14:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:14:07 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3c225fb1-348e-4898-b9d5-58a36c40826b'' moved to trashcan
Dec 5 05:14:07 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:14:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3c225fb1-348e-4898-b9d5-58a36c40826b, vol_name:cephfs) < ""
Dec 5 05:14:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:14:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:14:08 localhost podman[321646]: 2025-12-05 10:14:08.219702942 +0000 UTC m=+0.099476457 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64)
Dec 5 05:14:08 localhost podman[321645]: 2025-12-05 10:14:08.190100756 +0000 UTC m=+0.077792793 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 5 05:14:08 localhost podman[321646]: 2025-12-05 10:14:08.262774681 +0000 UTC m=+0.142548166 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350)
Dec 5 05:14:08 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:14:08 localhost podman[321645]: 2025-12-05 10:14:08.32054859 +0000 UTC m=+0.208240667 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:14:08 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:14:08 localhost nova_compute[280228]: 2025-12-05 10:14:08.770 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 196 MiB data, 996 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 25 KiB/s wr, 41 op/s Dec 5 05:14:09 localhost nova_compute[280228]: 2025-12-05 10:14:09.557 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:09 localhost nova_compute[280228]: 2025-12-05 10:14:09.924 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:14:10 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:14:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:14:10 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:14:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:14:10 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:14:10 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 83242e52-4c01-4d56-8a67-169c3615c4fb (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:14:10 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 83242e52-4c01-4d56-8a67-169c3615c4fb (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:14:10 localhost ceph-mgr[286454]: [progress INFO root] Completed event 83242e52-4c01-4d56-8a67-169c3615c4fb (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:14:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:14:10 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:14:10 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:14:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:14:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e194 do_prune osdmap full prune enabled Dec 5 05:14:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e195 e195: 6 total, 6 up, 6 in Dec 5 05:14:10 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e195: 6 total, 6 up, 6 in Dec 5 05:14:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : 
pgmap v480: 177 pgs: 177 active+clean; 196 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 75 KiB/s rd, 51 KiB/s wr, 104 op/s Dec 5 05:14:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e195 do_prune osdmap full prune enabled Dec 5 05:14:12 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:14:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e196 e196: 6 total, 6 up, 6 in Dec 5 05:14:12 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e196: 6 total, 6 up, 6 in Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.953 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.954 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.954 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.959 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a78be435-7def-4c01-a986-4fdf1c22a233', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:14:12.955057', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '25aff328-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.129374913, 'message_signature': 'c1b8f5c0c67de6ac7380fca5920b5738fa218bd25d9904a98c84c11f5449c9b2'}]}, 'timestamp': '2025-12-05 10:14:12.960453', '_unique_id': 'a2bb59ad069741c5834c0e105e692bbb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.961 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.963 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.975 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.975 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
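Every failed publish in this stretch follows the same pattern: the pollsters keep producing samples normally, and each attempt to hand them to oslo.messaging dies in the AMQP socket connect with errno 111 (ECONNREFUSED), i.e. nothing is listening on the broker endpoint at all. A minimal stdlib sketch of the same reachability check, assuming the broker is RabbitMQ on the default port 5672 (the actual transport_url is not shown in this excerpt):

    import errno
    import socket

    def broker_reachable(host="localhost", port=5672, timeout=2.0):
        # ECONNREFUSED (errno 111) is exactly the condition in the
        # tracebacks here: the TCP handshake is rejected because no
        # process is bound to the port.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:
            if exc.errno == errno.ECONNREFUSED:
                return False
            raise  # timeouts and DNS errors are different failures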
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4563ab1a-430d-41f4-aa0f-9103a3f547b5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:14:12.963454', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '25b2528a-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.137799331, 'message_signature': '9fdb861938bf6e3d38e7009595e578b8d7af7cc800dcb4f18dca8eafced15660'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:14:12.963454', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '25b265ae-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.137799331, 'message_signature': 'a8078d95d532a75994295d3cd4aee0ca96606969b686f9da65d6fb992a876957'}]}, 'timestamp': '2025-12-05 10:14:12.976530', '_unique_id': 'c0a9cd3074674238b73916acf5a0affa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.977 12 ERROR oslo_messaging.notify.messaging
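The two-part tracebacks above are ordinary Python exception chaining: kombu catches the low-level ConnectionRefusedError and re-raises its own library error with `raise ... from exc`, which is what the log renders as "The above exception was the direct cause of the following exception:". A self-contained sketch of that mechanism (the class below is a stand-in, not kombu's real implementation):

    class OperationalError(Exception):
        """Stand-in for kombu.exceptions.OperationalError."""

    def establish_connection():
        # Plays the role of amqp's transport.connect() against a dead broker.
        raise ConnectionRefusedError(111, "Connection refused")

    def ensure_connection():
        try:
            establish_connection()
        except ConnectionRefusedError as exc:
            # "from exc" sets __cause__ on the new exception; the traceback
            # printer then emits both tracebacks joined by the
            # "direct cause" line seen throughout this log.
            raise OperationalError(str(exc)) from exc

Calling ensure_connection() and printing the resulting traceback reproduces the same two-part structure, down to a final "OperationalError: [Errno 111] Connection refused" line.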
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:14:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.980 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.981 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.981 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a58f44e2-4397-458d-afca-43a1937cde19', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:14:12.981857', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '25b34c8a-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.129374913, 'message_signature': 'b1459b3101c8cccd2835c94357457afcec758d7e13f6ab248e65f9ecfd475642'}]}, 'timestamp': '2025-12-05 10:14:12.982367', '_unique_id': '880172a15c504418aa400baa33781b95'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File 
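Every one of these tracebacks passes through kombu.utils.functional.retry_over_time, the helper that re-dials the broker with a growing sleep between attempts before giving up and letting the error escape to the notifier. A simplified sketch of that retry shape; the argument names and defaults here are illustrative, not kombu's exact signature:

    import time

    def retry_over_time(fun, catch, max_retries=3, interval_start=1.0,
                        interval_step=2.0, interval_max=30.0):
        # Call fun() until it succeeds, sleeping progressively longer
        # after each failure; re-raise once retries are exhausted.
        interval = interval_start
        for attempt in range(max_retries + 1):
            try:
                return fun()
            except catch:
                if attempt == max_retries:
                    raise  # caller sees the final ConnectionRefusedError
                time.sleep(interval)
                interval = min(interval + interval_step, interval_max)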
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:14:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.983 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.984 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.013 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.014 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.016 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9e00aabd-a43b-4aaa-bc7d-8e808dd8c1d0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:14:12.984559', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '25b82dfe-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.158878377, 'message_signature': 'a07d3b29f1559e3dba7c7827fbc2f6c53b325000048f06103429242a22106a8b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:14:12.984559', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '25b841fe-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.158878377, 'message_signature': '47c5b7d5de35317864ab629f02c7d8c9f8af2cf09eb87708006540f89b11c149'}]}, 'timestamp': '2025-12-05 10:14:13.014787', '_unique_id': 'ebc5701faf964c6489c850af669f8a43'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.016 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.016 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.016 
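Each dropped notification is a telemetry.polling envelope whose payload['samples'] list holds one sample per measured resource: one per disk device (…-vda, …-vdb) for the disk meters, one per vNIC for the network meters. A small hypothetical helper for flattening the envelopes logged above:

    def iter_samples(notification):
        # 'notification' is one Payload dict as printed in this log.
        for s in notification["payload"]["samples"]:
            yield (s["resource_id"], s["counter_name"], s["counter_type"],
                   s["counter_unit"], s["counter_volume"])

    # e.g. the disk.device.capacity envelope above yields
    # ('96a47a1c-...-vda', 'disk.device.capacity', 'gauge', 'B', 1073741824)
    # followed by the matching -vdb tuple.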
Dec 5 05:14:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:12.984 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.013 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.014 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.016 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9e00aabd-a43b-4aaa-bc7d-8e808dd8c1d0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:14:12.984559', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '25b82dfe-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.158878377, 'message_signature': 'a07d3b29f1559e3dba7c7827fbc2f6c53b325000048f06103429242a22106a8b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:14:12.984559', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '25b841fe-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.158878377, 'message_signature': '47c5b7d5de35317864ab629f02c7d8c9f8af2cf09eb87708006540f89b11c149'}]}, 'timestamp': '2025-12-05 10:14:13.014787', '_unique_id': 'ebc5701faf964c6489c850af669f8a43'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
[chained traceback at 10:14:13.016 omitted: identical to the 10:14:12.977 traceback above, again ending in kombu.exceptions.OperationalError: [Errno 111] Connection refused]
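Note the counter_type differences in these payloads: disk.device.capacity is a 'gauge' (a point-in-time value), the network meters are 'delta' counters (change since the last poll), and disk.device.write.bytes / read.bytes are 'cumulative' totals, so a consumer has to difference successive samples to get a rate. A sketch of that computation, under the assumption that samples arrive as (timestamp, volume) pairs:

    def rate_from_cumulative(prev, curr):
        # prev, curr: (unix_timestamp, counter_volume) from two successive
        # cumulative samples, e.g. disk.device.write.bytes for vda.
        (t0, v0), (t1, v1) = prev, curr
        if t1 <= t0:
            raise ValueError("samples out of order")
        if v1 < v0:
            # Counter went backwards (instance restart or device reset);
            # a collector would normally discard this interval.
            return None
        return (v1 - v0) / (t1 - t0)  # bytes per second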
10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.019 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.020 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.020 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.021 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
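[Editor's note] The outer frames of these tracebacks (oslo_messaging/notify/messaging.py:78 notify, transport.py:134, amqpdriver.py:694) show an oslo.messaging Notifier publishing SAMPLE-priority notifications to the "notifications" topic; the payload of the record above continues below. A sketch of how such a notification is emitted; publisher_id and event_type mirror the payloads in this log, while the transport URL and driver choice are illustrative assumptions:

    from oslo_config import cfg
    import oslo_messaging

    # Hypothetical transport URL; the real one comes from the agent's config file.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    notifier = oslo_messaging.Notifier(
        transport,
        publisher_id='ceilometer.polling',  # matches publisher_id in the payloads
        driver='messagingv2',               # assumed driver, not shown in this log
        topics=['notifications'])           # the topic named in the error records

    # Notifier.sample() produces the 'priority': 'SAMPLE' messages seen here.
    notifier.sample({}, 'telemetry.polling', {'samples': []})

Note that the driver catches the send failure itself and logs "Could not send notification to notifications" rather than propagating it, which is consistent with the agent continuing to poll after every failure in this section.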
Payload={'message_id': 'd39653c2-bb93-4216-941e-a64ec06c2dbe', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:14:13.020618', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '25b93654-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.158878377, 'message_signature': 'a24d4ec5f158de8447d158f51fd63b692b35828b4ec37d09f8bed581f64df1cd'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:14:13.020618', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '25b945d6-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.158878377, 'message_signature': 'd011a1b69e56eea6d7115882273652a270e3e45070c71735a38026d43454fa0f'}]}, 'timestamp': '2025-12-05 10:14:13.021459', '_unique_id': 'a66aaff894994e54b2162b30390a73e6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.022 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.023 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.023 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
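[Editor's note] [Errno 111] Connection refused means the TCP connection reached a host but nothing was listening on the target port, failing at the innermost frame of these tracebacks (amqp/transport.py:184, self.sock.connect(sa)); the payload of the record above continues below. A stdlib probe for that condition, assuming the default AMQP port 5672 since neither the broker host nor port appears in this log:

    import socket

    BROKER = ('localhost', 5672)  # assumed; the log does not show the broker address

    try:
        # Same failure mode as self.sock.connect(sa) in the traceback.
        with socket.create_connection(BROKER, timeout=5):
            print('broker port is accepting connections')
    except ConnectionRefusedError as exc:
        print(f'nothing listening on {BROKER}: {exc}')  # [Errno 111]
    except socket.timeout:
        print('host unreachable or filtered (a different failure than Errno 111)')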
Payload={'message_id': '145e54dc-102a-48a4-8776-804a6948601c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:14:13.023677', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '25b9ae2c-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.129374913, 'message_signature': 'c4d9d9b8674b89758de605eb872b30cefaf2bdec48d213cbb7cd8d948438ee37'}]}, 'timestamp': '2025-12-05 10:14:13.024134', '_unique_id': '0f123423540c40e58e7777546edea185'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.025 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.026 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.026 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.026 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
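[Editor's note] Each sample in the payloads of these records carries a cumulative counter ('counter_type': 'cumulative'), so a usable metric is the delta between two polls divided by the elapsed time; the disk.device.write.latency payload for the record above follows below. A small sketch over two readings shaped like those samples; the later counter_volume is taken from this log, the earlier reading is made up for illustration:

    from datetime import datetime

    # Field names mirror the samples in this log; the earlier values are hypothetical.
    earlier = {'counter_volume': 3720000000, 'timestamp': '2025-12-05T10:13:13.026282'}
    later   = {'counter_volume': 3720587262, 'timestamp': '2025-12-05T10:14:13.026282'}

    def rate_per_second(a, b):
        """Delta of a cumulative counter divided by the polling interval."""
        dt = (datetime.fromisoformat(b['timestamp'])
              - datetime.fromisoformat(a['timestamp'])).total_seconds()
        return (b['counter_volume'] - a['counter_volume']) / dt

    print(rate_per_second(earlier, later))  # ns of write latency accrued per second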
Payload={'message_id': 'b6b49c37-3295-4594-841b-d167027de9f5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:14:13.026282', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '25ba13e4-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.158878377, 'message_signature': 'aeca78e30c3f3ca9e07b2f773fe1de993032821778e6d1bccb9b1e01d3fee60d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:14:13.026282', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '25ba2352-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.158878377, 'message_signature': '969b3e7581480a91765bcf5455c4d52dee6e9c170c56754eca1afc77ea98cad0'}]}, 'timestamp': '2025-12-05 10:14:13.027141', '_unique_id': 'bea7b2dde92043dcb1d1faf82a22310f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.028 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.029 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8292aa2c-54b1-4208-ae92-7c4073f2b88f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:14:13.029384', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '25ba8d10-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.129374913, 'message_signature': '237416100eb10b191c2370a15606a1f29aa2eca15f917af65362d7807da28a41'}]}, 'timestamp': '2025-12-05 10:14:13.029840', '_unique_id': '24c4b63c21674f45a9daab1c98976813'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     yield
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.030 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.032 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.032 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.032 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
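The traceback above records the complete failure path that repeats for every sample in this polling cycle: ceilometer hands the sample batch to oslo.messaging, oslo.messaging builds a fresh RabbitMQ connection through its pool, and kombu re-raises the socket-level ConnectionRefusedError as kombu.exceptions.OperationalError. A minimal sketch that reproduces the same wrapped error outside the agent, assuming kombu is installed and nothing is listening on the target port; the broker URL below is hypothetical, since the log does not record the agent's actual transport_url:

    # Sketch only: reproduce OperationalError wrapping [Errno 111].
    # Assumption: no AMQP broker is listening at this (hypothetical) URL.
    from kombu import Connection
    from kombu.exceptions import OperationalError

    conn = Connection('amqp://guest:guest@127.0.0.1:5672//', connect_timeout=2)
    try:
        # Same call path as the traceback: ensure_connection ->
        # retry_over_time -> transport.establish_connection -> socket connect.
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print('broker unreachable:', exc)  # "[Errno 111] Connection refused"
    finally:
        conn.release()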
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.034 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7f6fd140-52d0-46fc-929b-c08bdbac4f67', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:14:13.032514', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '25bb0ac4-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.129374913, 'message_signature': '4d2d9855f9bae1f5e6d27360f1982a575a979d06b4b1b677251b0da86d8f1a01'}]}, 'timestamp': '2025-12-05 10:14:13.033070', '_unique_id': '09a0092d0a6a4d9c8d61daafd187eb4d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.035 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.054 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.055 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a72a4e0c-f2b9-4174-a046-419ee782851a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:14:13.035626', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '25be5526-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.228174328, 'message_signature': 'ab75ee231b3edbe1fcd2131a0db7ce3e39f1da7acf61e836b37f0185c0fd64a5'}]}, 'timestamp': '2025-12-05 10:14:13.054659', '_unique_id': 'f3719ac52e4d46b6b26475e9855d2433'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.057 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.057 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.057 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.059 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '031f935c-4bfa-42f2-97d1-30643102e5e3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:14:13.057305', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '25bed3ca-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.137799331, 'message_signature': 'b4e0d40c0ae0f5e06977db2c60bbcb4286c47c8b7113c313064abf8f822deed0'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:14:13.057305', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '25bee702-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.137799331, 'message_signature': 'df5512188173c65d299677e125e1f1050f00ef57c71688f4d6fe1289e4650048'}]}, 'timestamp': '2025-12-05 10:14:13.058395', '_unique_id': 'b51dec65a2104088bbd0c2dca1bc824a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.060 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.060 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.061 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.062 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c1074b5c-bcf3-4412-a61d-fd4453a4f57e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:14:13.061109', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '25bf69ca-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.129374913, 'message_signature': 'e94e59a4f538a97729be13bbd996ef26093622eb0bcc39147ec51c9d0fbefa60'}]}, 'timestamp': '2025-12-05 10:14:13.061721', '_unique_id': '45b598f8af7547b69742400cfb7b24d4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.064 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.064 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Payload={'message_id': '4e4890bd-45b1-490e-909f-212c28f214fb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:14:13.064399', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '25bfe878-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.129374913, 'message_signature': '5c554377daef3d402e38d29e9148a139fd42e6ebd04b0f392f420d46bf2d6a8b'}]}, 'timestamp': '2025-12-05 10:14:13.064992', '_unique_id': '331f1e9d334247198cf51c44a6b4d3bc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.066 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.067 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.067 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.068 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '1c837f4a-f049-48c8-851a-59bc3b3fb3dc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:14:13.067506', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '25c05eb6-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.129374913, 'message_signature': 'eca34a1a7743500b12a777dad595ddba41ad450b544d61405fe3e9da37f82df1'}]}, 'timestamp': '2025-12-05 10:14:13.067980', '_unique_id': 'c61251e219794938aef2eaeb4e9dac75'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.070 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.070 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.070 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.072 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '36ed5c63-e1ad-4bf3-8519-34e322f79227', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:14:13.070171', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '25c0c7ac-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.158878377, 'message_signature': '485aaae7e6a17f9263960da2fc71063800dcc64e09b6df6c8f68d0cb09dab7f5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:14:13.070171', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '25c0d77e-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.158878377, 'message_signature': 'd16a927fdd8c5ba9795fae83d90e1ef201a89a5b2e881370cdb3d3aea0a9755e'}]}, 'timestamp': '2025-12-05 10:14:13.071031', '_unique_id': '6c5c8a98e9d8416fa72cb8d718a1d1d3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.073 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.073 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.074 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'd249cb85-051d-4e5d-a295-1ed4ea12307b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:14:13.073315', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '25c14114-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.129374913, 'message_signature': '86dc894b724c60f8ab8f58ccc64036463f23111b117f63cfe6f15ca68d3c3b14'}]}, 'timestamp': '2025-12-05 10:14:13.073767', '_unique_id': '9efacb96e471476c8a74cb15c4b85b5e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.075 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.075 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.076 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.076 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.078 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '1c642e9e-372b-4320-a23e-64a2d4816205', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:14:13.076017', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '25c1aa78-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.158878377, 'message_signature': 'a6ac9d46878536d32f87b0c4f638183c137b5a0a92857761dc8cc4adea5044f5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:14:13.076017', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '25c1bc0c-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.158878377, 'message_signature': 'd079431699282ebd655b6b6bdae217c3f19bd95caf1d55321ce98981af9bae19'}]}, 'timestamp': '2025-12-05 10:14:13.076972', '_unique_id': '30669615673843c4868a46206a3ba7c8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.079 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.079 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.079 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '10e0c803-9399-49e5-ba90-92e8f0cc5e02', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:14:13.079328', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '25c22ca0-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.137799331, 'message_signature': '9dafec30f7e51e3f27d317cc752b92af2ec5fb79d9040a5725dbabede8eb7c0e'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:14:13.079328', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '25c23df8-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.137799331, 'message_signature': 'b77272569937fc206d2b50399dda9c3e20448ee34847e7a14959bd6c06937353'}]}, 'timestamp': '2025-12-05 10:14:13.080214', '_unique_id': 'c6ececbb885048a9b633a6bd4c6790b9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.081 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.082 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.082 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 18230000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b0061747-b011-4d6f-9507-123d39e1d838', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 18230000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:14:13.082752', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '25c2b1b6-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12768.228174328, 'message_signature': 'f4b5c46e10bf2e2268b2d1e4d28125fa9076b9c237fdbe284af1578ce052c21f'}]}, 'timestamp': '2025-12-05 10:14:13.083196', '_unique_id': 'a28867e663a94184a7936de0bdc0481c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, 
Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:14:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:14:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:14:13.084 12 ERROR oslo_messaging.notify.messaging Dec 5 05:14:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 196 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 42 KiB/s wr, 85 op/s Dec 5 05:14:13 localhost nova_compute[280228]: 2025-12-05 10:14:13.802 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:13 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:14:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/.meta.tmp' Dec 5 05:14:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/.meta.tmp' to config b'/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/.meta' Dec 5 05:14:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "format": "json"}]: dispatch Dec 5 05:14:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 
handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:14:14 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:14:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e196 do_prune osdmap full prune enabled Dec 5 05:14:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e197 e197: 6 total, 6 up, 6 in Dec 5 05:14:14 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e197: 6 total, 6 up, 6 in Dec 5 05:14:14 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:14.443 261902 INFO neutron.agent.linux.ip_lib [None req-bd60f1f3-dcc9-4222-9443-193b12134c28 - - - - - -] Device tapd5ca1d9c-c4 cannot be used as it has no MAC address#033[00m Dec 5 05:14:14 localhost nova_compute[280228]: 2025-12-05 10:14:14.477 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:14 localhost kernel: device tapd5ca1d9c-c4 entered promiscuous mode Dec 5 05:14:14 localhost NetworkManager[5960]: [1764929654.4882] manager: (tapd5ca1d9c-c4): new Generic device (/org/freedesktop/NetworkManager/Devices/57) Dec 5 05:14:14 localhost nova_compute[280228]: 2025-12-05 10:14:14.489 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:14 localhost systemd-udevd[321781]: Network interface NamePolicy= disabled on kernel command line. Dec 5 05:14:14 localhost ovn_controller[153000]: 2025-12-05T10:14:14Z|00346|binding|INFO|Claiming lport d5ca1d9c-c404-4af9-aded-f2842af813f3 for this chassis. Dec 5 05:14:14 localhost ovn_controller[153000]: 2025-12-05T10:14:14Z|00347|binding|INFO|d5ca1d9c-c404-4af9-aded-f2842af813f3: Claiming unknown Dec 5 05:14:14 localhost journal[228791]: ethtool ioctl error on tapd5ca1d9c-c4: No such device Dec 5 05:14:14 localhost journal[228791]: ethtool ioctl error on tapd5ca1d9c-c4: No such device Dec 5 05:14:14 localhost ovn_controller[153000]: 2025-12-05T10:14:14Z|00348|binding|INFO|Setting lport d5ca1d9c-c404-4af9-aded-f2842af813f3 ovn-installed in OVS Dec 5 05:14:14 localhost nova_compute[280228]: 2025-12-05 10:14:14.529 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:14 localhost journal[228791]: ethtool ioctl error on tapd5ca1d9c-c4: No such device Dec 5 05:14:14 localhost journal[228791]: ethtool ioctl error on tapd5ca1d9c-c4: No such device Dec 5 05:14:14 localhost journal[228791]: ethtool ioctl error on tapd5ca1d9c-c4: No such device Dec 5 05:14:14 localhost journal[228791]: ethtool ioctl error on tapd5ca1d9c-c4: No such device Dec 5 05:14:14 localhost journal[228791]: ethtool ioctl error on tapd5ca1d9c-c4: No such device Dec 5 05:14:14 localhost journal[228791]: ethtool ioctl error on tapd5ca1d9c-c4: No such device Dec 5 05:14:14 localhost nova_compute[280228]: 2025-12-05 10:14:14.566 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:14 localhost nova_compute[280228]: 2025-12-05 10:14:14.591 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:14 localhost ovn_controller[153000]: 
2025-12-05T10:14:14Z|00349|binding|INFO|Setting lport d5ca1d9c-c404-4af9-aded-f2842af813f3 up in Southbound Dec 5 05:14:14 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:14.848 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73f19c8f07954c5087b5a203e29474d9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8806726-72a8-4478-871d-7861df933a3c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d5ca1d9c-c404-4af9-aded-f2842af813f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:14:14 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:14.850 158820 INFO neutron.agent.ovn.metadata.agent [-] Port d5ca1d9c-c404-4af9-aded-f2842af813f3 in datapath 01da89ff-2b0f-4cb3-9804-ea0f51c9e15d bound to our chassis#033[00m Dec 5 05:14:14 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:14.852 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 01da89ff-2b0f-4cb3-9804-ea0f51c9e15d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:14:14 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:14.853 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[7426aedb-94c8-4940-9691-759ce73e2960]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:14:14 localhost nova_compute[280228]: 2025-12-05 10:14:14.963 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:14:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:14:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:14:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:14:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
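[annotation] The "Matched UPDATE: PortBindingUpdatedEvent" records above come from ovsdbapp's row-event framework: the OVN metadata agent registers an event class against the Southbound Port_Binding table and reacts when a port it cares about gets bound. A minimal sketch of such an event follows; the registration handle (sb_idl) and the filtering condition are illustrative, not the agent's actual wiring.

    # Sketch of an ovsdbapp row event modeled on the
    # "Matched UPDATE: PortBindingUpdatedEvent" records above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            # Fire on UPDATEs to the Southbound Port_Binding table;
            # no server-side conditions, filtering happens in run().
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # React when the chassis column goes from empty to set
            # (a simplified version of the agent's checks; `old` only
            # carries the columns that changed, hence getattr).
            if getattr(old, 'chassis', None) == [] and row.chassis:
                if row.chassis[0].name == self.chassis_name:
                    print('port %s bound to our chassis'
                          % row.logical_port)

    # Hypothetical registration against a connected Southbound IDL:
    # sb_idl.notify_handler.watch_event(
    #     PortBindingUpdated('np0005546419.localdomain'))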
Dec 5 05:14:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:14:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v484: 177 pgs: 177 active+clean; 196 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 27 KiB/s wr, 64 op/s Dec 5 05:14:15 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:14:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:14:15 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:14:15 localhost podman[321852]: Dec 5 05:14:15 localhost podman[321852]: 2025-12-05 10:14:15.558967516 +0000 UTC m=+0.087068037 container create 4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 5 05:14:15 localhost systemd[1]: Started libpod-conmon-4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0.scope. Dec 5 05:14:15 localhost podman[321852]: 2025-12-05 10:14:15.507306843 +0000 UTC m=+0.035407394 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:14:15 localhost systemd[1]: Started libcrun container. 
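[annotation] Every management operation in the ceph-mon records reaches the monitor as a JSON mon_command and is mirrored to the audit channel as dispatch/finished pairs, e.g. {"prefix": "mon dump", "format": "json"} above. The same interface is reachable from Python through librados; a minimal sketch, assuming python3-rados, a reachable cluster, and an admin keyring (conffile and client name are illustrative):

    # Issue the same kind of mon_command seen in the audit records.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.admin')
    cluster.connect()
    try:
        cmd = json.dumps({'prefix': 'mon dump', 'format': 'json'})
        ret, outbuf, errs = cluster.mon_command(cmd, b'')
        if ret != 0:
            raise RuntimeError(errs)
        # 'mon dump' returns the monmap; list the monitor names.
        print([m['name'] for m in json.loads(outbuf)['mons']])
    finally:
        cluster.shutdown()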
Dec 5 05:14:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970deaf1823f1443f33ee979b3125e5fbf67485b242bdb9a3a9c3075cac5d698/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:14:15 localhost podman[321852]: 2025-12-05 10:14:15.632183697 +0000 UTC m=+0.160284228 container init 4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Dec 5 05:14:15 localhost podman[321852]: 2025-12-05 10:14:15.642057419 +0000 UTC m=+0.170157940 container start 4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:14:15 localhost dnsmasq[321870]: started, version 2.85 cachesize 150 Dec 5 05:14:15 localhost dnsmasq[321870]: DNS service limited to local subnets Dec 5 05:14:15 localhost dnsmasq[321870]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:14:15 localhost dnsmasq[321870]: warning: no upstream servers configured Dec 5 05:14:15 localhost dnsmasq-dhcp[321870]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 5 05:14:15 localhost dnsmasq[321870]: read /var/lib/neutron/dhcp/01da89ff-2b0f-4cb3-9804-ea0f51c9e15d/addn_hosts - 0 addresses Dec 5 05:14:15 localhost dnsmasq-dhcp[321870]: read /var/lib/neutron/dhcp/01da89ff-2b0f-4cb3-9804-ea0f51c9e15d/host Dec 5 05:14:15 localhost dnsmasq-dhcp[321870]: read /var/lib/neutron/dhcp/01da89ff-2b0f-4cb3-9804-ea0f51c9e15d/opts Dec 5 05:14:15 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:15.707 261902 INFO neutron.agent.dhcp.agent [None req-bd60f1f3-dcc9-4222-9443-193b12134c28 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:14:14Z, description=, device_id=eb9bac54-a40e-4c48-95d6-5c6630f4f8a6, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c629ec22-93dd-4752-89d5-a4128bf4ceef, ip_allocation=immediate, mac_address=fa:16:3e:37:45:5b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:10Z, description=, dns_domain=, id=01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--488795474, port_security_enabled=True, project_id=73f19c8f07954c5087b5a203e29474d9, provider:network_type=geneve, 
provider:physical_network=None, provider:segmentation_id=5710, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2842, status=ACTIVE, subnets=['2b2376ea-9c4f-4ffc-bffd-0a11f3f0865d'], tags=[], tenant_id=73f19c8f07954c5087b5a203e29474d9, updated_at=2025-12-05T10:14:13Z, vlan_transparent=None, network_id=01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, port_security_enabled=False, project_id=73f19c8f07954c5087b5a203e29474d9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2869, status=DOWN, tags=[], tenant_id=73f19c8f07954c5087b5a203e29474d9, updated_at=2025-12-05T10:14:14Z on network 01da89ff-2b0f-4cb3-9804-ea0f51c9e15d#033[00m Dec 5 05:14:15 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:15.813 261902 INFO neutron.agent.dhcp.agent [None req-cefd629a-3d9e-48c9-a631-4af6282314d5 - - - - - -] DHCP configuration for ports {'ab23204d-66ad-4edf-b046-ffdea84f0f07'} is completed#033[00m Dec 5 05:14:15 localhost dnsmasq[321870]: read /var/lib/neutron/dhcp/01da89ff-2b0f-4cb3-9804-ea0f51c9e15d/addn_hosts - 1 addresses Dec 5 05:14:15 localhost dnsmasq-dhcp[321870]: read /var/lib/neutron/dhcp/01da89ff-2b0f-4cb3-9804-ea0f51c9e15d/host Dec 5 05:14:15 localhost podman[321892]: 2025-12-05 10:14:15.919160513 +0000 UTC m=+0.062420132 container kill 4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:14:15 localhost dnsmasq-dhcp[321870]: read /var/lib/neutron/dhcp/01da89ff-2b0f-4cb3-9804-ea0f51c9e15d/opts Dec 5 05:14:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:14:15 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:15.927 261902 INFO neutron.agent.linux.ip_lib [None req-318aa5f9-639e-489d-a797-a871fd5f1d68 - - - - - -] Device tap68b75ae8-1e cannot be used as it has no MAC address#033[00m Dec 5 05:14:15 localhost nova_compute[280228]: 2025-12-05 10:14:15.977 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:15 localhost kernel: device tap68b75ae8-1e entered promiscuous mode Dec 5 05:14:15 localhost NetworkManager[5960]: [1764929655.9857] manager: (tap68b75ae8-1e): new Generic device (/org/freedesktop/NetworkManager/Devices/58) Dec 5 05:14:15 localhost nova_compute[280228]: 2025-12-05 10:14:15.986 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:15 localhost ovn_controller[153000]: 2025-12-05T10:14:15Z|00350|binding|INFO|Claiming lport 68b75ae8-1e3b-4bbf-8b2d-fbf9ec84c004 for this chassis. 
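[annotation] The dnsmasq records above show the per-network contract the DHCP agent maintains under /var/lib/neutron/dhcp/<network_id>/: dnsmasq runs inside a dedicated podman container reading addn_hosts, host and opts, and every allocation change is a rewrite of those files followed by a signal (the "container kill" events), after which dnsmasq logs a fresh "read ..." line per file. A minimal sketch of that cycle; the hosts entry uses dnsmasq's dhcp-hostsfile syntax, while the paths, PID handling, and the sample address are illustrative rather than the agent's actual driver code:

    # Rewrite-then-SIGHUP cycle behind the dnsmasq
    # "read .../addn_hosts" records above.
    import os
    import signal

    NET = '01da89ff-2b0f-4cb3-9804-ea0f51c9e15d'
    CONF_DIR = '/var/lib/neutron/dhcp/' + NET

    def add_host(mac, ip, name):
        # dhcp-hostsfile entry; IPv6 addresses go in brackets.
        with open(os.path.join(CONF_DIR, 'host'), 'a') as f:
            f.write('%s,[%s],%s\n' % (mac, ip, name))

    def reload_dnsmasq():
        # SIGHUP makes dnsmasq re-read its hosts/opts files without
        # dropping existing leases (in this deployment the signal is
        # delivered via podman kill on the per-network container).
        with open(os.path.join(CONF_DIR, 'pid')) as f:
            os.kill(int(f.read().strip()), signal.SIGHUP)

    add_host('fa:16:3e:37:45:5b', '2001:db8::5', 'test-port')
    reload_dnsmasq()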
Dec 5 05:14:15 localhost ovn_controller[153000]: 2025-12-05T10:14:15Z|00351|binding|INFO|68b75ae8-1e3b-4bbf-8b2d-fbf9ec84c004: Claiming unknown Dec 5 05:14:16 localhost ovn_controller[153000]: 2025-12-05T10:14:16Z|00352|binding|INFO|Setting lport 68b75ae8-1e3b-4bbf-8b2d-fbf9ec84c004 ovn-installed in OVS Dec 5 05:14:16 localhost nova_compute[280228]: 2025-12-05 10:14:16.037 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:16 localhost nova_compute[280228]: 2025-12-05 10:14:16.087 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:16 localhost nova_compute[280228]: 2025-12-05 10:14:16.126 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:16 localhost ovn_controller[153000]: 2025-12-05T10:14:16Z|00353|binding|INFO|Setting lport 68b75ae8-1e3b-4bbf-8b2d-fbf9ec84c004 up in Southbound Dec 5 05:14:16 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:16.204 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-2dcfef41-5a00-4307-993a-10c40cfe56e1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dcfef41-5a00-4307-993a-10c40cfe56e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73f19c8f07954c5087b5a203e29474d9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6519931-d305-4f32-8bad-b3dbb724b320, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=68b75ae8-1e3b-4bbf-8b2d-fbf9ec84c004) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:14:16 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:16.206 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 68b75ae8-1e3b-4bbf-8b2d-fbf9ec84c004 in datapath 2dcfef41-5a00-4307-993a-10c40cfe56e1 bound to our chassis#033[00m Dec 5 05:14:16 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:16.208 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dcfef41-5a00-4307-993a-10c40cfe56e1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:14:16 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:16.209 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[e7486dd5-cf75-4a27-ae40-c39e06c36156]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:14:16 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:16.254 261902 INFO neutron.agent.dhcp.agent [None 
req-186d660b-bf3d-42ec-bc42-81141c1be0e2 - - - - - -] DHCP configuration for ports {'c629ec22-93dd-4752-89d5-a4128bf4ceef'} is completed#033[00m Dec 5 05:14:16 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:14:16 localhost nova_compute[280228]: 2025-12-05 10:14:16.712 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:16 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:16.721 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:14:14Z, description=, device_id=eb9bac54-a40e-4c48-95d6-5c6630f4f8a6, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c629ec22-93dd-4752-89d5-a4128bf4ceef, ip_allocation=immediate, mac_address=fa:16:3e:37:45:5b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:10Z, description=, dns_domain=, id=01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--488795474, port_security_enabled=True, project_id=73f19c8f07954c5087b5a203e29474d9, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5710, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2842, status=ACTIVE, subnets=['2b2376ea-9c4f-4ffc-bffd-0a11f3f0865d'], tags=[], tenant_id=73f19c8f07954c5087b5a203e29474d9, updated_at=2025-12-05T10:14:13Z, vlan_transparent=None, network_id=01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, port_security_enabled=False, project_id=73f19c8f07954c5087b5a203e29474d9, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2869, status=DOWN, tags=[], tenant_id=73f19c8f07954c5087b5a203e29474d9, updated_at=2025-12-05T10:14:14Z on network 01da89ff-2b0f-4cb3-9804-ea0f51c9e15d#033[00m Dec 5 05:14:16 localhost dnsmasq[321870]: read /var/lib/neutron/dhcp/01da89ff-2b0f-4cb3-9804-ea0f51c9e15d/addn_hosts - 1 addresses Dec 5 05:14:16 localhost podman[321974]: 2025-12-05 10:14:16.988484104 +0000 UTC m=+0.118869131 container kill 4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 5 05:14:16 localhost dnsmasq-dhcp[321870]: read /var/lib/neutron/dhcp/01da89ff-2b0f-4cb3-9804-ea0f51c9e15d/host Dec 5 05:14:16 localhost dnsmasq-dhcp[321870]: read /var/lib/neutron/dhcp/01da89ff-2b0f-4cb3-9804-ea0f51c9e15d/opts Dec 5 05:14:17 localhost podman[322002]: Dec 5 05:14:17 localhost podman[322002]: 2025-12-05 10:14:17.055764164 +0000 UTC m=+0.084537909 container create 9fd1a452ccbc085cc2a019c559974c62de73ea36ac3cc05b7b5ec8e512957355 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dcfef41-5a00-4307-993a-10c40cfe56e1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 5 05:14:17 localhost systemd[1]: Started libpod-conmon-9fd1a452ccbc085cc2a019c559974c62de73ea36ac3cc05b7b5ec8e512957355.scope. Dec 5 05:14:17 localhost podman[322002]: 2025-12-05 10:14:17.012031135 +0000 UTC m=+0.040804890 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:14:17 localhost systemd[1]: Started libcrun container. Dec 5 05:14:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/973620af3a89ed8d9bc82848b1091256efc0926c014a9efcffbf2bb177b3d3bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:14:17 localhost podman[322002]: 2025-12-05 10:14:17.141069855 +0000 UTC m=+0.169843610 container init 9fd1a452ccbc085cc2a019c559974c62de73ea36ac3cc05b7b5ec8e512957355 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dcfef41-5a00-4307-993a-10c40cfe56e1, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 05:14:17 localhost podman[322002]: 2025-12-05 10:14:17.148217605 +0000 UTC m=+0.176991360 container start 9fd1a452ccbc085cc2a019c559974c62de73ea36ac3cc05b7b5ec8e512957355 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dcfef41-5a00-4307-993a-10c40cfe56e1, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:14:17 localhost dnsmasq[322030]: started, version 2.85 cachesize 150 Dec 5 05:14:17 localhost dnsmasq[322030]: DNS service limited to local subnets Dec 5 05:14:17 localhost dnsmasq[322030]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:14:17 localhost dnsmasq[322030]: warning: no upstream servers configured Dec 5 05:14:17 localhost dnsmasq-dhcp[322030]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 5 05:14:17 localhost dnsmasq[322030]: read /var/lib/neutron/dhcp/2dcfef41-5a00-4307-993a-10c40cfe56e1/addn_hosts - 0 addresses Dec 5 05:14:17 localhost dnsmasq-dhcp[322030]: read /var/lib/neutron/dhcp/2dcfef41-5a00-4307-993a-10c40cfe56e1/host Dec 5 05:14:17 localhost dnsmasq-dhcp[322030]: read /var/lib/neutron/dhcp/2dcfef41-5a00-4307-993a-10c40cfe56e1/opts Dec 5 05:14:17 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:17.207 261902 INFO neutron.agent.dhcp.agent [None req-1e6c9ae3-f0d5-4c77-93a3-f787afa9aef1 - - - - - -] DHCP configuration for ports 
{'c629ec22-93dd-4752-89d5-a4128bf4ceef'} is completed#033[00m Dec 5 05:14:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 177 active+clean; 196 MiB data, 981 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 36 KiB/s wr, 103 op/s Dec 5 05:14:17 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:17.381 261902 INFO neutron.agent.dhcp.agent [None req-9aceeede-a4c4-41ca-b3f2-021cb6c23a92 - - - - - -] DHCP configuration for ports {'8466aa01-f149-405a-ba17-dd6cba1934e2'} is completed#033[00m Dec 5 05:14:17 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve49", "tenant_id": "fb88a523f48e4990b7617051dc3491c9", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:14:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, tenant_id:fb88a523f48e4990b7617051dc3491c9, vol_name:cephfs) < "" Dec 5 05:14:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Dec 5 05:14:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Dec 5 05:14:17 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID eve49 with tenant fb88a523f48e4990b7617051dc3491c9 Dec 5 05:14:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:14:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:14:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:14:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, tenant_id:fb88a523f48e4990b7617051dc3491c9, vol_name:cephfs) < "" Dec 5 05:14:18 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' 
entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Dec 5 05:14:18 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:14:18 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:14:18 localhost nova_compute[280228]: 2025-12-05 10:14:18.505 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:14:18 localhost nova_compute[280228]: 2025-12-05 10:14:18.844 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:18 localhost dnsmasq[321870]: read /var/lib/neutron/dhcp/01da89ff-2b0f-4cb3-9804-ea0f51c9e15d/addn_hosts - 0 addresses Dec 5 05:14:18 localhost dnsmasq-dhcp[321870]: read /var/lib/neutron/dhcp/01da89ff-2b0f-4cb3-9804-ea0f51c9e15d/host Dec 5 05:14:18 localhost podman[322048]: 2025-12-05 10:14:18.991346048 +0000 UTC m=+0.041018508 container kill 4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 5 05:14:18 localhost dnsmasq-dhcp[321870]: read /var/lib/neutron/dhcp/01da89ff-2b0f-4cb3-9804-ea0f51c9e15d/opts Dec 5 05:14:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:14:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:14:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:14:19 localhost systemd[1]: tmp-crun.itlNRi.mount: Deactivated successfully. 
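Annotation: the dnsmasq "read .../addn_hosts" lines paired with the podman "container kill" events above are the DHCP agent's reload path. After rewriting the host/opts/addn_hosts files under /var/lib/neutron/dhcp/<network>/, the agent signals the dnsmasq container rather than restarting it (note the dnsmasq PID 321870 stays the same), and dnsmasq rereads those files on SIGHUP. A minimal sketch of that nudge, with the container name taken from the entries above:

    import subprocess

    # Deliver SIGHUP to dnsmasq inside the qdhcp container so it rereads
    # its host/opts/addn_hosts files without a restart; "podman kill"
    # here means "send a signal", matching the container kill events logged.
    subprocess.run(
        ["podman", "kill", "--signal", "HUP",
         "neutron-dnsmasq-qdhcp-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d"],
        check=True,
    )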
Dec 5 05:14:19 localhost podman[322062]: 2025-12-05 10:14:19.125824315 +0000 UTC m=+0.097786675 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:14:19 localhost podman[322065]: 2025-12-05 10:14:19.138174813 +0000 UTC m=+0.105268204 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:14:19 localhost podman[322065]: 2025-12-05 10:14:19.152612245 +0000 UTC m=+0.119705576 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 5 05:14:19 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:14:19 localhost podman[322062]: 2025-12-05 10:14:19.165024425 +0000 UTC m=+0.136986735 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:14:19 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
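Annotation: the health_status=healthy and exec_died pairs above come from the transient systemd units running "/usr/bin/podman healthcheck run <id>". Podman executes the container's configured test (the /openstack/healthcheck scripts listed in config_data) and the exit code is the verdict, which explains the immediate "Deactivated successfully" once the check returns. An equivalent one-off probe, using a container ID copied from the log:

    import subprocess

    # Run the container's own healthcheck test; exit code 0 means healthy.
    # ID taken from the ovn_metadata_agent entries above.
    cid = "1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465"
    ok = subprocess.run(["podman", "healthcheck", "run", cid]).returncode == 0
    print("healthy" if ok else "unhealthy")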
Dec 5 05:14:19 localhost podman[322064]: 2025-12-05 10:14:19.222448793 +0000 UTC m=+0.194751063 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 05:14:19 localhost podman[322064]: 2025-12-05 10:14:19.228703645 +0000 UTC m=+0.201005885 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Dec 5 05:14:19 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:14:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 196 MiB data, 981 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 9.0 KiB/s wr, 36 op/s Dec 5 05:14:19 localhost podman[239519]: time="2025-12-05T10:14:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:14:19 localhost podman[239519]: @ - - [05/Dec/2025:10:14:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159745 "" "Go-http-client/1.1" Dec 5 05:14:19 localhost podman[239519]: @ - - [05/Dec/2025:10:14:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20219 "" "Go-http-client/1.1" Dec 5 05:14:19 localhost nova_compute[280228]: 2025-12-05 10:14:19.968 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:19 localhost systemd[1]: tmp-crun.jzIjhK.mount: Deactivated successfully. Dec 5 05:14:20 localhost nova_compute[280228]: 2025-12-05 10:14:20.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:14:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:14:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e197 do_prune osdmap full prune enabled Dec 5 05:14:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e198 e198: 6 total, 6 up, 6 in Dec 5 05:14:20 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e198: 6 total, 6 up, 6 in Dec 5 05:14:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v488: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 34 KiB/s wr, 58 op/s Dec 5 05:14:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v489: 177 pgs: 177 active+clean; 196 MiB data, 966 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 30 KiB/s wr, 52 op/s Dec 5 05:14:23 localhost nova_compute[280228]: 2025-12-05 10:14:23.684 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:14:23 localhost nova_compute[280228]: 2025-12-05 10:14:23.685 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:14:23 localhost 
nova_compute[280228]: 2025-12-05 10:14:23.685 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:14:23 localhost nova_compute[280228]: 2025-12-05 10:14:23.686 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:14:23 localhost nova_compute[280228]: 2025-12-05 10:14:23.686 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:14:23 localhost nova_compute[280228]: 2025-12-05 10:14:23.871 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:14:24 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/163726682' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:14:24 localhost nova_compute[280228]: 2025-12-05 10:14:24.145 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:14:25 localhost nova_compute[280228]: 2025-12-05 10:14:25.017 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 196 MiB data, 966 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 27 KiB/s wr, 47 op/s Dec 5 05:14:25 localhost nova_compute[280228]: 2025-12-05 10:14:25.883 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:14:25 localhost nova_compute[280228]: 2025-12-05 10:14:25.884 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:14:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.064 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.067 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11149MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.067 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.068 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.133 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:26 localhost kernel: device tapd5ca1d9c-c4 left promiscuous mode Dec 5 05:14:26 localhost ovn_controller[153000]: 2025-12-05T10:14:26Z|00354|binding|INFO|Releasing lport 
d5ca1d9c-c404-4af9-aded-f2842af813f3 from this chassis (sb_readonly=0) Dec 5 05:14:26 localhost ovn_controller[153000]: 2025-12-05T10:14:26Z|00355|binding|INFO|Setting lport d5ca1d9c-c404-4af9-aded-f2842af813f3 down in Southbound Dec 5 05:14:26 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:26.143 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73f19c8f07954c5087b5a203e29474d9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8806726-72a8-4478-871d-7861df933a3c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d5ca1d9c-c404-4af9-aded-f2842af813f3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:14:26 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:26.145 158820 INFO neutron.agent.ovn.metadata.agent [-] Port d5ca1d9c-c404-4af9-aded-f2842af813f3 in datapath 01da89ff-2b0f-4cb3-9804-ea0f51c9e15d unbound from our chassis#033[00m Dec 5 05:14:26 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:26.147 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 01da89ff-2b0f-4cb3-9804-ea0f51c9e15d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:14:26 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:26.148 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a0465b-1f90-478c-b058-0f9707cddd13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.162 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve48", "tenant_id": "fb88a523f48e4990b7617051dc3491c9", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:14:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, tenant_id:fb88a523f48e4990b7617051dc3491c9, vol_name:cephfs) < "" Dec 5 05:14:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth 
get", "entity": "client.eve48", "format": "json"} v 0) Dec 5 05:14:26 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Dec 5 05:14:26 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID eve48 with tenant fb88a523f48e4990b7617051dc3491c9 Dec 5 05:14:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:14:26 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.531 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.532 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.532 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:14:26 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:14:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, tenant_id:fb88a523f48e4990b7617051dc3491c9, vol_name:cephfs) < "" Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.735 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] 
Refreshing inventories for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.935 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating ProviderTree inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.936 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 5 05:14:26 localhost nova_compute[280228]: 2025-12-05 10:14:26.963 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing aggregate associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 5 05:14:27 localhost nova_compute[280228]: 2025-12-05 10:14:27.012 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing trait associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, traits: 
COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 5 05:14:27 localhost nova_compute[280228]: 2025-12-05 10:14:27.057 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:14:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:14:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:14:27 localhost openstack_network_exporter[241668]: ERROR 10:14:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:14:27 localhost openstack_network_exporter[241668]: ERROR 10:14:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:14:27 localhost openstack_network_exporter[241668]: ERROR 10:14:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:14:27 localhost openstack_network_exporter[241668]: ERROR 10:14:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:14:27 localhost openstack_network_exporter[241668]: Dec 5 05:14:27 localhost openstack_network_exporter[241668]: ERROR 10:14:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:14:27 localhost openstack_network_exporter[241668]: Dec 5 05:14:27 localhost systemd[1]: tmp-crun.CsJDt4.mount: Deactivated successfully. Dec 5 05:14:27 localhost systemd[1]: tmp-crun.9DZNpm.mount: Deactivated successfully. 
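Annotation: the resource-tracker audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size the RBD-backed storage before reporting free_disk to placement (the ceph-mon dispatch entries for the df command are the server side of the same call). A minimal sketch of that probe, assuming the usual stats.total_bytes and stats.total_avail_bytes keys in the JSON report:

    import json
    import subprocess

    # Same probe the resource tracker logs above: run "ceph df" as the
    # openstack client and read cluster capacity from the JSON report.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.2f} GiB free "
          f"of {stats['total_bytes'] / gib:.2f} GiB")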
Dec 5 05:14:27 localhost podman[322152]: 2025-12-05 10:14:27.239011474 +0000 UTC m=+0.117496629 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:14:27 localhost podman[322151]: 2025-12-05 10:14:27.245352527 +0000 UTC m=+0.131437965 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 05:14:27 localhost podman[322152]: 2025-12-05 10:14:27.255794737 +0000 UTC m=+0.134279912 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 
'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:14:27 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:14:27 localhost podman[322151]: 2025-12-05 10:14:27.282779764 +0000 UTC m=+0.168865162 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:14:27 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:14:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 196 MiB data, 966 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 23 KiB/s wr, 19 op/s Dec 5 05:14:27 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:14:27 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/166280889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:14:27 localhost nova_compute[280228]: 2025-12-05 10:14:27.480 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:14:27 localhost nova_compute[280228]: 2025-12-05 10:14:27.486 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:14:27 localhost nova_compute[280228]: 2025-12-05 10:14:27.509 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:14:27 localhost nova_compute[280228]: 2025-12-05 10:14:27.510 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:14:27 localhost nova_compute[280228]: 2025-12-05 10:14:27.510 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:14:27 localhost nova_compute[280228]: 2025-12-05 10:14:27.511 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:14:27 localhost nova_compute[280228]: 2025-12-05 10:14:27.511 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 5 05:14:27 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Dec 5 05:14:27 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:14:27 
localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:14:27 localhost nova_compute[280228]: 2025-12-05 10:14:27.532 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 5 05:14:27 localhost nova_compute[280228]: 2025-12-05 10:14:27.532 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:14:27 localhost nova_compute[280228]: 2025-12-05 10:14:27.532 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 5 05:14:28 localhost nova_compute[280228]: 2025-12-05 10:14:28.875 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v492: 177 pgs: 177 active+clean; 196 MiB data, 966 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 23 KiB/s wr, 19 op/s Dec 5 05:14:29 localhost nova_compute[280228]: 2025-12-05 10:14:29.544 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:29.542 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:14:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:29.543 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:14:30 localhost nova_compute[280228]: 2025-12-05 10:14:30.019 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:30 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve48", "format": "json"}]: dispatch Dec 5 05:14:30 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) Dec 5 05:14:30 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Dec 5 05:14:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) Dec 5 05:14:30 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Dec 5 05:14:30 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Dec 5 05:14:30 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:30 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve48", "format": "json"}]: dispatch Dec 5 05:14:30 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:30 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6 Dec 5 05:14:30 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:14:30 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:30 localhost nova_compute[280228]: 2025-12-05 10:14:30.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:14:30 localhost nova_compute[280228]: 2025-12-05 10:14:30.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:14:30 localhost nova_compute[280228]: 2025-12-05 10:14:30.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:14:30 localhost nova_compute[280228]: 2025-12-05 10:14:30.508 280232 DEBUG 
nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:14:30 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Dec 5 05:14:30 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Dec 5 05:14:30 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Dec 5 05:14:30 localhost nova_compute[280228]: 2025-12-05 10:14:30.642 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:14:30 localhost nova_compute[280228]: 2025-12-05 10:14:30.643 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:14:30 localhost nova_compute[280228]: 2025-12-05 10:14:30.643 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:14:30 localhost nova_compute[280228]: 2025-12-05 10:14:30.643 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:14:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:14:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 252 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 5.4 MiB/s wr, 64 op/s Dec 5 05:14:31 localhost nova_compute[280228]: 2025-12-05 10:14:31.668 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", 
"datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:14:31 localhost nova_compute[280228]: 2025-12-05 10:14:31.686 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:14:31 localhost nova_compute[280228]: 2025-12-05 10:14:31.686 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:14:31 localhost nova_compute[280228]: 2025-12-05 10:14:31.687 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:14:31 localhost nova_compute[280228]: 2025-12-05 10:14:31.687 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:14:31 localhost nova_compute[280228]: 2025-12-05 10:14:31.687 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:14:31 localhost nova_compute[280228]: 2025-12-05 10:14:31.687 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:14:31 localhost nova_compute[280228]: 2025-12-05 10:14:31.688 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:14:31 localhost nova_compute[280228]: 2025-12-05 10:14:31.688 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:14:31 localhost nova_compute[280228]: 2025-12-05 10:14:31.688 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:14:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e198 do_prune osdmap full prune enabled Dec 5 05:14:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e199 e199: 6 total, 6 up, 6 in Dec 5 05:14:31 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e199: 6 total, 6 up, 6 in Dec 5 05:14:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:32.547 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:14:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e199 do_prune osdmap full prune enabled Dec 5 05:14:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e200 e200: 6 total, 6 up, 6 in Dec 5 05:14:33 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e200: 6 total, 6 up, 6 in Dec 5 05:14:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 252 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 7.0 MiB/s wr, 63 op/s Dec 5 05:14:33 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve47", "tenant_id": "fb88a523f48e4990b7617051dc3491c9", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:14:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, tenant_id:fb88a523f48e4990b7617051dc3491c9, vol_name:cephfs) < "" Dec 5 05:14:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Dec 5 05:14:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Dec 5 05:14:33 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID eve47 with tenant fb88a523f48e4990b7617051dc3491c9 Dec 5 05:14:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:14:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": 
"auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:14:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:14:33 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Dec 5 05:14:33 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:14:33 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd6e9824-a806-4dd3-a108-b909edbc40c4", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:14:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, tenant_id:fb88a523f48e4990b7617051dc3491c9, vol_name:cephfs) < "" Dec 5 05:14:33 localhost nova_compute[280228]: 2025-12-05 10:14:33.880 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:34 localhost dnsmasq[322030]: exiting on receipt of SIGTERM Dec 5 05:14:34 localhost podman[322237]: 2025-12-05 10:14:34.144317048 +0000 UTC m=+0.051728515 container kill 9fd1a452ccbc085cc2a019c559974c62de73ea36ac3cc05b7b5ec8e512957355 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dcfef41-5a00-4307-993a-10c40cfe56e1, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 5 05:14:34 localhost systemd[1]: libpod-9fd1a452ccbc085cc2a019c559974c62de73ea36ac3cc05b7b5ec8e512957355.scope: Deactivated successfully. 
Dec 5 05:14:34 localhost podman[322251]: 2025-12-05 10:14:34.207517864 +0000 UTC m=+0.048972691 container died 9fd1a452ccbc085cc2a019c559974c62de73ea36ac3cc05b7b5ec8e512957355 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dcfef41-5a00-4307-993a-10c40cfe56e1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:14:34 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fd1a452ccbc085cc2a019c559974c62de73ea36ac3cc05b7b5ec8e512957355-userdata-shm.mount: Deactivated successfully. Dec 5 05:14:34 localhost podman[322251]: 2025-12-05 10:14:34.297116306 +0000 UTC m=+0.138571143 container cleanup 9fd1a452ccbc085cc2a019c559974c62de73ea36ac3cc05b7b5ec8e512957355 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dcfef41-5a00-4307-993a-10c40cfe56e1, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:14:34 localhost systemd[1]: libpod-conmon-9fd1a452ccbc085cc2a019c559974c62de73ea36ac3cc05b7b5ec8e512957355.scope: Deactivated successfully. Dec 5 05:14:34 localhost podman[322252]: 2025-12-05 10:14:34.319817342 +0000 UTC m=+0.155051889 container remove 9fd1a452ccbc085cc2a019c559974c62de73ea36ac3cc05b7b5ec8e512957355 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2dcfef41-5a00-4307-993a-10c40cfe56e1, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:14:34 localhost ovn_controller[153000]: 2025-12-05T10:14:34Z|00356|binding|INFO|Releasing lport 68b75ae8-1e3b-4bbf-8b2d-fbf9ec84c004 from this chassis (sb_readonly=0) Dec 5 05:14:34 localhost nova_compute[280228]: 2025-12-05 10:14:34.330 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:34 localhost ovn_controller[153000]: 2025-12-05T10:14:34Z|00357|binding|INFO|Setting lport 68b75ae8-1e3b-4bbf-8b2d-fbf9ec84c004 down in Southbound Dec 5 05:14:34 localhost kernel: device tap68b75ae8-1e left promiscuous mode Dec 5 05:14:34 localhost nova_compute[280228]: 2025-12-05 10:14:34.347 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:34.355 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], 
options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-2dcfef41-5a00-4307-993a-10c40cfe56e1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dcfef41-5a00-4307-993a-10c40cfe56e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '73f19c8f07954c5087b5a203e29474d9', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6519931-d305-4f32-8bad-b3dbb724b320, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=68b75ae8-1e3b-4bbf-8b2d-fbf9ec84c004) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:14:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:34.356 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 68b75ae8-1e3b-4bbf-8b2d-fbf9ec84c004 in datapath 2dcfef41-5a00-4307-993a-10c40cfe56e1 unbound from our chassis#033[00m Dec 5 05:14:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:34.357 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2dcfef41-5a00-4307-993a-10c40cfe56e1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:14:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:34.358 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[e649ce1b-0fb0-4432-898c-3b5054340abc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:14:35 localhost nova_compute[280228]: 2025-12-05 10:14:35.021 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:35 localhost systemd[1]: var-lib-containers-storage-overlay-973620af3a89ed8d9bc82848b1091256efc0926c014a9efcffbf2bb177b3d3bb-merged.mount: Deactivated successfully. Dec 5 05:14:35 localhost systemd[1]: run-netns-qdhcp\x2d2dcfef41\x2d5a00\x2d4307\x2d993a\x2d10c40cfe56e1.mount: Deactivated successfully. Dec 5 05:14:35 localhost dnsmasq[321870]: exiting on receipt of SIGTERM Dec 5 05:14:35 localhost podman[322296]: 2025-12-05 10:14:35.153132056 +0000 UTC m=+0.711730923 container kill 4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:14:35 localhost systemd[1]: libpod-4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0.scope: Deactivated successfully. 
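
The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above is ovsdbapp's event dispatcher firing a registered row event, which is what leads to the "Port ... unbound from our chassis" message and the namespace teardown. A stripped-down sketch (assumed API usage; the real class lives in neutron.agent.ovn.metadata.agent, and the agent also assigns the priority=20 shown in the log) of how such an event is declared:

```python
# Minimal sketch of an ovsdbapp RowEvent like the one matched above.
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # events=('update',), table='Port_Binding', conditions=None --
        # exactly the tuple printed in the "Matched UPDATE:" DEBUG line.
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

    def match_fn(self, event, row, old):
        # Only react when the chassis column changed (bind/unbind); the
        # logged old=Port_Binding(up=[True], chassis=[]) shows such a diff.
        return hasattr(old, "chassis")

    def run(self, event, row, old):
        # The agent then logs "bound to/unbound from our chassis" and
        # (re)provisions the metadata namespace for row.datapath.
        print(f"Port {row.logical_port} changed chassis; reprovisioning")
```
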
Dec 5 05:14:35 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:35.157 261902 INFO neutron.agent.dhcp.agent [None req-b252ed72-988f-467d-9d10-34c2734497b5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:14:35 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:35.157 261902 INFO neutron.agent.dhcp.agent [None req-b252ed72-988f-467d-9d10-34c2734497b5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:14:35 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:35.158 261902 INFO neutron.agent.dhcp.agent [None req-b252ed72-988f-467d-9d10-34c2734497b5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:14:35 localhost podman[322310]: 2025-12-05 10:14:35.22381553 +0000 UTC m=+0.051704864 container died 4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 05:14:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0-userdata-shm.mount: Deactivated successfully. Dec 5 05:14:35 localhost podman[322310]: 2025-12-05 10:14:35.256214912 +0000 UTC m=+0.084104216 container cleanup 4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 5 05:14:35 localhost systemd[1]: libpod-conmon-4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0.scope: Deactivated successfully. 
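
The kill -> died -> cleanup -> remove progression podman logs for each dnsmasq container above is its standard teardown lifecycle, and the same sequence can be tailed live instead of scraped from syslog. A throwaway watcher using `podman events`; the JSON field names are assumptions based on common podman releases:

```python
# Tail container teardown events (died/remove) as podman emits them.
import json
import subprocess

proc = subprocess.Popen(
    ["podman", "events", "--format", "json",
     "--filter", "event=died", "--filter", "event=remove"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    ev = json.loads(line)
    # e.g. {"Status": "died", "Name": "neutron-dnsmasq-qdhcp-...", ...}
    # (field names vary slightly across podman versions)
    print(ev.get("Status"), ev.get("Name"))
```
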
Dec 5 05:14:35 localhost podman[322312]: 2025-12-05 10:14:35.282543108 +0000 UTC m=+0.101761986 container remove 4ec9d19ce5672b515ef4f715d2b6d75f5c04cb000b227273e35f202e9c7cd8a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-01da89ff-2b0f-4cb3-9804-ea0f51c9e15d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 05:14:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v497: 177 pgs: 177 active+clean; 252 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 7.0 MiB/s wr, 62 op/s Dec 5 05:14:35 localhost ovn_controller[153000]: 2025-12-05T10:14:35Z|00358|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:14:35 localhost nova_compute[280228]: 2025-12-05 10:14:35.482 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:35 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:35.553 261902 INFO neutron.agent.dhcp.agent [None req-3b832cb2-82c4-4a6a-b618-ef27a7afd3fa - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:14:35 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:35.768 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:14:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:14:36 localhost systemd[1]: var-lib-containers-storage-overlay-970deaf1823f1443f33ee979b3125e5fbf67485b242bdb9a3a9c3075cac5d698-merged.mount: Deactivated successfully. Dec 5 05:14:36 localhost systemd[1]: run-netns-qdhcp\x2d01da89ff\x2d2b0f\x2d4cb3\x2d9804\x2dea0f51c9e15d.mount: Deactivated successfully. 
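
Both teardowns above follow the same per-network naming scheme: network UUID -> qdhcp-<uuid> namespace (whose systemd mount unit escapes hyphens as \x2d, as in the run-netns-... units) -> neutron-dnsmasq-qdhcp-<uuid> container. A toy helper, purely illustrative and with the systemd escaping approximated:

```python
# Map a Neutron network UUID to the artifacts seen in the teardown above.
NET_ID = "01da89ff-2b0f-4cb3-9804-ea0f51c9e15d"  # from the logs above

def dhcp_artifacts(network_id: str) -> dict:
    ns = f"qdhcp-{network_id}"
    return {
        "namespace": ns,
        # systemd renders '-' inside the path component as \x2d in the
        # run-netns mount unit name (approximation of systemd-escape).
        "netns_mount_unit": "run-netns-" + ns.replace("-", r"\x2d") + ".mount",
        "container": f"neutron-dnsmasq-{ns}",
    }

print(dhcp_artifacts(NET_ID))
```
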
Dec 5 05:14:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e200 do_prune osdmap full prune enabled Dec 5 05:14:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e201 e201: 6 total, 6 up, 6 in Dec 5 05:14:36 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e201: 6 total, 6 up, 6 in Dec 5 05:14:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 349 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 16 MiB/s wr, 105 op/s Dec 5 05:14:37 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve47", "format": "json"}]: dispatch Dec 5 05:14:37 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:37 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Dec 5 05:14:37 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Dec 5 05:14:37 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0) Dec 5 05:14:37 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Dec 5 05:14:37 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Dec 5 05:14:37 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:37 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve47", "format": "json"}]: dispatch Dec 5 05:14:37 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:37 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6 Dec 5 05:14:37 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:14:37 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:38 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' 
cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Dec 5 05:14:38 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Dec 5 05:14:38 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0. Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.320505) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58 Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929678320579, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1261, "num_deletes": 257, "total_data_size": 1113734, "memory_usage": 1145200, "flush_reason": "Manual Compaction"} Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929678328735, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 1090279, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32438, "largest_seqno": 33698, "table_properties": {"data_size": 1084612, "index_size": 2945, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13886, "raw_average_key_size": 21, "raw_value_size": 1072614, "raw_average_value_size": 1652, "num_data_blocks": 128, "num_entries": 649, "num_filter_entries": 649, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929610, "oldest_key_time": 1764929610, "file_creation_time": 1764929678, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}} Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 8352 microseconds, and 3217 cpu microseconds. Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.328864) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 1090279 bytes OK Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.328885) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.331072) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.331087) EVENT_LOG_v1 {"time_micros": 1764929678331083, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.331106) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1107782, prev total WAL file size 1107782, number of live WAL files 2. Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.331924) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end) Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(1064KB)], [57(16MB)] Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929678332000, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 18048961, "oldest_snapshot_seqno": -1} Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 13063 keys, 16758808 bytes, temperature: kUnknown Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929678408902, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 16758808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16685749, "index_size": 39294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32709, "raw_key_size": 350023, "raw_average_key_size": 26, "raw_value_size": 16464945, "raw_average_value_size": 1260, "num_data_blocks": 1471, "num_entries": 13063, "num_filter_entries": 13063, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764929678, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}} Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.409321) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 16758808 bytes Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.413781) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.4 rd, 217.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 16.2 +0.0 blob) out(16.0 +0.0 blob), read-write-amplify(31.9) write-amplify(15.4) OK, records in: 13597, records dropped: 534 output_compression: NoCompression Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.413798) EVENT_LOG_v1 {"time_micros": 1764929678413789, "job": 34, "event": "compaction_finished", "compaction_time_micros": 76998, "compaction_time_cpu_micros": 31163, "output_level": 6, "num_output_files": 1, "total_output_size": 16758808, "num_input_records": 13597, "num_output_records": 13063, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929678414040, "job": 34, "event": "table_file_deletion", "file_number": 59} Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929678416213, "job": 34, "event": "table_file_deletion", "file_number": 57} Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.331798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.416474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.416483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.416487) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.416490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:14:38 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:14:38.416493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:14:38 localhost nova_compute[280228]: 2025-12-05 10:14:38.909 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:14:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:14:39 localhost systemd[1]: tmp-crun.ezHuXz.mount: Deactivated successfully. Dec 5 05:14:39 localhost podman[322345]: 2025-12-05 10:14:39.217753957 +0000 UTC m=+0.103962934 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350) Dec 5 05:14:39 localhost podman[322344]: 2025-12-05 10:14:39.269086988 +0000 UTC m=+0.157669428 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Dec 5 05:14:39 localhost podman[322344]: 2025-12-05 10:14:39.281746906 +0000 UTC m=+0.170329346 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 5 05:14:39 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:14:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e201 do_prune osdmap full prune enabled Dec 5 05:14:39 localhost podman[322345]: 2025-12-05 10:14:39.353768872 +0000 UTC m=+0.239977849 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, vcs-type=git, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container) Dec 5 05:14:39 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
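
The transient systemd units above ("Started /usr/bin/podman healthcheck run <id>") fire the container's configured healthcheck, and the health_status=healthy / exec_died pair is the result being recorded. A sketch of the same probe from Python; the exit-code mapping (0 = healthy) is the usual podman convention, treated here as an assumption:

```python
# Run a container's healthcheck and map the exit status, as the
# systemd-driven "podman healthcheck run" invocations above do.
import subprocess

def check(container_id: str) -> str:
    rc = subprocess.run(
        ["podman", "healthcheck", "run", container_id],
        capture_output=True, text=True,
    ).returncode
    return "healthy" if rc == 0 else "unhealthy"

# Container ID taken from the multipathd healthcheck log line above.
print(check("8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173"))
```
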
Dec 5 05:14:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e202 e202: 6 total, 6 up, 6 in Dec 5 05:14:39 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e202: 6 total, 6 up, 6 in Dec 5 05:14:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 349 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 15 MiB/s wr, 94 op/s Dec 5 05:14:40 localhost nova_compute[280228]: 2025-12-05 10:14:40.024 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:14:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v502: 177 pgs: 177 active+clean; 477 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 111 KiB/s rd, 28 MiB/s wr, 163 op/s Dec 5 05:14:43 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:43.252 261902 INFO neutron.agent.linux.ip_lib [None req-5e375226-9b58-4a98-a305-e6ccefd16357 - - - - - -] Device tapba18f1c2-a0 cannot be used as it has no MAC address#033[00m Dec 5 05:14:43 localhost nova_compute[280228]: 2025-12-05 10:14:43.283 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:43 localhost kernel: device tapba18f1c2-a0 entered promiscuous mode Dec 5 05:14:43 localhost NetworkManager[5960]: [1764929683.2913] manager: (tapba18f1c2-a0): new Generic device (/org/freedesktop/NetworkManager/Devices/59) Dec 5 05:14:43 localhost nova_compute[280228]: 2025-12-05 10:14:43.291 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:43 localhost ovn_controller[153000]: 2025-12-05T10:14:43Z|00359|binding|INFO|Claiming lport ba18f1c2-a0d8-42ae-9f65-abe0fd27f2f2 for this chassis. Dec 5 05:14:43 localhost ovn_controller[153000]: 2025-12-05T10:14:43Z|00360|binding|INFO|ba18f1c2-a0d8-42ae-9f65-abe0fd27f2f2: Claiming unknown Dec 5 05:14:43 localhost systemd-udevd[322392]: Network interface NamePolicy= disabled on kernel command line. 
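
The device name in "Device tapba18f1c2-a0 cannot be used as it has no MAC address" lines up with the lport claim below for port ba18f1c2-a0d8-42ae-9f65-abe0fd27f2f2: Neutron derives tap names as "tap" plus the port UUID, truncated to 14 characters to stay under the kernel's 15-character interface-name limit (IFNAMSIZ). A sketch of that convention:

```python
# Reproduce the tap-device naming visible in the log above.
PORT_ID = "ba18f1c2-a0d8-42ae-9f65-abe0fd27f2f2"  # from the lport claim

def tap_name(port_id: str, prefix: str = "tap") -> str:
    # 3-char prefix + 11 chars of UUID = 14, within the kernel limit.
    return (prefix + port_id)[:14]

assert tap_name(PORT_ID) == "tapba18f1c2-a0"
print(tap_name(PORT_ID))
```
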
Dec 5 05:14:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:43.305 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3228b8fc-d0b0-4559-bc59-54be1da8ae6a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ba18f1c2-a0d8-42ae-9f65-abe0fd27f2f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:14:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:43.306 158820 INFO neutron.agent.ovn.metadata.agent [-] Port ba18f1c2-a0d8-42ae-9f65-abe0fd27f2f2 in datapath a0a9b67e-778c-4f9f-a1ce-1cf98d562a33 bound to our chassis#033[00m Dec 5 05:14:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:43.308 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port 6186c7ad-de5b-4cc8-9464-4996139954c7 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 5 05:14:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:43.308 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:14:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:43.308 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[59554e89-4b1d-481e-aff6-b6051c2ad75e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:14:43 localhost journal[228791]: ethtool ioctl error on tapba18f1c2-a0: No such device Dec 5 05:14:43 localhost ovn_controller[153000]: 2025-12-05T10:14:43Z|00361|binding|INFO|Setting lport ba18f1c2-a0d8-42ae-9f65-abe0fd27f2f2 up in Southbound Dec 5 05:14:43 localhost ovn_controller[153000]: 2025-12-05T10:14:43Z|00362|binding|INFO|Setting lport ba18f1c2-a0d8-42ae-9f65-abe0fd27f2f2 ovn-installed in OVS Dec 5 05:14:43 localhost nova_compute[280228]: 2025-12-05 10:14:43.331 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:43 localhost journal[228791]: ethtool ioctl error on tapba18f1c2-a0: No such device Dec 5 05:14:43 localhost journal[228791]: ethtool ioctl error on tapba18f1c2-a0: No such device Dec 5 05:14:43 localhost journal[228791]: ethtool ioctl error on tapba18f1c2-a0: No such device Dec 5 05:14:43 
localhost journal[228791]: ethtool ioctl error on tapba18f1c2-a0: No such device Dec 5 05:14:43 localhost journal[228791]: ethtool ioctl error on tapba18f1c2-a0: No such device Dec 5 05:14:43 localhost journal[228791]: ethtool ioctl error on tapba18f1c2-a0: No such device Dec 5 05:14:43 localhost journal[228791]: ethtool ioctl error on tapba18f1c2-a0: No such device Dec 5 05:14:43 localhost nova_compute[280228]: 2025-12-05 10:14:43.378 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v503: 177 pgs: 177 active+clean; 485 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 114 KiB/s rd, 29 MiB/s wr, 168 op/s Dec 5 05:14:43 localhost nova_compute[280228]: 2025-12-05 10:14:43.406 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:43 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve49", "format": "json"}]: dispatch Dec 5 05:14:43 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:43 localhost nova_compute[280228]: 2025-12-05 10:14:43.939 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:43 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Dec 5 05:14:43 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Dec 5 05:14:43 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0) Dec 5 05:14:43 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Dec 5 05:14:43 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Dec 5 05:14:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "auth_id": "eve49", "format": "json"}]: dispatch Dec 5 05:14:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:44 localhost ceph-mgr[286454]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4/67f2c47b-25a6-4653-ac8b-70d9959772c6 Dec 5 05:14:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:14:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "format": "json"}]: dispatch Dec 5 05:14:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:14:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:14:44 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cd6e9824-a806-4dd3-a108-b909edbc40c4' of type subvolume Dec 5 05:14:44 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:14:44.230+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cd6e9824-a806-4dd3-a108-b909edbc40c4' of type subvolume Dec 5 05:14:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cd6e9824-a806-4dd3-a108-b909edbc40c4", "force": true, "format": "json"}]: dispatch Dec 5 05:14:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cd6e9824-a806-4dd3-a108-b909edbc40c4'' moved to trashcan Dec 5 05:14:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:14:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cd6e9824-a806-4dd3-a108-b909edbc40c4, vol_name:cephfs) < "" Dec 5 05:14:44 localhost podman[322464]: Dec 5 05:14:44 localhost podman[322464]: 2025-12-05 10:14:44.306679469 +0000 UTC m=+0.074623486 container create 9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 5 05:14:44 localhost systemd[1]: Started 
libpod-conmon-9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b.scope. Dec 5 05:14:44 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Dec 5 05:14:44 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Dec 5 05:14:44 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Dec 5 05:14:44 localhost systemd[1]: Started libcrun container. Dec 5 05:14:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a3d74553558f0484f1186800269b771d273e8e11e7b6b03d2fce208a2e6950/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:14:44 localhost podman[322464]: 2025-12-05 10:14:44.360797426 +0000 UTC m=+0.128741453 container init 9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:14:44 localhost podman[322464]: 2025-12-05 10:14:44.268722587 +0000 UTC m=+0.036666654 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:14:44 localhost podman[322464]: 2025-12-05 10:14:44.371514754 +0000 UTC m=+0.139458791 container start 9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 5 05:14:44 localhost dnsmasq[322483]: started, version 2.85 cachesize 150 Dec 5 05:14:44 localhost dnsmasq[322483]: DNS service limited to local subnets Dec 5 05:14:44 localhost dnsmasq[322483]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:14:44 localhost dnsmasq[322483]: warning: no upstream servers configured Dec 5 05:14:44 localhost dnsmasq-dhcp[322483]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 5 05:14:44 localhost dnsmasq[322483]: read /var/lib/neutron/dhcp/a0a9b67e-778c-4f9f-a1ce-1cf98d562a33/addn_hosts - 0 addresses Dec 5 05:14:44 localhost dnsmasq-dhcp[322483]: read /var/lib/neutron/dhcp/a0a9b67e-778c-4f9f-a1ce-1cf98d562a33/host Dec 5 05:14:44 localhost dnsmasq-dhcp[322483]: read /var/lib/neutron/dhcp/a0a9b67e-778c-4f9f-a1ce-1cf98d562a33/opts Dec 5 05:14:44 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:44.423 261902 INFO neutron.agent.dhcp.agent [None req-92736806-d99d-4aaa-a6e1-2240f6a72ed2 - - - - - -] Trigger 
reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:14:42Z, description=, device_id=5d4b4894-3319-4abc-b826-a288e4e446f6, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=851505bd-de47-4b2b-9ad4-bca5c7dee724, ip_allocation=immediate, mac_address=fa:16:3e:65:6e:e0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:35Z, description=, dns_domain=, id=a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-504699659, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=32329, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2929, status=ACTIVE, subnets=['915cd0f1-3c69-4e44-a1c6-d8bd55b28b75'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:36Z, vlan_transparent=None, network_id=a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2945, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:42Z on network a0a9b67e-778c-4f9f-a1ce-1cf98d562a33#033[00m Dec 5 05:14:44 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:44.512 261902 INFO neutron.agent.dhcp.agent [None req-6dcec9d5-262e-4320-be39-8b58b6a87f29 - - - - - -] DHCP configuration for ports {'5b5b4841-4bb6-47e5-a843-0695de905f94'} is completed#033[00m Dec 5 05:14:44 localhost dnsmasq[322483]: read /var/lib/neutron/dhcp/a0a9b67e-778c-4f9f-a1ce-1cf98d562a33/addn_hosts - 1 addresses Dec 5 05:14:44 localhost podman[322501]: 2025-12-05 10:14:44.657911963 +0000 UTC m=+0.059259955 container kill 9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:14:44 localhost dnsmasq-dhcp[322483]: read /var/lib/neutron/dhcp/a0a9b67e-778c-4f9f-a1ce-1cf98d562a33/host Dec 5 05:14:44 localhost dnsmasq-dhcp[322483]: read /var/lib/neutron/dhcp/a0a9b67e-778c-4f9f-a1ce-1cf98d562a33/opts Dec 5 05:14:44 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:44.798 261902 INFO neutron.agent.dhcp.agent [None req-4118d451-7a12-4c2d-a592-b6e80b1458bd - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:14:42Z, description=, device_id=5d4b4894-3319-4abc-b826-a288e4e446f6, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=851505bd-de47-4b2b-9ad4-bca5c7dee724, 
ip_allocation=immediate, mac_address=fa:16:3e:65:6e:e0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:35Z, description=, dns_domain=, id=a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-504699659, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=32329, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2929, status=ACTIVE, subnets=['915cd0f1-3c69-4e44-a1c6-d8bd55b28b75'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:36Z, vlan_transparent=None, network_id=a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2945, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:42Z on network a0a9b67e-778c-4f9f-a1ce-1cf98d562a33#033[00m Dec 5 05:14:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 5 05:14:44 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3381983602' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 5 05:14:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 5 05:14:44 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3381983602' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 5 05:14:44 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:44.875 261902 INFO neutron.agent.dhcp.agent [None req-5f43be97-a606-4baf-8d62-e4f3d2b65899 - - - - - -] DHCP configuration for ports {'851505bd-de47-4b2b-9ad4-bca5c7dee724'} is completed#033[00m Dec 5 05:14:44 localhost nova_compute[280228]: 2025-12-05 10:14:44.877 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:45 localhost dnsmasq[322483]: read /var/lib/neutron/dhcp/a0a9b67e-778c-4f9f-a1ce-1cf98d562a33/addn_hosts - 1 addresses Dec 5 05:14:45 localhost dnsmasq-dhcp[322483]: read /var/lib/neutron/dhcp/a0a9b67e-778c-4f9f-a1ce-1cf98d562a33/host Dec 5 05:14:45 localhost dnsmasq-dhcp[322483]: read /var/lib/neutron/dhcp/a0a9b67e-778c-4f9f-a1ce-1cf98d562a33/opts Dec 5 05:14:45 localhost podman[322539]: 2025-12-05 10:14:45.011554571 +0000 UTC m=+0.041628456 container kill 9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:14:45 localhost nova_compute[280228]: 2025-12-05 10:14:45.068 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:14:45 Dec 5 05:14:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:14:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:14:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['images', 'vms', 'backups', 'volumes', 'manila_metadata', 'manila_data', '.mgr'] Dec 5 05:14:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:14:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:14:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:14:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:14:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:14:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
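The audit records above show client.openstack repeatedly polling the mon with "df" and "osd pool get-quota". A minimal sketch of the same queries through the ceph CLI's JSON output (assumes a reachable cluster and a configured client keyring; the pool name "volumes" is taken from the log):

    import json
    import subprocess

    def ceph_json(*args):
        # Run a ceph subcommand with JSON output and parse the result.
        out = subprocess.check_output(["ceph", *args, "--format", "json"])
        return json.loads(out)

    df = ceph_json("df")
    quota = ceph_json("osd", "pool", "get-quota", "volumes")
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])
    print("volumes quota:", quota)
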
Dec 5 05:14:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:14:45 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:45.368 261902 INFO neutron.agent.dhcp.agent [None req-25ede391-89ba-46c6-adff-2001cfd56001 - - - - - -] DHCP configuration for ports {'851505bd-de47-4b2b-9ad4-bca5c7dee724'} is completed#033[00m Dec 5 05:14:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 485 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 98 KiB/s rd, 21 MiB/s wr, 141 op/s Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014877447131490787 of space, bias 1.0, pg target 0.2970530277254327 quantized to 32 (current 32) Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.024586546738321235 of space, bias 1.0, pg target 4.909113832084807 quantized to 32 (current 32) Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.316295016750419e-05 quantized to 32 (current 32) Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021265180067001676 quantized to 32 (current 32) Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:14:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00035051196375395497 of space, bias 4.0, pg target 0.2733993317280849 quantized to 16 (current 16) Dec 5 05:14:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:14:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:14:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:14:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:14:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:14:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:14:45 localhost ceph-mgr[286454]: [rbd_support 
INFO root] load_schedules: images, start_after= Dec 5 05:14:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:14:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:14:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:14:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:14:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e202 do_prune osdmap full prune enabled Dec 5 05:14:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e203 e203: 6 total, 6 up, 6 in Dec 5 05:14:45 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e203: 6 total, 6 up, 6 in Dec 5 05:14:46 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:46.425 261902 INFO neutron.agent.linux.ip_lib [None req-cf78a7ca-cfce-45e7-a82b-6519064b9fd4 - - - - - -] Device tap4bcac450-74 cannot be used as it has no MAC address#033[00m Dec 5 05:14:46 localhost nova_compute[280228]: 2025-12-05 10:14:46.463 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:46 localhost kernel: device tap4bcac450-74 entered promiscuous mode Dec 5 05:14:46 localhost NetworkManager[5960]: [1764929686.4716] manager: (tap4bcac450-74): new Generic device (/org/freedesktop/NetworkManager/Devices/60) Dec 5 05:14:46 localhost nova_compute[280228]: 2025-12-05 10:14:46.471 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:46 localhost ovn_controller[153000]: 2025-12-05T10:14:46Z|00363|binding|INFO|Claiming lport 4bcac450-746f-4219-a816-5b956f8b429b for this chassis. Dec 5 05:14:46 localhost ovn_controller[153000]: 2025-12-05T10:14:46Z|00364|binding|INFO|4bcac450-746f-4219-a816-5b956f8b429b: Claiming unknown Dec 5 05:14:46 localhost systemd-udevd[322569]: Network interface NamePolicy= disabled on kernel command line. 
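The pg_autoscaler records above derive each pool's pg target from its share of cluster space times its bias. A simplified reconstruction of that arithmetic; only "6 total, 6 up, 6 in" comes from the log, while 100 target PGs per OSD and 3x replication are assumptions, and the real module additionally folds in effective target ratios, pg_num_min, and a roughly 3x change threshold before it touches pg_num:

    import math

    TARGET_PG_PER_OSD = 100   # assumed mon_target_pg_per_osd default
    NUM_OSDS = 6              # from "e203: 6 total, 6 up, 6 in" above
    REPLICAS = 3              # assumed pool size

    def raw_pg_target(space_ratio, bias):
        return space_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS / REPLICAS

    def quantize(raw):
        # Nearest power of two, never below 1 (simplified: the real module
        # also honors pg_num_min and skips changes smaller than ~3x).
        return 2 ** max(0, round(math.log2(max(raw, 1.0))))

    # Pool 'vms' from the log: "using 0.0033250017448352874 of space, bias 1.0"
    raw = raw_pg_target(0.0033250017448352874, 1.0)
    print(raw)            # ~0.665, matching "pg target 0.6650003489670575"
    print(quantize(raw))  # 1, before the pg_num_min / 3x rules keep it at 32
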
Dec 5 05:14:46 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:46.486 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9189846b-a1cc-47d6-8ff6-6e567fa94818, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4bcac450-746f-4219-a816-5b956f8b429b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:14:46 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:46.488 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 4bcac450-746f-4219-a816-5b956f8b429b in datapath 2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a bound to our chassis#033[00m Dec 5 05:14:46 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:46.490 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:14:46 localhost ovn_metadata_agent[158815]: 2025-12-05 10:14:46.490 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a8c11c-129e-4dfc-9994-09dd06af9ac1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:14:46 localhost journal[228791]: ethtool ioctl error on tap4bcac450-74: No such device Dec 5 05:14:46 localhost ovn_controller[153000]: 2025-12-05T10:14:46Z|00365|binding|INFO|Setting lport 4bcac450-746f-4219-a816-5b956f8b429b ovn-installed in OVS Dec 5 05:14:46 localhost ovn_controller[153000]: 2025-12-05T10:14:46Z|00366|binding|INFO|Setting lport 4bcac450-746f-4219-a816-5b956f8b429b up in Southbound Dec 5 05:14:46 localhost journal[228791]: ethtool ioctl error on tap4bcac450-74: No such device Dec 5 05:14:46 localhost nova_compute[280228]: 2025-12-05 10:14:46.518 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:46 localhost nova_compute[280228]: 2025-12-05 10:14:46.520 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:46 localhost journal[228791]: ethtool ioctl error on tap4bcac450-74: No such device Dec 5 05:14:46 localhost journal[228791]: ethtool ioctl error on tap4bcac450-74: No such device Dec 5 05:14:46 localhost journal[228791]: ethtool ioctl error on tap4bcac450-74: No such device Dec 5 05:14:46 
localhost journal[228791]: ethtool ioctl error on tap4bcac450-74: No such device Dec 5 05:14:46 localhost journal[228791]: ethtool ioctl error on tap4bcac450-74: No such device Dec 5 05:14:46 localhost journal[228791]: ethtool ioctl error on tap4bcac450-74: No such device Dec 5 05:14:46 localhost nova_compute[280228]: 2025-12-05 10:14:46.548 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:46 localhost nova_compute[280228]: 2025-12-05 10:14:46.573 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 5 05:14:46 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1223691897' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 5 05:14:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 5 05:14:46 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1223691897' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 5 05:14:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v506: 177 pgs: 177 active+clean; 613 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 107 KiB/s rd, 33 MiB/s wr, 157 op/s Dec 5 05:14:47 localhost podman[322638]: Dec 5 05:14:47 localhost podman[322638]: 2025-12-05 10:14:47.426434699 +0000 UTC m=+0.103311534 container create 143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 5 05:14:47 localhost systemd[1]: Started libpod-conmon-143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890.scope. Dec 5 05:14:47 localhost podman[322638]: 2025-12-05 10:14:47.380869444 +0000 UTC m=+0.057746309 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:14:47 localhost systemd[1]: Started libcrun container. 
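The dnsmasq instances in this capture run with static leases only, read from per-network files (see the "read /var/lib/neutron/dhcp/<network-id>/host" records above). A small parser for dnsmasq's comma-separated hostsfile format; the sample entry is illustrative, not copied from this node's files:

    SAMPLE = "fa:16:3e:31:27:6d,host-10-101-0-2.openstacklocal,10.101.0.2\n"

    def parse_dhcp_hosts(text):
        # dnsmasq --dhcp-hostsfile entries: MAC,hostname,IP (extra fields ignored).
        leases = []
        for line in text.splitlines():
            if not line.strip():
                continue
            mac, name, ip = line.split(",")[:3]
            leases.append({"mac": mac, "name": name, "ip": ip})
        return leases

    print(parse_dhcp_hosts(SAMPLE))
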
Dec 5 05:14:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69828b63d2a2c22183305f1052156dd3bc8accdb6ae6ebf188f5e3fd36e1cd97/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:14:47 localhost podman[322638]: 2025-12-05 10:14:47.502792207 +0000 UTC m=+0.179669052 container init 143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:14:47 localhost podman[322638]: 2025-12-05 10:14:47.516048803 +0000 UTC m=+0.192925618 container start 143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:14:47 localhost dnsmasq[322656]: started, version 2.85 cachesize 150 Dec 5 05:14:47 localhost dnsmasq[322656]: DNS service limited to local subnets Dec 5 05:14:47 localhost dnsmasq[322656]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:14:47 localhost dnsmasq[322656]: warning: no upstream servers configured Dec 5 05:14:47 localhost dnsmasq-dhcp[322656]: DHCP, static leases only on 10.101.0.0, lease time 1d Dec 5 05:14:47 localhost dnsmasq[322656]: read /var/lib/neutron/dhcp/2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a/addn_hosts - 0 addresses Dec 5 05:14:47 localhost dnsmasq-dhcp[322656]: read /var/lib/neutron/dhcp/2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a/host Dec 5 05:14:47 localhost dnsmasq-dhcp[322656]: read /var/lib/neutron/dhcp/2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a/opts Dec 5 05:14:47 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:47.662 261902 INFO neutron.agent.dhcp.agent [None req-4d9e6d14-a0ca-48d3-95c8-0ec30620f8b6 - - - - - -] DHCP configuration for ports {'6a7690fe-25db-42ac-b713-070f5ea37639'} is completed#033[00m Dec 5 05:14:48 localhost systemd[1]: tmp-crun.boUeFP.mount: Deactivated successfully. 
Dec 5 05:14:48 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:48.676 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:14:48Z, description=, device_id=5d4b4894-3319-4abc-b826-a288e4e446f6, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ed3be4f5-ad82-4c79-b70a-9b5e95332811, ip_allocation=immediate, mac_address=fa:16:3e:31:27:6d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:44Z, description=, dns_domain=, id=2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1375099133, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62925, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2951, status=ACTIVE, subnets=['aea8373e-41f2-4975-8142-c71f1d7f37f4'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:45Z, vlan_transparent=None, network_id=2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2966, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:48Z on network 2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a#033[00m Dec 5 05:14:48 localhost nova_compute[280228]: 2025-12-05 10:14:48.988 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:48 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:48.991 2 INFO neutron.agent.securitygroups_rpc [None req-b4eb535c-8204-4d25-b0f4-40572269384f 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['3b13fc28-6bef-461a-b25f-c885640f870a']#033[00m Dec 5 05:14:48 localhost systemd[1]: tmp-crun.NQ0b0U.mount: Deactivated successfully. 
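The "Matched UPDATE: PortBindingUpdatedEvent(...)" records above show ovsdbapp dispatching Port_Binding row updates to the metadata agent. A simplified stand-in for that event class, meant only to illustrate the ovsdbapp RowEvent shape (IDL connection setup omitted; this is not the agent's actual implementation):

    from ovsdbapp.backend.ovs_idl.event import RowEvent

    class PortBindingUpdated(RowEvent):
        """Simplified stand-in for the PortBindingUpdatedEvent matched above."""

        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only for ports requested on our chassis, mirroring the
            # "bound to our chassis" INFO records in this log.
            return row.options.get('requested-chassis') == self.chassis_name

        def run(self, event, row, old):
            print(f"port {row.logical_port} bound to {self.chassis_name}")
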
Dec 5 05:14:48 localhost dnsmasq[322656]: read /var/lib/neutron/dhcp/2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a/addn_hosts - 1 addresses Dec 5 05:14:48 localhost dnsmasq-dhcp[322656]: read /var/lib/neutron/dhcp/2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a/host Dec 5 05:14:48 localhost podman[322673]: 2025-12-05 10:14:48.998051349 +0000 UTC m=+0.112315920 container kill 143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 5 05:14:48 localhost dnsmasq-dhcp[322656]: read /var/lib/neutron/dhcp/2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a/opts Dec 5 05:14:49 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:49.253 2 INFO neutron.agent.securitygroups_rpc [None req-8953229a-9500-4eaa-91a9-0656e1e69b95 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['3b13fc28-6bef-461a-b25f-c885640f870a']#033[00m Dec 5 05:14:49 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:49.304 261902 INFO neutron.agent.dhcp.agent [None req-b11bbd51-aaa6-4056-a6b6-6957b96a2769 - - - - - -] DHCP configuration for ports {'ed3be4f5-ad82-4c79-b70a-9b5e95332811'} is completed#033[00m Dec 5 05:14:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 613 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 86 KiB/s rd, 26 MiB/s wr, 126 op/s Dec 5 05:14:49 localhost podman[239519]: time="2025-12-05T10:14:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:14:49 localhost podman[239519]: @ - - [05/Dec/2025:10:14:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159759 "" "Go-http-client/1.1" Dec 5 05:14:49 localhost podman[239519]: @ - - [05/Dec/2025:10:14:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20219 "" "Go-http-client/1.1" Dec 5 05:14:50 localhost nova_compute[280228]: 2025-12-05 10:14:50.101 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:14:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:14:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:14:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
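Each "Started /usr/bin/podman healthcheck run <id>" unit just above executes the container's configured healthcheck; the exec_died and "Deactivated successfully" records that follow are that check process exiting. The same probe from Python (assumes podman is installed; the container ID is the ovn_metadata_agent container seen in these records):

    import subprocess

    def container_healthy(container_id):
        # "podman healthcheck run" exits 0 when the check passes, non-zero otherwise.
        res = subprocess.run(["podman", "healthcheck", "run", container_id],
                             capture_output=True, text=True)
        return res.returncode == 0

    print(container_healthy(
        "1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465"))
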
Dec 5 05:14:50 localhost podman[322695]: 2025-12-05 10:14:50.225947315 +0000 UTC m=+0.092353978 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent) Dec 5 05:14:50 localhost podman[322694]: 2025-12-05 10:14:50.286966993 +0000 UTC m=+0.154197042 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:14:50 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:50.290 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:14:48Z, description=, device_id=5d4b4894-3319-4abc-b826-a288e4e446f6, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ed3be4f5-ad82-4c79-b70a-9b5e95332811, 
ip_allocation=immediate, mac_address=fa:16:3e:31:27:6d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:44Z, description=, dns_domain=, id=2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1375099133, port_security_enabled=True, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=62925, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2951, status=ACTIVE, subnets=['aea8373e-41f2-4975-8142-c71f1d7f37f4'], tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:45Z, vlan_transparent=None, network_id=2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, port_security_enabled=False, project_id=ecb85ff3c88d49d6b771a6e34a36ee4c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2966, status=DOWN, tags=[], tenant_id=ecb85ff3c88d49d6b771a6e34a36ee4c, updated_at=2025-12-05T10:14:48Z on network 2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a#033[00m Dec 5 05:14:50 localhost podman[322694]: 2025-12-05 10:14:50.32146132 +0000 UTC m=+0.188691279 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:14:50 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
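The records in this capture follow the classic syslog layout: timestamp, host, tag[pid], message. A regex that splits them apart, handling both "name[pid]:" and bare "name:" tags; the sample line is copied from the systemd record just above:

    import re

    RECORD = re.compile(
        r"(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
        r"(?P<host>\S+) "
        r"(?P<tag>[\w.@-]+)(?:\[(?P<pid>\d+)\])?: "
        r"(?P<msg>.*)")

    line = ("Dec 5 05:14:50 localhost systemd[1]: "
            "192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6"
            ".service: Deactivated successfully.")
    m = RECORD.match(line)
    print(m.group("ts"), m.group("tag"), m.group("pid"), m.group("msg"))
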
Dec 5 05:14:50 localhost podman[322696]: 2025-12-05 10:14:50.342361239 +0000 UTC m=+0.198262462 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 5 05:14:50 localhost podman[322696]: 2025-12-05 10:14:50.355754879 +0000 UTC m=+0.211656102 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 5 05:14:50 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:14:50 localhost podman[322695]: 2025-12-05 10:14:50.4089973 +0000 UTC m=+0.275403913 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 05:14:50 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
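The bursts of near-identical records around here ("Security group rule updated ...", the earlier "ethtool ioctl error" runs) are easier to audit aggregated. A sketch that collapses repeats by masking per-request UUIDs before counting; the sample messages are taken from the surrounding records:

    import re
    from collections import Counter

    UUID = re.compile(
        r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}")

    def summarize(messages):
        # Mask UUIDs so repeated events with different IDs collapse together.
        return Counter(UUID.sub("<uuid>", m) for m in messages)

    msgs = [
        "Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']",
        "Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']",
        "ethtool ioctl error on tap4bcac450-74: No such device",
    ]
    for msg, count in summarize(msgs).most_common():
        print(count, msg)
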
Dec 5 05:14:50 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:50.448 2 INFO neutron.agent.securitygroups_rpc [None req-acff6c3c-20c2-42e9-8367-8bf60c08998e 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']#033[00m Dec 5 05:14:50 localhost podman[322771]: 2025-12-05 10:14:50.528223559 +0000 UTC m=+0.060572054 container kill 143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:14:50 localhost dnsmasq[322656]: read /var/lib/neutron/dhcp/2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a/addn_hosts - 1 addresses Dec 5 05:14:50 localhost dnsmasq-dhcp[322656]: read /var/lib/neutron/dhcp/2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a/host Dec 5 05:14:50 localhost dnsmasq-dhcp[322656]: read /var/lib/neutron/dhcp/2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a/opts Dec 5 05:14:50 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:14:50.761 261902 INFO neutron.agent.dhcp.agent [None req-2398db52-fd24-42bc-afd7-04432a5490ed - - - - - -] DHCP configuration for ports {'ed3be4f5-ad82-4c79-b70a-9b5e95332811'} is completed#033[00m Dec 5 05:14:50 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:50.775 2 INFO neutron.agent.securitygroups_rpc [None req-b4d6abfd-89cc-4b34-8e57-eddd9213293a 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']#033[00m Dec 5 05:14:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:14:51 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:51.217 2 INFO neutron.agent.securitygroups_rpc [None req-00e7fa0b-d6ce-4a61-9e9d-6e5fdfe76d15 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']#033[00m Dec 5 05:14:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 709 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 3.3 MiB/s rd, 23 MiB/s wr, 99 op/s Dec 5 05:14:51 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:51.427 2 INFO neutron.agent.securitygroups_rpc [None req-55df744d-37b9-4d9b-82c5-600a5711382e 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']#033[00m Dec 5 05:14:51 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:51.629 2 INFO neutron.agent.securitygroups_rpc [None req-3d510268-4f8b-42df-9063-a848385b1789 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']#033[00m Dec 5 05:14:51 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:51.767 2 INFO neutron.agent.securitygroups_rpc [None req-08fa5ba3-71c4-467c-9744-a34ca4d77e1c 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - 
default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 5 05:14:51 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:51.966 2 INFO neutron.agent.securitygroups_rpc [None req-7daa08e9-f9ca-432e-a5fb-eae3684adb76 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 5 05:14:52 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:52.243 2 INFO neutron.agent.securitygroups_rpc [None req-0cf65179-4b03-4637-aa4f-57aa5de6772b 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 5 05:14:52 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:52.469 2 INFO neutron.agent.securitygroups_rpc [None req-b74fea50-bc3a-432f-a098-d6c59815cd89 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 5 05:14:52 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:52.769 2 INFO neutron.agent.securitygroups_rpc [None req-96912d19-e3e5-4dfa-9b21-f05bcc4a1e8c 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['c4e0d255-24dd-4263-b76d-dd557d1f8b9c']
Dec 5 05:14:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v509: 177 pgs: 177 active+clean; 741 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 4.1 MiB/s rd, 26 MiB/s wr, 96 op/s
Dec 5 05:14:54 localhost nova_compute[280228]: 2025-12-05 10:14:54.024 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:14:55 localhost nova_compute[280228]: 2025-12-05 10:14:55.147 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:14:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 741 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 4.1 MiB/s rd, 26 MiB/s wr, 96 op/s
Dec 5 05:14:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:14:57 localhost openstack_network_exporter[241668]: ERROR 10:14:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:14:57 localhost openstack_network_exporter[241668]: ERROR 10:14:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:14:57 localhost openstack_network_exporter[241668]: ERROR 10:14:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:14:57 localhost openstack_network_exporter[241668]: ERROR 10:14:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:14:57 localhost openstack_network_exporter[241668]:
Dec 5 05:14:57 localhost openstack_network_exporter[241668]: ERROR 10:14:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:14:57 localhost openstack_network_exporter[241668]:
Dec 5 05:14:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 859 MiB data, 2.8 GiB used, 39 GiB / 42 GiB avail; 3.7 MiB/s rd, 31 MiB/s wr, 138 op/s
Dec 5 05:14:57 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 5 05:14:57 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:57.689 2 INFO neutron.agent.securitygroups_rpc [None req-5b721f9b-4734-47df-b750-a96ce7d95809 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['2d77f4a6-d9e8-4008-9e9c-4856540450f6']
Dec 5 05:14:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:14:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 05:14:58 localhost podman[322793]: 2025-12-05 10:14:58.211749362 +0000 UTC m=+0.090370048 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 5 05:14:58 localhost podman[322794]: 2025-12-05 10:14:58.267677534 +0000 UTC m=+0.139536543 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 05:14:58 localhost podman[322794]: 2025-12-05 10:14:58.282632082 +0000 UTC m=+0.154491061 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 05:14:58 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 05:14:58 localhost podman[322793]: 2025-12-05 10:14:58.33317212 +0000 UTC m=+0.211792796 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:14:58 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:14:58 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:58.671 2 INFO neutron.agent.securitygroups_rpc [None req-ae0c85e9-ac65-4efd-bde1-c55d644e4787 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['6a825810-26b1-4733-b049-419f3c6eaa8b']
Dec 5 05:14:58 localhost neutron_sriov_agent[254996]: 2025-12-05 10:14:58.943 2 INFO neutron.agent.securitygroups_rpc [None req-eb36a368-79c3-4dae-a106-964d225968bf 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['6a825810-26b1-4733-b049-419f3c6eaa8b']
Dec 5 05:14:58 localhost nova_compute[280228]: 2025-12-05 10:14:58.944 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:14:59 localhost nova_compute[280228]: 2025-12-05 10:14:59.026 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:14:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v512: 177 pgs: 177 active+clean; 859 MiB data, 2.8 GiB used, 39 GiB / 42 GiB avail; 3.5 MiB/s rd, 18 MiB/s wr, 89 op/s
Dec 5 05:15:00 localhost nova_compute[280228]: 2025-12-05 10:15:00.150 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:00 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:15:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v513: 177 pgs: 177 active+clean; 1008 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 3.5 MiB/s rd, 29 MiB/s wr, 134 op/s
Dec 5 05:15:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 1010 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 819 KiB/s rd, 21 MiB/s wr, 98 op/s
Dec 5 05:15:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:03.921 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:15:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:03.921 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:15:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:03.922 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:15:04 localhost nova_compute[280228]: 2025-12-05 10:15:04.058 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:04 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:04.671 261902 INFO neutron.agent.linux.ip_lib [None req-b1b68b18-85c2-4910-8ce8-97e36d1518a0 - - - - - -] Device tap59006ef9-b1 cannot be used as it has no MAC address
Dec 5 05:15:04 localhost nova_compute[280228]: 2025-12-05 10:15:04.704 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:04 localhost kernel: device tap59006ef9-b1 entered promiscuous mode
Dec 5 05:15:04 localhost ovn_controller[153000]: 2025-12-05T10:15:04Z|00367|binding|INFO|Claiming lport 59006ef9-b1ff-4d81-8b97-f6bd9f93ff31 for this chassis.
Dec 5 05:15:04 localhost ovn_controller[153000]: 2025-12-05T10:15:04Z|00368|binding|INFO|59006ef9-b1ff-4d81-8b97-f6bd9f93ff31: Claiming unknown
Dec 5 05:15:04 localhost nova_compute[280228]: 2025-12-05 10:15:04.722 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:04 localhost NetworkManager[5960]: [1764929704.7249] manager: (tap59006ef9-b1): new Generic device (/org/freedesktop/NetworkManager/Devices/61)
Dec 5 05:15:04 localhost systemd-udevd[322851]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:15:04 localhost journal[228791]: ethtool ioctl error on tap59006ef9-b1: No such device
Dec 5 05:15:04 localhost ovn_controller[153000]: 2025-12-05T10:15:04Z|00369|binding|INFO|Setting lport 59006ef9-b1ff-4d81-8b97-f6bd9f93ff31 ovn-installed in OVS
Dec 5 05:15:04 localhost journal[228791]: ethtool ioctl error on tap59006ef9-b1: No such device
Dec 5 05:15:04 localhost nova_compute[280228]: 2025-12-05 10:15:04.758 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:04 localhost journal[228791]: ethtool ioctl error on tap59006ef9-b1: No such device
Dec 5 05:15:04 localhost journal[228791]: ethtool ioctl error on tap59006ef9-b1: No such device
Dec 5 05:15:04 localhost journal[228791]: ethtool ioctl error on tap59006ef9-b1: No such device
Dec 5 05:15:04 localhost journal[228791]: ethtool ioctl error on tap59006ef9-b1: No such device
Dec 5 05:15:04 localhost journal[228791]: ethtool ioctl error on tap59006ef9-b1: No such device
Dec 5 05:15:04 localhost journal[228791]: ethtool ioctl error on tap59006ef9-b1: No such device
Dec 5 05:15:04 localhost nova_compute[280228]: 2025-12-05 10:15:04.790 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:04 localhost nova_compute[280228]: 2025-12-05 10:15:04.816 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:05 localhost ovn_controller[153000]: 2025-12-05T10:15:05Z|00370|binding|INFO|Setting lport 59006ef9-b1ff-4d81-8b97-f6bd9f93ff31 up in Southbound
Dec 5 05:15:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:05.126 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-40eb8fcf-73ef-4263-9385-03681ae183ee', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40eb8fcf-73ef-4263-9385-03681ae183ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9ecfd3b522f49d0af3f986588ac20ec', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3f5f0bf-6f05-4bcc-9bd1-2da9ca237fb5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=59006ef9-b1ff-4d81-8b97-f6bd9f93ff31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:15:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:05.128 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 59006ef9-b1ff-4d81-8b97-f6bd9f93ff31 in datapath 40eb8fcf-73ef-4263-9385-03681ae183ee bound to our chassis
Dec 5 05:15:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:05.131 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port e4d714b0-d94c-4b27-9060-c174ccba3a5e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 5 05:15:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:05.131 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 40eb8fcf-73ef-4263-9385-03681ae183ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 5 05:15:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:05.132 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[b2de6bd6-6555-4662-ae8a-f87a6a587a06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:15:05 localhost nova_compute[280228]: 2025-12-05 10:15:05.195 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 1010 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 62 KiB/s rd, 18 MiB/s wr, 97 op/s
Dec 5 05:15:05 localhost neutron_sriov_agent[254996]: 2025-12-05 10:15:05.517 2 INFO neutron.agent.securitygroups_rpc [None req-0e5293fa-6190-4c30-acc0-1d137f88da19 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['e53a6f68-d93a-4c1f-b949-d3e9b2de74b1']
Dec 5 05:15:05 localhost neutron_sriov_agent[254996]: 2025-12-05 10:15:05.724 2 INFO neutron.agent.securitygroups_rpc [None req-e790b3b3-b7d1-4362-aafa-fb0bf024c01a 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['e53a6f68-d93a-4c1f-b949-d3e9b2de74b1']
Dec 5 05:15:05 localhost podman[322921]:
Dec 5 05:15:05 localhost podman[322921]: 2025-12-05 10:15:05.777322564 +0000 UTC m=+0.082218189 container create 3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40eb8fcf-73ef-4263-9385-03681ae183ee, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 5 05:15:05 localhost systemd[1]: Started libpod-conmon-3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd.scope.
Dec 5 05:15:05 localhost systemd[1]: tmp-crun.hBX0iQ.mount: Deactivated successfully.
Dec 5 05:15:05 localhost podman[322921]: 2025-12-05 10:15:05.743653463 +0000 UTC m=+0.048549118 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:15:05 localhost systemd[1]: Started libcrun container.
Dec 5 05:15:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74a1e6e5632613886d2e23ed51a05e10614c92c0fed93418b02ae0f0c887852/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:15:05 localhost podman[322921]: 2025-12-05 10:15:05.884856846 +0000 UTC m=+0.189752491 container init 3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40eb8fcf-73ef-4263-9385-03681ae183ee, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:15:05 localhost podman[322921]: 2025-12-05 10:15:05.897812253 +0000 UTC m=+0.202707868 container start 3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40eb8fcf-73ef-4263-9385-03681ae183ee, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 5 05:15:05 localhost dnsmasq[322940]: started, version 2.85 cachesize 150
Dec 5 05:15:05 localhost dnsmasq[322940]: DNS service limited to local subnets
Dec 5 05:15:05 localhost dnsmasq[322940]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:15:05 localhost dnsmasq[322940]: warning: no upstream servers configured
Dec 5 05:15:05 localhost dnsmasq-dhcp[322940]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:15:05 localhost dnsmasq[322940]: read /var/lib/neutron/dhcp/40eb8fcf-73ef-4263-9385-03681ae183ee/addn_hosts - 0 addresses
Dec 5 05:15:05 localhost dnsmasq-dhcp[322940]: read /var/lib/neutron/dhcp/40eb8fcf-73ef-4263-9385-03681ae183ee/host
Dec 5 05:15:05 localhost dnsmasq-dhcp[322940]: read /var/lib/neutron/dhcp/40eb8fcf-73ef-4263-9385-03681ae183ee/opts
Dec 5 05:15:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:15:06 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:06.080 261902 INFO neutron.agent.dhcp.agent [None req-d856c494-deb8-4247-8455-d09a94ab3cab - - - - - -] DHCP configuration for ports {'48e6e3f1-72bf-4f90-bdf0-d01fcc63d13a'} is completed
Dec 5 05:15:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e203 do_prune osdmap full prune enabled
Dec 5 05:15:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e204 e204: 6 total, 6 up, 6 in
Dec 5 05:15:06 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e204: 6 total, 6 up, 6 in
Dec 5 05:15:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:15:06 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1568474033' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:15:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:15:06 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1568474033' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:15:06 localhost neutron_sriov_agent[254996]: 2025-12-05 10:15:06.394 2 INFO neutron.agent.securitygroups_rpc [None req-562688ca-98a5-44ea-a36a-018c1f537f62 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['df98d59c-274f-41ef-9f84-8cccddaebba9']
Dec 5 05:15:06 localhost neutron_sriov_agent[254996]: 2025-12-05 10:15:06.629 2 INFO neutron.agent.securitygroups_rpc [None req-f454cbbc-1bd4-4c89-8f88-4d0686edf94e 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['df98d59c-274f-41ef-9f84-8cccddaebba9']
Dec 5 05:15:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v517: 177 pgs: 177 active+clean; 1.0 GiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 78 KiB/s rd, 25 MiB/s wr, 120 op/s
Dec 5 05:15:08 localhost neutron_sriov_agent[254996]: 2025-12-05 10:15:08.255 2 INFO neutron.agent.securitygroups_rpc [None req-7a2675cb-7b38-4211-acfb-ffbaf5662b1d 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['df98d59c-274f-41ef-9f84-8cccddaebba9']
Dec 5 05:15:08 localhost nova_compute[280228]: 2025-12-05 10:15:08.455 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:08 localhost neutron_sriov_agent[254996]: 2025-12-05 10:15:08.700 2 INFO neutron.agent.securitygroups_rpc [None req-82d19382-8b47-4f74-a75a-d5e52337093c 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['df98d59c-274f-41ef-9f84-8cccddaebba9']
Dec 5 05:15:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:09.037 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:15:08Z, description=, device_id=81c532cf-d65b-43d4-afac-7978978c4b5f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e983b6bc-5709-4d2b-9300-8b6cc96e62dd, ip_allocation=immediate, mac_address=fa:16:3e:4b:2d:58, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:58Z, description=, dns_domain=, id=40eb8fcf-73ef-4263-9385-03681ae183ee, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPITest-462468981-network, port_security_enabled=True, project_id=d9ecfd3b522f49d0af3f986588ac20ec, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15208, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3031, status=ACTIVE, subnets=['4859d964-ee95-41eb-87f0-590e301b5d6a'], tags=[], tenant_id=d9ecfd3b522f49d0af3f986588ac20ec, updated_at=2025-12-05T10:14:58Z, vlan_transparent=None, network_id=40eb8fcf-73ef-4263-9385-03681ae183ee, port_security_enabled=False, project_id=d9ecfd3b522f49d0af3f986588ac20ec, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3063, status=DOWN, tags=[], tenant_id=d9ecfd3b522f49d0af3f986588ac20ec, updated_at=2025-12-05T10:15:08Z on network 40eb8fcf-73ef-4263-9385-03681ae183ee
Dec 5 05:15:09 localhost nova_compute[280228]: 2025-12-05 10:15:09.084 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e204 do_prune osdmap full prune enabled
Dec 5 05:15:09 localhost systemd[1]: tmp-crun.Des6jv.mount: Deactivated successfully.
Dec 5 05:15:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e205 e205: 6 total, 6 up, 6 in
Dec 5 05:15:09 localhost dnsmasq[322940]: read /var/lib/neutron/dhcp/40eb8fcf-73ef-4263-9385-03681ae183ee/addn_hosts - 1 addresses
Dec 5 05:15:09 localhost dnsmasq-dhcp[322940]: read /var/lib/neutron/dhcp/40eb8fcf-73ef-4263-9385-03681ae183ee/host
Dec 5 05:15:09 localhost dnsmasq-dhcp[322940]: read /var/lib/neutron/dhcp/40eb8fcf-73ef-4263-9385-03681ae183ee/opts
Dec 5 05:15:09 localhost podman[322957]: 2025-12-05 10:15:09.264298168 +0000 UTC m=+0.075444981 container kill 3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40eb8fcf-73ef-4263-9385-03681ae183ee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:15:09 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e205: 6 total, 6 up, 6 in
Dec 5 05:15:09 localhost neutron_sriov_agent[254996]: 2025-12-05 10:15:09.317 2 INFO neutron.agent.securitygroups_rpc [None req-121fcc22-4c23-49fd-9195-231305396d34 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['df98d59c-274f-41ef-9f84-8cccddaebba9']
Dec 5 05:15:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 1.0 GiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 54 KiB/s rd, 15 MiB/s wr, 82 op/s
Dec 5 05:15:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:09.575 261902 INFO neutron.agent.dhcp.agent [None req-da0f3321-84d1-4a26-bd35-dddc281fea37 - - - - - -] DHCP configuration for ports {'e983b6bc-5709-4d2b-9300-8b6cc96e62dd'} is completed
Dec 5 05:15:09 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:09.772 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:15:08Z, description=, device_id=81c532cf-d65b-43d4-afac-7978978c4b5f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e983b6bc-5709-4d2b-9300-8b6cc96e62dd, ip_allocation=immediate, mac_address=fa:16:3e:4b:2d:58, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:14:58Z, description=, dns_domain=, id=40eb8fcf-73ef-4263-9385-03681ae183ee, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPITest-462468981-network, port_security_enabled=True, project_id=d9ecfd3b522f49d0af3f986588ac20ec, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15208, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3031, status=ACTIVE, subnets=['4859d964-ee95-41eb-87f0-590e301b5d6a'], tags=[], tenant_id=d9ecfd3b522f49d0af3f986588ac20ec, updated_at=2025-12-05T10:14:58Z, vlan_transparent=None, network_id=40eb8fcf-73ef-4263-9385-03681ae183ee, port_security_enabled=False, project_id=d9ecfd3b522f49d0af3f986588ac20ec, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3063, status=DOWN, tags=[], tenant_id=d9ecfd3b522f49d0af3f986588ac20ec, updated_at=2025-12-05T10:15:08Z on network 40eb8fcf-73ef-4263-9385-03681ae183ee
Dec 5 05:15:09 localhost neutron_sriov_agent[254996]: 2025-12-05 10:15:09.886 2 INFO neutron.agent.securitygroups_rpc [None req-f71bc6c2-560f-406b-a2c3-c442701bff84 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['df98d59c-274f-41ef-9f84-8cccddaebba9']
Dec 5 05:15:10 localhost dnsmasq[322940]: read /var/lib/neutron/dhcp/40eb8fcf-73ef-4263-9385-03681ae183ee/addn_hosts - 1 addresses
Dec 5 05:15:10 localhost dnsmasq-dhcp[322940]: read /var/lib/neutron/dhcp/40eb8fcf-73ef-4263-9385-03681ae183ee/host
Dec 5 05:15:10 localhost dnsmasq-dhcp[322940]: read /var/lib/neutron/dhcp/40eb8fcf-73ef-4263-9385-03681ae183ee/opts
Dec 5 05:15:10 localhost podman[322996]: 2025-12-05 10:15:10.020439458 +0000 UTC m=+0.070270752 container kill 3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40eb8fcf-73ef-4263-9385-03681ae183ee, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 5 05:15:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:15:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:15:10 localhost podman[323010]: 2025-12-05 10:15:10.160426295 +0000 UTC m=+0.102015655 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, version=9.6, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 5 05:15:10 localhost nova_compute[280228]: 2025-12-05 10:15:10.212 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:10 localhost podman[323010]: 2025-12-05 10:15:10.228004395 +0000 UTC m=+0.169593795 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Dec 5 05:15:10 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:15:10 localhost podman[323009]: 2025-12-05 10:15:10.229019825 +0000 UTC m=+0.174116172 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 5 05:15:10 localhost podman[323009]: 2025-12-05 10:15:10.312353837 +0000 UTC m=+0.257450134 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 5 05:15:10 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:15:10 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:10.451 261902 INFO neutron.agent.dhcp.agent [None req-2c4e6ca2-69ed-425c-a802-a6616ce75635 - - - - - -] DHCP configuration for ports {'e983b6bc-5709-4d2b-9300-8b6cc96e62dd'} is completed
Dec 5 05:15:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:15:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v520: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 84 KiB/s rd, 26 MiB/s wr, 124 op/s
Dec 5 05:15:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d52e7204-a858-4168-9b05-d7f96da9d351", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:15:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d52e7204-a858-4168-9b05-d7f96da9d351, vol_name:cephfs) < ""
Dec 5 05:15:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d52e7204-a858-4168-9b05-d7f96da9d351/.meta.tmp'
Dec 5 05:15:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d52e7204-a858-4168-9b05-d7f96da9d351/.meta.tmp' to config b'/volumes/_nogroup/d52e7204-a858-4168-9b05-d7f96da9d351/.meta'
Dec 5 05:15:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d52e7204-a858-4168-9b05-d7f96da9d351, vol_name:cephfs) < ""
Dec 5 05:15:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d52e7204-a858-4168-9b05-d7f96da9d351", "format": "json"}]: dispatch
Dec 5 05:15:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d52e7204-a858-4168-9b05-d7f96da9d351, vol_name:cephfs) < ""
Dec 5 05:15:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d52e7204-a858-4168-9b05-d7f96da9d351, vol_name:cephfs) < ""
Dec 5 05:15:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:15:11 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:15:11 localhost neutron_sriov_agent[254996]: 2025-12-05 10:15:11.629 2 INFO neutron.agent.securitygroups_rpc [None req-0c4cfa16-1548-4253-961f-4264df7b6723 7675612818bb4a849dd9b183cc24c445 ab84998c5444416ba79d0d8022ce1320 - - default default] Security group rule updated ['9c106a56-f234-42ba-b5ee-f23da94e12de']
Dec 5 05:15:12 localhost nova_compute[280228]: 2025-12-05 10:15:12.124 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 05:15:12 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 05:15:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 5 05:15:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:15:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:15:12 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:15:12 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 2787c168-0135-44cf-8de8-429147af982b (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:15:12 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 2787c168-0135-44cf-8de8-429147af982b (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:15:12 localhost ceph-mgr[286454]: [progress INFO root] Completed event 2787c168-0135-44cf-8de8-429147af982b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 5 05:15:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 05:15:12 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 05:15:13 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:15:13 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:15:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 87 KiB/s rd, 30 MiB/s wr, 130 op/s
Dec 5 05:15:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e205 do_prune osdmap full prune enabled
Dec 5 05:15:14 localhost nova_compute[280228]: 2025-12-05 10:15:14.128 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e206 e206: 6 total, 6 up, 6 in
Dec 5 05:15:14 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e206: 6 total, 6 up, 6 in
Dec 5 05:15:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:15:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:15:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:15:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:15:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:15:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:15:15 localhost nova_compute[280228]: 2025-12-05 10:15:15.253 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 34 KiB/s rd, 15 MiB/s wr, 49 op/s
Dec 5 05:15:15 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events
Dec 5 05:15:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 05:15:15 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:15:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:15:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e206 do_prune osdmap full prune enabled
Dec 5 05:15:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e207 e207: 6 total, 6 up, 6 in
Dec 5 05:15:15 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e207: 6 total, 6 up, 6 in
Dec 5 05:15:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "39eff584-c9b5-4dbb-b4e1-af4206cc68e9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:15:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:39eff584-c9b5-4dbb-b4e1-af4206cc68e9, vol_name:cephfs) < ""
Dec 5 05:15:16 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/39eff584-c9b5-4dbb-b4e1-af4206cc68e9/.meta.tmp'
Dec 5 05:15:16 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/39eff584-c9b5-4dbb-b4e1-af4206cc68e9/.meta.tmp' to config b'/volumes/_nogroup/39eff584-c9b5-4dbb-b4e1-af4206cc68e9/.meta'
Dec 5 05:15:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:39eff584-c9b5-4dbb-b4e1-af4206cc68e9, vol_name:cephfs) < ""
Dec 5 05:15:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "39eff584-c9b5-4dbb-b4e1-af4206cc68e9", "format": "json"}]: dispatch
Dec 5 05:15:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:39eff584-c9b5-4dbb-b4e1-af4206cc68e9, vol_name:cephfs) < ""
Dec 5 05:15:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:39eff584-c9b5-4dbb-b4e1-af4206cc68e9, vol_name:cephfs) < ""
Dec 5 05:15:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:15:16 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:15:16 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:15:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e207 do_prune osdmap full prune enabled
Dec 5 05:15:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e208 e208: 6 total, 6 up, 6 in
Dec 5 05:15:17 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e208: 6 total, 6 up, 6 in
Dec 5 05:15:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v526: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 46 KiB/s rd, 16 MiB/s wr, 71 op/s
Dec 5 05:15:17 localhost ovn_controller[153000]: 2025-12-05T10:15:17Z|00371|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:15:17 localhost nova_compute[280228]: 2025-12-05 10:15:17.959 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:19 localhost nova_compute[280228]: 2025-12-05 10:15:19.161 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 41 KiB/s rd, 11 MiB/s wr, 62 op/s
Dec 5 05:15:19 localhost ovn_controller[153000]: 2025-12-05T10:15:19Z|00372|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:15:19 localhost nova_compute[280228]: 2025-12-05 10:15:19.765 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:19 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "39eff584-c9b5-4dbb-b4e1-af4206cc68e9", "format": "json"}]: dispatch
Dec 5 05:15:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:39eff584-c9b5-4dbb-b4e1-af4206cc68e9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:15:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:39eff584-c9b5-4dbb-b4e1-af4206cc68e9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:15:19 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:15:19.785+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '39eff584-c9b5-4dbb-b4e1-af4206cc68e9' of type subvolume
Dec 5 05:15:19 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '39eff584-c9b5-4dbb-b4e1-af4206cc68e9' of type subvolume
Dec 5 05:15:19 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "39eff584-c9b5-4dbb-b4e1-af4206cc68e9", "force": true, "format": "json"}]: dispatch
Dec 5 05:15:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:39eff584-c9b5-4dbb-b4e1-af4206cc68e9, vol_name:cephfs) < ""
Dec 5 05:15:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/39eff584-c9b5-4dbb-b4e1-af4206cc68e9'' moved to trashcan
Dec 5 05:15:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:15:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:39eff584-c9b5-4dbb-b4e1-af4206cc68e9, vol_name:cephfs) < ""
Dec 5 05:15:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:15:19 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3134142781' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:15:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:15:19 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3134142781' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:15:19 localhost podman[239519]: time="2025-12-05T10:15:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:15:19 localhost podman[239519]: @ - - [05/Dec/2025:10:15:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 161583 "" "Go-http-client/1.1"
Dec 5 05:15:19 localhost podman[239519]: @ - - [05/Dec/2025:10:15:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20701 "" "Go-http-client/1.1"
Dec 5 05:15:20 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:15:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3, vol_name:cephfs) < ""
Dec 5 05:15:20 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3/.meta.tmp'
Dec 5 05:15:20 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3/.meta.tmp' to config b'/volumes/_nogroup/4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3/.meta'
Dec 5 05:15:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3, vol_name:cephfs) < ""
Dec 5 05:15:20 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3", "format": "json"}]: dispatch
Dec 5 05:15:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3, vol_name:cephfs) < ""
Dec 5 05:15:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3, vol_name:cephfs) < ""
Dec 5 05:15:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:15:20 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:15:20 localhost nova_compute[280228]: 2025-12-05 10:15:20.285 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:15:20 localhost dnsmasq[322656]: read /var/lib/neutron/dhcp/2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a/addn_hosts - 0 addresses
Dec 5 05:15:20 localhost dnsmasq-dhcp[322656]: read /var/lib/neutron/dhcp/2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a/host
Dec 5 05:15:20 localhost dnsmasq-dhcp[322656]: read /var/lib/neutron/dhcp/2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a/opts
Dec 5 05:15:20 localhost podman[323156]: 2025-12-05 10:15:20.458580374 +0000 UTC m=+0.066863218 container kill 143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:15:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:15:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:15:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:15:20 localhost nova_compute[280228]: 2025-12-05 10:15:20.523 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:15:20 localhost nova_compute[280228]: 2025-12-05 10:15:20.552 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:15:20 localhost systemd[1]: tmp-crun.s14onG.mount: Deactivated successfully.
Dec 5 05:15:20 localhost podman[323171]: 2025-12-05 10:15:20.640845155 +0000 UTC m=+0.149342335 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 5 05:15:20 localhost podman[323168]: 2025-12-05 10:15:20.595697461 +0000 UTC m=+0.114249168 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:15:20 localhost podman[323171]: 2025-12-05 10:15:20.674289629 +0000 UTC m=+0.182786789 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 5 05:15:20 localhost ovn_controller[153000]: 2025-12-05T10:15:20Z|00373|binding|INFO|Releasing lport 4bcac450-746f-4219-a816-5b956f8b429b from this chassis (sb_readonly=0)
Dec 5 05:15:20 localhost kernel: device tap4bcac450-74 left promiscuous mode
Dec 5 05:15:20 localhost ovn_controller[153000]: 2025-12-05T10:15:20Z|00374|binding|INFO|Setting lport 4bcac450-746f-4219-a816-5b956f8b429b down in Southbound
Dec 5 05:15:20 localhost nova_compute[280228]: 2025-12-05 10:15:20.681 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:15:20 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:15:20 localhost podman[323170]: 2025-12-05 10:15:20.689506844 +0000 UTC m=+0.200454799 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 5 05:15:20 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:20.690 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9189846b-a1cc-47d6-8ff6-6e567fa94818, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4bcac450-746f-4219-a816-5b956f8b429b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:15:20 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:20.691 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 4bcac450-746f-4219-a816-5b956f8b429b in datapath 2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a unbound from our chassis#033[00m
Dec 5 05:15:20 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:20.692 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 5 05:15:20 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:20.693 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[ee39dd09-2cc8-4d78-be1a-4b856200550f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:15:20 localhost nova_compute[280228]: 2025-12-05 10:15:20.699 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:15:20 localhost podman[323170]: 2025-12-05 10:15:20.701868022 +0000 UTC m=+0.212816007 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:15:20 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:15:20 localhost podman[323168]: 2025-12-05 10:15:20.731098988 +0000 UTC m=+0.249650695 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 05:15:20 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:15:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:15:21 localhost dnsmasq[322656]: exiting on receipt of SIGTERM
Dec 5 05:15:21 localhost podman[323252]: 2025-12-05 10:15:21.357973412 +0000 UTC m=+0.071481770 container kill 143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 5 05:15:21 localhost systemd[1]: libpod-143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890.scope: Deactivated successfully.
Dec 5 05:15:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 177 active+clean; 373 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 8.9 MiB/s wr, 157 op/s
Dec 5 05:15:21 localhost podman[323266]: 2025-12-05 10:15:21.429424879 +0000 UTC m=+0.050131436 container died 143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:15:21 localhost systemd[1]: var-lib-containers-storage-overlay-69828b63d2a2c22183305f1052156dd3bc8accdb6ae6ebf188f5e3fd36e1cd97-merged.mount: Deactivated successfully.
Dec 5 05:15:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890-userdata-shm.mount: Deactivated successfully.
Dec 5 05:15:21 localhost podman[323266]: 2025-12-05 10:15:21.464692549 +0000 UTC m=+0.085399056 container cleanup 143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 5 05:15:21 localhost systemd[1]: libpod-conmon-143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890.scope: Deactivated successfully.
Dec 5 05:15:21 localhost nova_compute[280228]: 2025-12-05 10:15:21.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:15:21 localhost podman[323268]: 2025-12-05 10:15:21.518744354 +0000 UTC m=+0.134207400 container remove 143cab87893b51589e061cd017b6e5865dde8bbf0a9cd123922bb1df28a87890 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2460cf42-9ff1-4b35-86b3-7f5dbfd6f76a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.145 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.145 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.145 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.146 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.146 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 05:15:22 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:15:22 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3057450815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.542 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.631 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.632 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 05:15:22 localhost systemd[1]: run-netns-qdhcp\x2d2460cf42\x2d9ff1\x2d4b35\x2d86b3\x2d7f5dbfd6f76a.mount: Deactivated successfully.
Dec 5 05:15:22 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:22.648 261902 INFO neutron.agent.dhcp.agent [None req-0a9ef8b2-64b5-45b2-aafc-53b45e5fceb4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:15:22 localhost dnsmasq[322940]: read /var/lib/neutron/dhcp/40eb8fcf-73ef-4263-9385-03681ae183ee/addn_hosts - 0 addresses
Dec 5 05:15:22 localhost systemd[1]: tmp-crun.iwN75f.mount: Deactivated successfully.
Dec 5 05:15:22 localhost dnsmasq-dhcp[322940]: read /var/lib/neutron/dhcp/40eb8fcf-73ef-4263-9385-03681ae183ee/host
Dec 5 05:15:22 localhost dnsmasq-dhcp[322940]: read /var/lib/neutron/dhcp/40eb8fcf-73ef-4263-9385-03681ae183ee/opts
Dec 5 05:15:22 localhost podman[323333]: 2025-12-05 10:15:22.678761901 +0000 UTC m=+0.073988416 container kill 3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40eb8fcf-73ef-4263-9385-03681ae183ee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.851 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.853 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11102MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.853 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.853 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.923 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.924 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.925 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 5 05:15:22 localhost nova_compute[280228]: 2025-12-05 10:15:22.975 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 05:15:23 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d52e7204-a858-4168-9b05-d7f96da9d351", "format": "json"}]: dispatch
Dec 5 05:15:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d52e7204-a858-4168-9b05-d7f96da9d351, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:15:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d52e7204-a858-4168-9b05-d7f96da9d351, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:15:23 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:15:23.011+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd52e7204-a858-4168-9b05-d7f96da9d351' of type subvolume
Dec 5 05:15:23 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd52e7204-a858-4168-9b05-d7f96da9d351' of type subvolume
Dec 5 05:15:23 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d52e7204-a858-4168-9b05-d7f96da9d351", "force": true, "format": "json"}]: dispatch
Dec 5 05:15:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d52e7204-a858-4168-9b05-d7f96da9d351, vol_name:cephfs) < ""
Dec 5 05:15:23 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d52e7204-a858-4168-9b05-d7f96da9d351'' moved to trashcan
Dec 5 05:15:23 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:15:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d52e7204-a858-4168-9b05-d7f96da9d351, vol_name:cephfs) < ""
Dec 5 05:15:23 localhost ovn_controller[153000]: 2025-12-05T10:15:23Z|00375|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:15:23 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:23.220 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:15:23 localhost nova_compute[280228]: 2025-12-05 10:15:23.230 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:15:23 localhost kernel: device tap59006ef9-b1 left promiscuous mode
Dec 5 05:15:23 localhost ovn_controller[153000]: 2025-12-05T10:15:23Z|00376|binding|INFO|Releasing lport 59006ef9-b1ff-4d81-8b97-f6bd9f93ff31 from this chassis (sb_readonly=0)
Dec 5 05:15:23 localhost ovn_controller[153000]: 2025-12-05T10:15:23Z|00377|binding|INFO|Setting lport 59006ef9-b1ff-4d81-8b97-f6bd9f93ff31 down in Southbound
Dec 5 05:15:23 localhost nova_compute[280228]: 2025-12-05 10:15:23.244 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:15:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:23.256 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-40eb8fcf-73ef-4263-9385-03681ae183ee', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40eb8fcf-73ef-4263-9385-03681ae183ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9ecfd3b522f49d0af3f986588ac20ec', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3f5f0bf-6f05-4bcc-9bd1-2da9ca237fb5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=59006ef9-b1ff-4d81-8b97-f6bd9f93ff31) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:15:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:23.260 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 59006ef9-b1ff-4d81-8b97-f6bd9f93ff31 in datapath 40eb8fcf-73ef-4263-9385-03681ae183ee unbound from our chassis#033[00m
Dec 5 05:15:23 localhost nova_compute[280228]: 2025-12-05 10:15:23.263 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:15:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:23.265 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 40eb8fcf-73ef-4263-9385-03681ae183ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 5 05:15:23 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:23.267 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[2253feef-f570-4486-ab9a-893c1f8ec889]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:15:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:15:23 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3296104708' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:15:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:15:23 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3296104708' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:15:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 197 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 92 KiB/s rd, 8.0 MiB/s wr, 147 op/s
Dec 5 05:15:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:15:23 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2145583600' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:15:23 localhost nova_compute[280228]: 2025-12-05 10:15:23.535 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 05:15:23 localhost nova_compute[280228]: 2025-12-05 10:15:23.540 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 5 05:15:23 localhost nova_compute[280228]: 2025-12-05 10:15:23.557 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 5 05:15:23 localhost nova_compute[280228]: 2025-12-05 10:15:23.559 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 5 05:15:23 localhost nova_compute[280228]: 2025-12-05 10:15:23.559 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:15:23 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:23.710 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:15:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:15:23 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2342090555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:15:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:15:23 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2342090555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:15:24 localhost nova_compute[280228]: 2025-12-05 10:15:24.199 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:15:24 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3", "format": "json"}]: dispatch
Dec 5 05:15:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:15:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:15:24 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:15:24.603+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3' of type subvolume
Dec 5 05:15:24 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3' of type subvolume
Dec 5 05:15:24 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3", "force": true, "format": "json"}]: dispatch
Dec 5 05:15:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3, vol_name:cephfs) < ""
Dec 5 05:15:24 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3'' moved to trashcan
Dec 5 05:15:24 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:15:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4ce43f6f-a923-4ddb-bda3-6d6c4c5d13d3, vol_name:cephfs) < ""
Dec 5 05:15:25 localhost dnsmasq[322483]: read /var/lib/neutron/dhcp/a0a9b67e-778c-4f9f-a1ce-1cf98d562a33/addn_hosts - 0 addresses
Dec 5 05:15:25 localhost dnsmasq-dhcp[322483]: read /var/lib/neutron/dhcp/a0a9b67e-778c-4f9f-a1ce-1cf98d562a33/host
Dec 5 05:15:25 localhost podman[323394]: 2025-12-05 10:15:25.134231763 +0000 UTC m=+0.061461642 container kill 9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:15:25 localhost dnsmasq-dhcp[322483]: read /var/lib/neutron/dhcp/a0a9b67e-778c-4f9f-a1ce-1cf98d562a33/opts
Dec 5 05:15:25 localhost ovn_controller[153000]: 2025-12-05T10:15:25Z|00378|binding|INFO|Releasing lport ba18f1c2-a0d8-42ae-9f65-abe0fd27f2f2 from this chassis (sb_readonly=0)
Dec 5 05:15:25 localhost ovn_controller[153000]: 2025-12-05T10:15:25Z|00379|binding|INFO|Setting lport ba18f1c2-a0d8-42ae-9f65-abe0fd27f2f2 down in Southbound
Dec 5 05:15:25 localhost nova_compute[280228]: 2025-12-05 10:15:25.343 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:15:25 localhost kernel: device tapba18f1c2-a0 left promiscuous mode
Dec 5 05:15:25 localhost nova_compute[280228]: 2025-12-05 10:15:25.345 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 5 05:15:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:25.350 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ecb85ff3c88d49d6b771a6e34a36ee4c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3228b8fc-d0b0-4559-bc59-54be1da8ae6a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ba18f1c2-a0d8-42ae-9f65-abe0fd27f2f2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:15:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:25.352 158820 INFO neutron.agent.ovn.metadata.agent [-] Port ba18f1c2-a0d8-42ae-9f65-abe0fd27f2f2 in datapath a0a9b67e-778c-4f9f-a1ce-1cf98d562a33 unbound from our chassis#033[00m
Dec 5 05:15:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:25.354 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 5 05:15:25 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:25.355 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[70823a75-b351-47ac-99fa-d7aef78b5dc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:15:25 localhost nova_compute[280228]: 2025-12-05 10:15:25.369 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:15:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 197 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 6.8 MiB/s wr, 125 op/s
Dec 5 05:15:25 localhost ovn_controller[153000]: 2025-12-05T10:15:25Z|00380|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:15:25 localhost nova_compute[280228]: 2025-12-05 10:15:25.633 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:15:25 localhost dnsmasq[322483]: exiting on receipt of SIGTERM
Dec 5 05:15:25 localhost systemd[1]: tmp-crun.xWWWJX.mount: Deactivated successfully.
Dec 5 05:15:25 localhost podman[323435]: 2025-12-05 10:15:25.939356844 +0000 UTC m=+0.069908872 container kill 9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 05:15:25 localhost systemd[1]: libpod-9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b.scope: Deactivated successfully.
Dec 5 05:15:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:15:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e208 do_prune osdmap full prune enabled
Dec 5 05:15:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e209 e209: 6 total, 6 up, 6 in
Dec 5 05:15:25 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e209: 6 total, 6 up, 6 in
Dec 5 05:15:26 localhost podman[323459]: 2025-12-05 10:15:26.025370768 +0000 UTC m=+0.071768489 container died 9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 5 05:15:26 localhost podman[323459]: 2025-12-05 10:15:26.06235652 +0000 UTC m=+0.108754211 container cleanup 9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:15:26 localhost systemd[1]: libpod-conmon-9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b.scope: Deactivated successfully.
Dec 5 05:15:26 localhost podman[323465]: 2025-12-05 10:15:26.105129349 +0000 UTC m=+0.139920075 container remove 9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0a9b67e-778c-4f9f-a1ce-1cf98d562a33, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:15:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:26.127 261902 INFO neutron.agent.dhcp.agent [None req-344a4074-669c-4791-b6df-8bcdc64f5700 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:15:26 localhost systemd[1]: tmp-crun.YH3hQq.mount: Deactivated successfully.
Dec 5 05:15:26 localhost systemd[1]: var-lib-containers-storage-overlay-27a3d74553558f0484f1186800269b771d273e8e11e7b6b03d2fce208a2e6950-merged.mount: Deactivated successfully.
Dec 5 05:15:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9019cde27da5b2c318928ce36fd496ac03714335d03ef9b6ce128d964834278b-userdata-shm.mount: Deactivated successfully.
Dec 5 05:15:26 localhost systemd[1]: run-netns-qdhcp\x2da0a9b67e\x2d778c\x2d4f9f\x2da1ce\x2d1cf98d562a33.mount: Deactivated successfully.
Dec 5 05:15:26 localhost dnsmasq[322940]: exiting on receipt of SIGTERM
Dec 5 05:15:26 localhost podman[323486]: 2025-12-05 10:15:26.149354643 +0000 UTC m=+0.118023424 container kill 3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40eb8fcf-73ef-4263-9385-03681ae183ee, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 5 05:15:26 localhost systemd[1]: libpod-3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd.scope: Deactivated successfully.
Dec 5 05:15:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:26.172 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:15:26 localhost podman[323506]: 2025-12-05 10:15:26.220435449 +0000 UTC m=+0.058798351 container died 3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40eb8fcf-73ef-4263-9385-03681ae183ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 5 05:15:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "636df4c8-dabc-4041-93c0-674c71210a5e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:15:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:636df4c8-dabc-4041-93c0-674c71210a5e, vol_name:cephfs) < ""
Dec 5 05:15:26 localhost podman[323506]: 2025-12-05 10:15:26.303467202 +0000 UTC m=+0.141830094 container cleanup 3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40eb8fcf-73ef-4263-9385-03681ae183ee, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 5 05:15:26 localhost systemd[1]: libpod-conmon-3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd.scope: Deactivated successfully.
Dec 5 05:15:26 localhost podman[323513]: 2025-12-05 10:15:26.330959744 +0000 UTC m=+0.154972967 container remove 3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40eb8fcf-73ef-4263-9385-03681ae183ee, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:15:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/636df4c8-dabc-4041-93c0-674c71210a5e/.meta.tmp'
Dec 5 05:15:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/636df4c8-dabc-4041-93c0-674c71210a5e/.meta.tmp' to config b'/volumes/_nogroup/636df4c8-dabc-4041-93c0-674c71210a5e/.meta'
Dec 5 05:15:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:636df4c8-dabc-4041-93c0-674c71210a5e, vol_name:cephfs) < ""
Dec 5 05:15:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "636df4c8-dabc-4041-93c0-674c71210a5e", "format": "json"}]: dispatch
Dec 5 05:15:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:636df4c8-dabc-4041-93c0-674c71210a5e, vol_name:cephfs) < ""
Dec 5 05:15:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:26.362 261902 INFO neutron.agent.dhcp.agent [None req-59d04463-5e4c-4a3b-bf2e-56648cc72f88 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:15:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:636df4c8-dabc-4041-93c0-674c71210a5e, vol_name:cephfs) < ""
Dec 5 05:15:26 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:15:26.363 261902 INFO neutron.agent.dhcp.agent [None req-59d04463-5e4c-4a3b-bf2e-56648cc72f88 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:15:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:15:26 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:15:26 localhost nova_compute[280228]: 2025-12-05 10:15:26.560 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:15:27 localhost systemd[1]: var-lib-containers-storage-overlay-f74a1e6e5632613886d2e23ed51a05e10614c92c0fed93418b02ae0f0c887852-merged.mount: Deactivated successfully.
Dec 5 05:15:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e7114488a6a6598abcec1e0feaee7f55502fca5f7748546880c91bc2e7653dd-userdata-shm.mount: Deactivated successfully.
Dec 5 05:15:27 localhost systemd[1]: run-netns-qdhcp\x2d40eb8fcf\x2d73ef\x2d4263\x2d9385\x2d03681ae183ee.mount: Deactivated successfully.
Dec 5 05:15:27 localhost openstack_network_exporter[241668]: ERROR 10:15:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:15:27 localhost openstack_network_exporter[241668]: ERROR 10:15:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:15:27 localhost openstack_network_exporter[241668]: ERROR 10:15:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:15:27 localhost openstack_network_exporter[241668]: ERROR 10:15:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:15:27 localhost openstack_network_exporter[241668]:
Dec 5 05:15:27 localhost openstack_network_exporter[241668]: ERROR 10:15:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:15:27 localhost openstack_network_exporter[241668]:
Dec 5 05:15:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 active+clean; 197 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 47 KiB/s wr, 116 op/s
Dec 5 05:15:27 localhost nova_compute[280228]: 2025-12-05 10:15:27.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:15:27 localhost nova_compute[280228]: 2025-12-05 10:15:27.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 5 05:15:27 localhost nova_compute[280228]: 2025-12-05 10:15:27.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 5 05:15:28 localhost nova_compute[280228]: 2025-12-05 10:15:28.388 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 5 05:15:28 localhost nova_compute[280228]: 2025-12-05 10:15:28.388 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 5 05:15:28 localhost nova_compute[280228]: 2025-12-05 10:15:28.389 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 5 05:15:28 localhost nova_compute[280228]: 2025-12-05 10:15:28.389 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 5 05:15:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:15:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
Dec 5 05:15:29 localhost nova_compute[280228]: 2025-12-05 10:15:29.227 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:15:29 localhost podman[323535]: 2025-12-05 10:15:29.259544321 +0000 UTC m=+0.139731380 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:15:29 localhost podman[323536]: 2025-12-05 10:15:29.239888479 +0000 UTC m=+0.117096267 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'},
'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:15:29 localhost podman[323535]: 2025-12-05 10:15:29.317573118 +0000 UTC m=+0.197760177 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:15:29 localhost podman[323536]: 2025-12-05 10:15:29.324537821 +0000 UTC m=+0.201745609 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 05:15:29 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:15:29 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
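[annotation] The two transient systemd units above run podman's native healthcheck against ovn_controller and node_exporter: podman emits a health_status event with the probe result, the exec dies, and the unit deactivates. A hedged sketch of running the same probe by hand (container names taken from the log):
    # run the configured healthcheck once; exit status 0 means healthy
    podman healthcheck run node_exporter; echo "exit=$?"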
Dec 5 05:15:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 197 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 47 KiB/s wr, 116 op/s Dec 5 05:15:29 localhost nova_compute[280228]: 2025-12-05 10:15:29.431 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:15:29 localhost nova_compute[280228]: 2025-12-05 10:15:29.452 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:15:29 localhost nova_compute[280228]: 2025-12-05 10:15:29.452 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:15:29 localhost nova_compute[280228]: 2025-12-05 10:15:29.453 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:15:29 localhost nova_compute[280228]: 2025-12-05 10:15:29.454 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:15:29 localhost nova_compute[280228]: 2025-12-05 10:15:29.454 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:15:29 localhost nova_compute[280228]: 2025-12-05 10:15:29.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:15:29 localhost nova_compute[280228]: 2025-12-05 10:15:29.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:15:29 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "636df4c8-dabc-4041-93c0-674c71210a5e", "format": "json"}]: dispatch Dec 5 05:15:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:636df4c8-dabc-4041-93c0-674c71210a5e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:15:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:636df4c8-dabc-4041-93c0-674c71210a5e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:15:29 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:15:29.597+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '636df4c8-dabc-4041-93c0-674c71210a5e' of type subvolume Dec 5 05:15:29 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '636df4c8-dabc-4041-93c0-674c71210a5e' of type subvolume Dec 5 05:15:29 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "636df4c8-dabc-4041-93c0-674c71210a5e", "force": true, "format": "json"}]: dispatch Dec 5 05:15:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:636df4c8-dabc-4041-93c0-674c71210a5e, vol_name:cephfs) < "" Dec 5 05:15:29 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/636df4c8-dabc-4041-93c0-674c71210a5e'' moved to trashcan Dec 5 05:15:29 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:15:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:636df4c8-dabc-4041-93c0-674c71210a5e, vol_name:cephfs) < "" Dec 5 05:15:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:29.616 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, 
ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:15:29 localhost nova_compute[280228]: 2025-12-05 10:15:29.616 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:29.618 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:15:30 localhost nova_compute[280228]: 2025-12-05 10:15:30.367 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:15:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 198 MiB data, 1003 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 43 KiB/s wr, 42 op/s Dec 5 05:15:31 localhost nova_compute[280228]: 2025-12-05 10:15:31.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:15:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:15:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, vol_name:cephfs) < "" Dec 5 05:15:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b3da48b-bbf5-413c-bf47-8fad11e652a1/.meta.tmp' Dec 5 05:15:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b3da48b-bbf5-413c-bf47-8fad11e652a1/.meta.tmp' to config b'/volumes/_nogroup/7b3da48b-bbf5-413c-bf47-8fad11e652a1/.meta' Dec 5 05:15:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, vol_name:cephfs) < "" Dec 5 05:15:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "format": "json"}]: dispatch Dec 5 05:15:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, vol_name:cephfs) < "" Dec 5 05:15:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, vol_name:cephfs) < "" Dec 5 
05:15:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:15:31 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:15:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 198 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 43 KiB/s wr, 39 op/s Dec 5 05:15:34 localhost nova_compute[280228]: 2025-12-05 10:15:34.263 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:35 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "snap_name": "74f67cad-1d8c-4e22-8727-85e4c5d4ccb8", "format": "json"}]: dispatch Dec 5 05:15:35 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:74f67cad-1d8c-4e22-8727-85e4c5d4ccb8, sub_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, vol_name:cephfs) < "" Dec 5 05:15:35 localhost neutron_sriov_agent[254996]: 2025-12-05 10:15:35.063 2 INFO neutron.agent.securitygroups_rpc [None req-1518dd7f-186b-4c5f-9ddd-a06eab8a04f6 8c95b42a11ae4b2ca59be36067b7e35c c2c66bea319748f696485854e7041763 - - default default] Security group rule updated ['3d5a4cde-c54d-4be1-a363-4165dbdf4da7']#033[00m Dec 5 05:15:35 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:74f67cad-1d8c-4e22-8727-85e4c5d4ccb8, sub_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, vol_name:cephfs) < "" Dec 5 05:15:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 198 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 43 KiB/s wr, 39 op/s Dec 5 05:15:35 localhost nova_compute[280228]: 2025-12-05 10:15:35.410 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:15:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 198 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 55 KiB/s wr, 64 op/s Dec 5 05:15:38 localhost ovn_metadata_agent[158815]: 2025-12-05 10:15:38.620 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:15:39 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "snap_name": "74f67cad-1d8c-4e22-8727-85e4c5d4ccb8_f3c8f45f-f48a-4e3d-9a84-9ca89467d072", "force": true, 
"format": "json"}]: dispatch Dec 5 05:15:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:74f67cad-1d8c-4e22-8727-85e4c5d4ccb8_f3c8f45f-f48a-4e3d-9a84-9ca89467d072, sub_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, vol_name:cephfs) < "" Dec 5 05:15:39 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b3da48b-bbf5-413c-bf47-8fad11e652a1/.meta.tmp' Dec 5 05:15:39 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b3da48b-bbf5-413c-bf47-8fad11e652a1/.meta.tmp' to config b'/volumes/_nogroup/7b3da48b-bbf5-413c-bf47-8fad11e652a1/.meta' Dec 5 05:15:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:74f67cad-1d8c-4e22-8727-85e4c5d4ccb8_f3c8f45f-f48a-4e3d-9a84-9ca89467d072, sub_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, vol_name:cephfs) < "" Dec 5 05:15:39 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "snap_name": "74f67cad-1d8c-4e22-8727-85e4c5d4ccb8", "force": true, "format": "json"}]: dispatch Dec 5 05:15:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:74f67cad-1d8c-4e22-8727-85e4c5d4ccb8, sub_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, vol_name:cephfs) < "" Dec 5 05:15:39 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b3da48b-bbf5-413c-bf47-8fad11e652a1/.meta.tmp' Dec 5 05:15:39 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b3da48b-bbf5-413c-bf47-8fad11e652a1/.meta.tmp' to config b'/volumes/_nogroup/7b3da48b-bbf5-413c-bf47-8fad11e652a1/.meta' Dec 5 05:15:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:74f67cad-1d8c-4e22-8727-85e4c5d4ccb8, sub_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, vol_name:cephfs) < "" Dec 5 05:15:39 localhost nova_compute[280228]: 2025-12-05 10:15:39.266 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 198 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 31 KiB/s wr, 32 op/s Dec 5 05:15:40 localhost nova_compute[280228]: 2025-12-05 10:15:40.447 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:15:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
Dec 5 05:15:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:15:41 localhost podman[323584]: 2025-12-05 10:15:41.193175863 +0000 UTC m=+0.070445388 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, managed_by=edpm_ansible, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vcs-type=git, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9) Dec 5 05:15:41 localhost podman[323584]: 2025-12-05 10:15:41.207753349 +0000 UTC m=+0.085022844 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': 
[], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, version=9.6, vcs-type=git, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7) Dec 5 05:15:41 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:15:41 localhost systemd[1]: tmp-crun.oYQjQx.mount: Deactivated successfully. 
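[annotation] The ceph-mgr audit records through this window repeat one share lifecycle per UUID-named subvolume: create (1 GiB, namespace-isolated, mode 0755), getpath, optional snapshot create/rm, a clone-status probe that legitimately fails with EOPNOTSUPP (95) because a plain subvolume is not a clone, then a forced rm that moves the path to the trashcan for async purge. The same sequence from the CLI, as a sketch (subvolume name reused from the log; "snap0" is a hypothetical snapshot name; assumes an admin keyring):
    VOL=cephfs
    SUB=7b3da48b-bbf5-413c-bf47-8fad11e652a1
    ceph fs subvolume create "$VOL" "$SUB" --size 1073741824 --namespace-isolated --mode 0755
    ceph fs subvolume getpath "$VOL" "$SUB"
    ceph fs subvolume snapshot create "$VOL" "$SUB" snap0
    ceph fs subvolume snapshot rm "$VOL" "$SUB" snap0 --force
    ceph fs clone status "$VOL" "$SUB"        # expected to fail: 'clone-status' is not allowed on a plain subvolume
    ceph fs subvolume rm "$VOL" "$SUB" --force  # path moves to the trashcan; purge is queued asynchronously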
Dec 5 05:15:41 localhost podman[323583]: 2025-12-05 10:15:41.30806039 +0000 UTC m=+0.188657347 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:15:41 localhost podman[323583]: 2025-12-05 10:15:41.316706675 +0000 UTC m=+0.197303622 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0) Dec 5 05:15:41 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:15:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 198 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 41 KiB/s wr, 34 op/s Dec 5 05:15:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 198 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 27 KiB/s wr, 31 op/s Dec 5 05:15:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "format": "json"}]: dispatch Dec 5 05:15:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:15:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:15:44 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:15:44.164+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7b3da48b-bbf5-413c-bf47-8fad11e652a1' of type subvolume Dec 5 05:15:44 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7b3da48b-bbf5-413c-bf47-8fad11e652a1' of type subvolume Dec 5 05:15:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7b3da48b-bbf5-413c-bf47-8fad11e652a1", "force": true, "format": "json"}]: dispatch Dec 5 05:15:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, vol_name:cephfs) < "" Dec 5 05:15:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7b3da48b-bbf5-413c-bf47-8fad11e652a1'' moved to trashcan Dec 5 05:15:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:15:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7b3da48b-bbf5-413c-bf47-8fad11e652a1, vol_name:cephfs) < "" Dec 5 05:15:44 localhost nova_compute[280228]: 2025-12-05 10:15:44.270 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:45 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "84c28651-3e0c-41ba-8232-c0a4261265e5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:15:45 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, 
mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:84c28651-3e0c-41ba-8232-c0a4261265e5, vol_name:cephfs) < "" Dec 5 05:15:45 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/84c28651-3e0c-41ba-8232-c0a4261265e5/.meta.tmp' Dec 5 05:15:45 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/84c28651-3e0c-41ba-8232-c0a4261265e5/.meta.tmp' to config b'/volumes/_nogroup/84c28651-3e0c-41ba-8232-c0a4261265e5/.meta' Dec 5 05:15:45 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:84c28651-3e0c-41ba-8232-c0a4261265e5, vol_name:cephfs) < "" Dec 5 05:15:45 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "84c28651-3e0c-41ba-8232-c0a4261265e5", "format": "json"}]: dispatch Dec 5 05:15:45 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:84c28651-3e0c-41ba-8232-c0a4261265e5, vol_name:cephfs) < "" Dec 5 05:15:45 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:84c28651-3e0c-41ba-8232-c0a4261265e5, vol_name:cephfs) < "" Dec 5 05:15:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:15:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:15:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:15:45 Dec 5 05:15:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:15:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:15:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['backups', '.mgr', 'vms', 'images', 'manila_metadata', 'manila_data', 'volumes'] Dec 5 05:15:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:15:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:15:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:15:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:15:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:15:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
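[annotation] Each subvolume round-trip above is bracketed by a "mon dump" from client.openstack (172.18.0.34), which is how the client refreshes its monitor map before talking to the cluster. The same query by hand, JSON-formatted as in the audit log:
    ceph mon dump --format json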
Dec 5 05:15:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:15:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 198 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 26 KiB/s wr, 30 op/s Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00010850694444444444 quantized to 32 (current 32) Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:15:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00043611794504932067 of space, bias 4.0, pg target 0.34714988425925924 quantized to 16 (current 16) Dec 5 05:15:45 localhost nova_compute[280228]: 2025-12-05 10:15:45.478 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:15:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:15:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:15:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:15:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:15:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:15:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 
05:15:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:15:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:15:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:15:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:15:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e209 do_prune osdmap full prune enabled Dec 5 05:15:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e210 e210: 6 total, 6 up, 6 in Dec 5 05:15:47 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e210: 6 total, 6 up, 6 in Dec 5 05:15:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 37 KiB/s wr, 4 op/s Dec 5 05:15:49 localhost nova_compute[280228]: 2025-12-05 10:15:49.274 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v544: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 37 KiB/s wr, 4 op/s Dec 5 05:15:49 localhost podman[239519]: time="2025-12-05T10:15:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:15:49 localhost podman[239519]: @ - - [05/Dec/2025:10:15:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:15:49 localhost podman[239519]: @ - - [05/Dec/2025:10:15:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19278 "" "Go-http-client/1.1" Dec 5 05:15:50 localhost nova_compute[280228]: 2025-12-05 10:15:50.481 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:15:51 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "84c28651-3e0c-41ba-8232-c0a4261265e5", "format": "json"}]: dispatch Dec 5 05:15:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:84c28651-3e0c-41ba-8232-c0a4261265e5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:15:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:84c28651-3e0c-41ba-8232-c0a4261265e5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:15:51 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:15:51.100+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '84c28651-3e0c-41ba-8232-c0a4261265e5' of type subvolume Dec 5 05:15:51 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'84c28651-3e0c-41ba-8232-c0a4261265e5' of type subvolume Dec 5 05:15:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:15:51 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "84c28651-3e0c-41ba-8232-c0a4261265e5", "force": true, "format": "json"}]: dispatch Dec 5 05:15:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:84c28651-3e0c-41ba-8232-c0a4261265e5, vol_name:cephfs) < "" Dec 5 05:15:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:15:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:15:51 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/84c28651-3e0c-41ba-8232-c0a4261265e5'' moved to trashcan Dec 5 05:15:51 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:15:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:84c28651-3e0c-41ba-8232-c0a4261265e5, vol_name:cephfs) < "" Dec 5 05:15:51 localhost podman[323622]: 2025-12-05 10:15:51.205774868 +0000 UTC m=+0.085210369 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 05:15:51 localhost podman[323622]: 2025-12-05 10:15:51.250116546 +0000 UTC m=+0.129552107 container exec_died 
1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 5 05:15:51 localhost podman[323623]: 2025-12-05 10:15:51.270927393 +0000 UTC m=+0.147161527 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 5 05:15:51 localhost podman[323621]: 2025-12-05 10:15:51.243007948 +0000 UTC m=+0.129860487 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:15:51 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:15:51 localhost podman[323621]: 2025-12-05 10:15:51.322133701 +0000 UTC m=+0.208986220 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:15:51 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
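[annotation] The podman[239519] records a few seconds earlier show the podman service API answering libpod REST calls (containers/json, containers/stats), and the podman_exporter config above points CONTAINER_HOST at the same unix socket. A hedged curl equivalent of the logged GET (endpoint copied from the log; the "d" hostname is a placeholder, as curl ignores it when a unix socket is given):
    curl --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true'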
Dec 5 05:15:51 localhost podman[323623]: 2025-12-05 10:15:51.356025609 +0000 UTC m=+0.232259753 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm) Dec 5 05:15:51 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
Dec 5 05:15:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 36 KiB/s wr, 3 op/s Dec 5 05:15:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 36 KiB/s wr, 3 op/s Dec 5 05:15:54 localhost nova_compute[280228]: 2025-12-05 10:15:54.277 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 36 KiB/s wr, 3 op/s Dec 5 05:15:55 localhost nova_compute[280228]: 2025-12-05 10:15:55.483 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:15:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e210 do_prune osdmap full prune enabled Dec 5 05:15:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e211 e211: 6 total, 6 up, 6 in Dec 5 05:15:56 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e211: 6 total, 6 up, 6 in Dec 5 05:15:57 localhost openstack_network_exporter[241668]: ERROR 10:15:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:15:57 localhost openstack_network_exporter[241668]: ERROR 10:15:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:15:57 localhost openstack_network_exporter[241668]: ERROR 10:15:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:15:57 localhost openstack_network_exporter[241668]: ERROR 10:15:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:15:57 localhost openstack_network_exporter[241668]: Dec 5 05:15:57 localhost openstack_network_exporter[241668]: ERROR 10:15:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:15:57 localhost openstack_network_exporter[241668]: Dec 5 05:15:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 6.4 KiB/s rd, 22 KiB/s wr, 11 op/s Dec 5 05:15:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e211 do_prune osdmap full prune enabled Dec 5 05:15:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e212 e212: 6 total, 6 up, 6 in Dec 5 05:15:57 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e212: 6 total, 6 up, 6 in Dec 5 05:15:59 localhost nova_compute[280228]: 2025-12-05 10:15:59.281 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:15:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 198 MiB data, 1013 MiB used, 41 GiB / 42 GiB avail; 8.0 KiB/s rd, 14 KiB/s wr, 13 op/s Dec 5 05:15:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e212 do_prune osdmap full prune enabled Dec 
5 05:15:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e213 e213: 6 total, 6 up, 6 in Dec 5 05:15:59 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e213: 6 total, 6 up, 6 in Dec 5 05:15:59 localhost ovn_controller[153000]: 2025-12-05T10:15:59Z|00381|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory Dec 5 05:16:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:16:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:16:00 localhost systemd[1]: tmp-crun.m3oW8s.mount: Deactivated successfully. Dec 5 05:16:00 localhost podman[323679]: 2025-12-05 10:16:00.201122729 +0000 UTC m=+0.084554570 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 5 05:16:00 localhost podman[323679]: 2025-12-05 10:16:00.230734916 +0000 UTC m=+0.114166737 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:16:00 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:16:00 localhost podman[323680]: 2025-12-05 10:16:00.30993879 +0000 UTC m=+0.188910354 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:16:00 localhost podman[323680]: 2025-12-05 10:16:00.320908426 +0000 UTC m=+0.199880020 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:16:00 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
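node_exporter above publishes host port 9100 ('ports': ['9100:9100']) with the systemd collector restricted to units matching (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service. A quick probe of the standard Prometheus /metrics endpoint; the host, the timeout, and the node_systemd metric prefix are assumptions on my part:

    from urllib.request import urlopen

    # 9100 comes from 'ports': ['9100:9100'] above; /metrics is the standard
    # Prometheus scrape path. With --collector.systemd enabled, node_systemd_*
    # series should cover the unit-include pattern (edpm_*, ovs*, virt*, ...).
    def grep_metrics(url="http://localhost:9100/metrics", prefix="node_systemd"):
        with urlopen(url, timeout=5) as resp:
            for raw in resp:
                line = raw.decode("utf-8", "replace").rstrip()
                if line.startswith(prefix):
                    yield line

    for line in grep_metrics():
        print(line)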
Dec 5 05:16:00 localhost nova_compute[280228]: 2025-12-05 10:16:00.485 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:16:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 198 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 24 KiB/s wr, 116 op/s Dec 5 05:16:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e213 do_prune osdmap full prune enabled Dec 5 05:16:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e214 e214: 6 total, 6 up, 6 in Dec 5 05:16:01 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e214: 6 total, 6 up, 6 in Dec 5 05:16:02 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:16:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, vol_name:cephfs) < "" Dec 5 05:16:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e214 do_prune osdmap full prune enabled Dec 5 05:16:02 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta.tmp' Dec 5 05:16:02 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta.tmp' to config b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta' Dec 5 05:16:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e215 e215: 6 total, 6 up, 6 in Dec 5 05:16:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, vol_name:cephfs) < "" Dec 5 05:16:02 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "format": "json"}]: dispatch Dec 5 05:16:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, vol_name:cephfs) < "" Dec 5 05:16:02 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e215: 6 total, 6 up, 6 in Dec 5 05:16:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, vol_name:cephfs) < "" Dec 5 05:16:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:16:02 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' 
entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:16:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 198 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 81 KiB/s rd, 6.3 KiB/s wr, 108 op/s Dec 5 05:16:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e215 do_prune osdmap full prune enabled Dec 5 05:16:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e216 e216: 6 total, 6 up, 6 in Dec 5 05:16:03 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e216: 6 total, 6 up, 6 in Dec 5 05:16:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:16:03.922 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:16:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:16:03.922 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:16:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:16:03.923 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:16:04 localhost nova_compute[280228]: 2025-12-05 10:16:04.286 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v558: 177 pgs: 177 active+clean; 198 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 81 KiB/s rd, 6.3 KiB/s wr, 109 op/s Dec 5 05:16:05 localhost nova_compute[280228]: 2025-12-05 10:16:05.488 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e216 do_prune osdmap full prune enabled Dec 5 05:16:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e217 e217: 6 total, 6 up, 6 in Dec 5 05:16:05 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e217: 6 total, 6 up, 6 in Dec 5 05:16:05 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "snap_name": "aa63b329-df82-4a19-aa39-a051e214eb1e", "format": "json"}]: dispatch Dec 5 05:16:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aa63b329-df82-4a19-aa39-a051e214eb1e, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, vol_name:cephfs) < "" Dec 5 05:16:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aa63b329-df82-4a19-aa39-a051e214eb1e, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, vol_name:cephfs) < "" Dec 5 05:16:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e217 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:16:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e217 do_prune osdmap full prune enabled Dec 5 05:16:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e218 e218: 6 total, 6 up, 6 in Dec 5 05:16:07 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e218: 6 total, 6 up, 6 in Dec 5 05:16:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 198 MiB data, 1018 MiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 32 KiB/s wr, 107 op/s Dec 5 05:16:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e218 do_prune osdmap full prune enabled Dec 5 05:16:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e219 e219: 6 total, 6 up, 6 in Dec 5 05:16:08 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e219: 6 total, 6 up, 6 in Dec 5 05:16:09 localhost nova_compute[280228]: 2025-12-05 10:16:09.289 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 198 MiB data, 1018 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 27 KiB/s wr, 89 op/s Dec 5 05:16:09 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 05:16:09 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 5165 writes, 34K keys, 5165 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s#012Cumulative WAL: 5165 writes, 5165 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2190 writes, 9787 keys, 2190 commit groups, 1.0 writes per commit group, ingest: 9.27 MB, 0.02 MB/s#012Interval WAL: 2190 writes, 2190 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 132.6 0.32 0.11 17 0.019 0 0 0.0 0.0#012 L6 1/0 15.98 MB 0.0 0.3 0.0 0.2 0.3 0.0 0.0 6.3 164.7 150.0 1.78 0.69 16 0.111 195K 8226 0.0 0.0#012 Sum 1/0 15.98 MB 0.0 0.3 0.0 0.2 0.3 0.1 0.0 7.3 139.8 147.4 2.10 0.81 33 0.064 195K 8226 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 12.9 171.8 170.2 0.62 0.31 12 0.052 79K 3189 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.3 0.0 0.2 0.3 0.0 0.0 0.0 164.7 150.0 1.78 0.69 16 0.111 195K 8226 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 133.6 0.32 
0.11 16 0.020 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.041, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.30 GB write, 0.26 MB/s write, 0.29 GB read, 0.24 MB/s read, 2.1 seconds#012Interval compaction: 0.10 GB write, 0.18 MB/s write, 0.10 GB read, 0.18 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56443b711350#2 capacity: 304.00 MB usage: 41.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000287 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2619,40.54 MB,13.3347%) FilterBlock(33,550.11 KB,0.176716%) IndexBlock(33,703.73 KB,0.226066%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 5 05:16:09 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "snap_name": "aa63b329-df82-4a19-aa39-a051e214eb1e", "target_sub_name": "d483e3c7-2e03-4510-b92a-d90a49e04bba", "format": "json"}]: dispatch Dec 5 05:16:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:aa63b329-df82-4a19-aa39-a051e214eb1e, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, target_sub_name:d483e3c7-2e03-4510-b92a-d90a49e04bba, vol_name:cephfs) < "" Dec 5 05:16:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/.meta.tmp' Dec 5 05:16:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/.meta.tmp' to config b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/.meta' Dec 5 05:16:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 675e3bc7-b3eb-46fb-9f71-c297b6be4c6e for path b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba' Dec 5 05:16:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta.tmp' Dec 5 05:16:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta.tmp' to config b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta' Dec 5 05:16:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:16:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:aa63b329-df82-4a19-aa39-a051e214eb1e, 
sub_name:63723774-424e-4107-9f82-8c494eaae0eb, target_sub_name:d483e3c7-2e03-4510-b92a-d90a49e04bba, vol_name:cephfs) < "" Dec 5 05:16:09 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d483e3c7-2e03-4510-b92a-d90a49e04bba", "format": "json"}]: dispatch Dec 5 05:16:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d483e3c7-2e03-4510-b92a-d90a49e04bba, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:09.982+0000 7f9974044640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:16:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:16:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:09.982+0000 7f9974044640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:16:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:16:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:09.982+0000 7f9974044640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:16:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:16:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:09.982+0000 7f9974044640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:16:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:16:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:09.982+0000 7f9974044640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:16:09 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:16:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d483e3c7-2e03-4510-b92a-d90a49e04bba, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:10 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba Dec 5 05:16:10 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, d483e3c7-2e03-4510-b92a-d90a49e04bba) Dec 5 05:16:10 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:10.018+0000 7f9975046640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:16:10 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:16:10 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:10.018+0000 7f9975046640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:16:10 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:16:10 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:10.018+0000 7f9975046640 -1 client.0 error registering admin socket 
command: (17) File exists Dec 5 05:16:10 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:16:10 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:10.018+0000 7f9975046640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:16:10 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:16:10 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:10.018+0000 7f9975046640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:16:10 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:16:10 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, d483e3c7-2e03-4510-b92a-d90a49e04bba) -- by 0 seconds Dec 5 05:16:10 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/.meta.tmp' Dec 5 05:16:10 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/.meta.tmp' to config b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/.meta' Dec 5 05:16:10 localhost nova_compute[280228]: 2025-12-05 10:16:10.490 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e219 do_prune osdmap full prune enabled Dec 5 05:16:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e220 e220: 6 total, 6 up, 6 in Dec 5 05:16:10 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e220: 6 total, 6 up, 6 in Dec 5 05:16:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:16:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e220 do_prune osdmap full prune enabled Dec 5 05:16:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 103 KiB/s rd, 53 KiB/s wr, 143 op/s Dec 5 05:16:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e221 e221: 6 total, 6 up, 6 in Dec 5 05:16:11 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e221: 6 total, 6 up, 6 in Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.snap/aa63b329-df82-4a19-aa39-a051e214eb1e/3bf3c189-954d-4686-86c0-afa347669d63' to b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/e50266c0-c453-4498-a9e6-e68b2e507fdc' Dec 5 05:16:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea920704-3133-4d40-a979-346396e08bfd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:ea920704-3133-4d40-a979-346396e08bfd, vol_name:cephfs) < "" Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/.meta.tmp' Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/.meta.tmp' to config b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/.meta' Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea920704-3133-4d40-a979-346396e08bfd/.meta.tmp' Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea920704-3133-4d40-a979-346396e08bfd/.meta.tmp' to config b'/volumes/_nogroup/ea920704-3133-4d40-a979-346396e08bfd/.meta' Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea920704-3133-4d40-a979-346396e08bfd, vol_name:cephfs) < "" Dec 5 05:16:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea920704-3133-4d40-a979-346396e08bfd", "format": "json"}]: dispatch Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea920704-3133-4d40-a979-346396e08bfd, vol_name:cephfs) < "" Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.clone_index] untracking 675e3bc7-b3eb-46fb-9f71-c297b6be4c6e Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta.tmp' Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta.tmp' to config b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta' Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/.meta.tmp' Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/.meta.tmp' to config b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba/.meta' Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, d483e3c7-2e03-4510-b92a-d90a49e04bba) Dec 5 05:16:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea920704-3133-4d40-a979-346396e08bfd, vol_name:cephfs) < "" Dec 5 05:16:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:16:11 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:16:11 localhost ceph-mon[292820]: 
log_channel(cluster) log [DBG] : mgrmap e51: np0005546419.zhsnqq(active, since 14m), standbys: np0005546420.aoeylc, np0005546421.sukfea Dec 5 05:16:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:16:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:16:12 localhost podman[323751]: 2025-12-05 10:16:12.219483255 +0000 UTC m=+0.091742530 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container) Dec 5 05:16:12 localhost podman[323750]: 2025-12-05 10:16:12.269358562 +0000 UTC m=+0.145117385 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 5 05:16:12 localhost podman[323751]: 2025-12-05 10:16:12.289105646 +0000 UTC m=+0.161364941 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public) Dec 5 05:16:12 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
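openstack_network_exporter reports health_status=healthy here even though it logged "no control socket files found" for ovn-northd and the ovs db server at 05:15:57 above: the healthcheck exercises the exporter process, not the sockets it scrapes. A sketch that inspects the mounted socket directories directly; the *.ctl glob and the /run paths are assumptions based on the volume mounts in the config_data above:

    from pathlib import Path

    # /run/openvswitch and /run/ovn are where the config_data above mounts the
    # host's OVS/OVN runtime dirs; the daemons create *.ctl unix control
    # sockets there, which is what the exporter failed to find.
    def control_sockets(dirs=("/run/openvswitch", "/run/ovn")):
        found = {}
        for d in map(Path, dirs):
            found[str(d)] = sorted(p.name for p in d.glob("*.ctl")) if d.is_dir() else []
        return found

    for d, socks in control_sockets().items():
        print(f"{d}: {', '.join(socks) or 'none -- matches the exporter errors above'}")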
Dec 5 05:16:12 localhost podman[323750]: 2025-12-05 10:16:12.310945045 +0000 UTC m=+0.186703898 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 05:16:12 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. 
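The ceph-mgr audit entries earlier in this window trace a complete share-from-snapshot flow issued by client.openstack: fs subvolume create, fs subvolume snapshot create, fs subvolume snapshot clone, with the async cloner running from "starting clone" to "finished clone". A hedged sketch driving the same flow through the ceph CLI; the UUID naming mimics the log, the positional argument order is my reading of these subcommands, and error/cleanup handling is omitted:

    import json, subprocess, time, uuid

    def ceph(*args):
        # Command prefixes mirror the audit entries above; --format json
        # matches the 'format': 'json' field in each dispatched command.
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out) if out.strip() else None

    vol = "cephfs"
    sub, snap, clone = (str(uuid.uuid4()) for _ in range(3))

    ceph("fs", "subvolume", "create", vol, sub, "1073741824",
         "--namespace-isolated", "--mode", "0755")
    ceph("fs", "subvolume", "snapshot", "create", vol, sub, snap)
    ceph("fs", "subvolume", "snapshot", "clone", vol, sub, snap, clone)

    # Poll as the caller does between "starting clone" and "finished clone".
    while True:
        state = ceph("fs", "clone", "status", vol, clone)["status"]["state"]
        if state in ("complete", "failed", "canceled"):
            break
        time.sleep(2)
    print("clone", clone, "->", state)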
Dec 5 05:16:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e221 do_prune osdmap full prune enabled Dec 5 05:16:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e222 e222: 6 total, 6 up, 6 in Dec 5 05:16:12 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e222: 6 total, 6 up, 6 in Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.953 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.954 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.960 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b16e393e-50dd-498b-9d1a-1bb72eb8c13c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:16:12.954661', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '6d3690da-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.128966859, 'message_signature': '73f3ddb99896403871e14b3e502dbb55c3c41aad12e411dd2b72c12b6e62545d'}]}, 'timestamp': '2025-12-05 10:16:12.960882', '_unique_id': '9741dde90c44406eaf69d94d5ad58183'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.962 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.963 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.963 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0b8d4727-c170-48a0-82bb-63e16e195f69', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:16:12.963721', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '6d3714b0-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.128966859, 'message_signature': '1a7eda9650c35ee7a012e363f46ad7ba9c2e7c0e276e14c0c481c6cb6c018686'}]}, 'timestamp': '2025-12-05 10:16:12.964203', '_unique_id': 'e594bc74a55b4dc0b8964954292b45cc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.965 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.966 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.966 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'eac18295-ee56-4235-9298-8847fda9496d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:16:12.966470', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '6d377ffe-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.128966859, 'message_signature': '972b9972820f4cabe3dfb4439faaec6c1729cfad2942392972dd324976a8f74f'}]}, 'timestamp': '2025-12-05 10:16:12.966974', '_unique_id': '5a598d943fbf439b93b1e9858c256071'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:16:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.968 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.969 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.991 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9a6f7a4d-1ccb-4f57-bc8c-12cbe865532e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:16:12.969799', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '6d3b4cb0-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.165352543, 'message_signature': '9d965d4f49a97cfd12aa68067f4272e0854dfa9ff977e4e223441a8bbe24530c'}]}, 'timestamp': '2025-12-05 10:16:12.991907', '_unique_id': 'fb492056f37e45898771123e9dc61b26'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:16:12.993 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 
05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.993 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:12.994 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.022 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.022 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8e3143ec-afb0-433c-a891-ea19c19ab40a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:16:12.994510', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6d400bce-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.168821689, 'message_signature': '25277153c8d8011a5fc0afbe24a918da64a2d1503faba503c59d11e75b949304'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:16:12.994510', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6d401628-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.168821689, 'message_signature': 'ae0869ed8060f51a21b34f9f45dba3bc6a2af35fbb7d177804ab76ae1af47d70'}]}, 'timestamp': '2025-12-05 10:16:13.023080', '_unique_id': '13a07814f74e4dbfaa8526b7105bd42d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.023 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.024 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.024 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'faca9f46-cf19-4226-ba4b-d4539d07bb39', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:16:13.024445', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '6d4053b8-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.128966859, 'message_signature': '49a1c1cebcf206d4099807fcbc95fb680a9ab5eadf1936a1abaca1b4bd58bd29'}]}, 'timestamp': '2025-12-05 10:16:13.024661', '_unique_id': 'd44c9a68f3764ef591c4a730a4c5010a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.025 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.036 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.037 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
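Each failed publish above logs two chained tracebacks: amqp's transport raises ConnectionRefusedError from self.sock.connect(sa), and kombu's _reraise_as_library_errors wraps it with "raise ConnectionError(str(exc)) from exc" (ConnectionError there is a parameter bound to kombu.exceptions.OperationalError), which is why the journal prints "The above exception was the direct cause of the following exception:" between the two. A minimal sketch of that chaining pattern, using a stand-in exception class:

# Stand-in for kombu.exceptions.OperationalError; in kombu the real class is
# bound to the local name "ConnectionError" inside _reraise_as_library_errors.
class OperationalError(Exception):
    pass

def establish():
    # Mimics amqp.transport._connect() hitting a port with no listener.
    raise ConnectionRefusedError(111, "Connection refused")

try:
    try:
        establish()
    except ConnectionRefusedError as exc:
        # The pattern from kombu/connection.py line 450 in the log above.
        raise OperationalError(str(exc)) from exc
except OperationalError as err:
    # "from exc" stores the socket error on __cause__, which is what makes
    # Python print "The above exception was the direct cause of the
    # following exception:" between the two tracebacks.
    print(repr(err.__cause__))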
Payload={'message_id': 'f592867e-dba8-4c3b-8f5b-2455385dfe32', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:16:13.025608', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6d423e4e-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.19988604, 'message_signature': '4adae8e6350ccb69ccd118d5b173d4df07b9e4fa33f861cd0ddf3dc3a6b95d13'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:16:13.025608', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6d4253d4-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.19988604, 'message_signature': '7fc3ae4acffe5724b6f261376252a674309093c65cb3bf8d50580995651ab142'}]}, 'timestamp': '2025-12-05 10:16:13.037891', '_unique_id': 'd3a874bda80c474397495075132e3c29'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.040 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.040 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 18900000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
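The same OperationalError can be reproduced outside the agent by opening a kombu connection to the broker. A minimal probe, assuming kombu is installed; BROKER_URL is a placeholder, since the agent's actual transport_url is not visible in this log:

from kombu import Connection
from kombu.exceptions import OperationalError

BROKER_URL = "amqp://guest:guest@localhost:5672//"  # placeholder

try:
    with Connection(BROKER_URL, connect_timeout=5) as conn:
        # Same entry point the traceback goes through:
        # Connection.ensure_connection() -> _ensure_connection().
        conn.ensure_connection(max_retries=1)
        print("broker reachable")
except OperationalError as exc:
    # A refused TCP connection surfaces exactly as in the log:
    # kombu.exceptions.OperationalError: [Errno 111] Connection refused
    print(f"broker unreachable: {exc}")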
Payload={'message_id': '69250b24-ea92-446d-be4c-da9de7131dda', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 18900000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:16:13.040465', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '6d42cba2-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.165352543, 'message_signature': '30f0953c821dd2b4971f059058cfb24455ec645611688547b91a754ad8bcbe79'}]}, 'timestamp': '2025-12-05 10:16:13.040973', '_unique_id': '7e63ebc2540246a3a8e39afafdc1388b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, 
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.041 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.043 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.043 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.043 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
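The innermost frame of every traceback is amqp/transport.py calling self.sock.connect(sa), so errno 111 simply means nothing is listening on the broker's TCP port. A plain-socket check of the same condition; host and port are placeholders (5672 is RabbitMQ's default AMQP port):

import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    # socket.create_connection() is the stdlib equivalent of the
    # self.sock.connect(sa) call at the bottom of each traceback.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # ConnectionRefusedError (errno 111) is a subclass of OSError.
        return False

print(port_open("localhost", 5672))  # placeholder host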
Payload={'message_id': '5370592d-0058-44f1-b976-14929c48dc11', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:16:13.043347', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6d433b82-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.168821689, 'message_signature': '6c462dfe26e99db07072e7df185df6af671cf8494c93d6c6c844814ab05d5d8b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:16:13.043347', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6d435dce-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.168821689, 'message_signature': 'ca93703b29c85d53e7e7ff69a0fc3afd8caccddb38b7aa840b98874b1efd2d8e'}]}, 'timestamp': '2025-12-05 10:16:13.044703', '_unique_id': '6bb6cce498e04828b5a41468fe75543a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.045 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.047 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.047 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.047 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
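Between the entry point and the socket call, every traceback passes through kombu.utils.functional.retry_over_time, which re-invokes the connection factory at increasing intervals before giving up. A simplified, illustrative version of that loop (kombu's real function takes more parameters, e.g. errback, callback and timeout, but the control flow is the same):

import time

def retry_over_time(fun, catch, max_retries=3,
                    interval_start=2.0, interval_step=2.0, interval_max=30.0):
    # Illustrative sketch only; interval defaults mirror kombu's (2, 2, 30).
    interval = interval_start
    for attempt in range(max_retries + 1):
        try:
            return fun()
        except catch:
            if attempt == max_retries:
                raise  # retries exhausted: the caller sees the last error
            time.sleep(interval)
            interval = min(interval + interval_step, interval_max)

# e.g. retry_over_time(connect, (OSError,), max_retries=2)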
Payload={'message_id': 'ddbc87a5-5a59-4103-99e7-c9d0333954e5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:16:13.047183', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6d43d3e4-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.19988604, 'message_signature': 'e36e21f1fc114b3c6a9efd92de0881d6de34740ce28a98a62228541a2d8c87de'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:16:13.047183', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6d43e5a0-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.19988604, 'message_signature': 'd1564ba079c4f61a81567932b7e0a810ff7e01fde923507ba6ad5c81aef42166'}]}, 'timestamp': '2025-12-05 10:16:13.048172', '_unique_id': '37cec5babb6340a5b3470ff76fb1bbe1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.049 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.050 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.051 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.051 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
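The Payload={...} text in each ERROR line is the repr of the ceilometer notification that could not be sent; every entry under 'samples' carries the counter name, type, unit, volume and resource id. A small sketch reading those fields, with values trimmed from one of the disk.device.* samples logged above:

# Values trimmed from the disk.device.usage sample in this log.
sample = {
    "counter_name": "disk.device.usage",
    "counter_type": "gauge",
    "counter_unit": "B",
    "counter_volume": 1073741824,
    "resource_id": "96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda",
    "timestamp": "2025-12-05T10:16:13.025608",
}
print(f'{sample["resource_id"]}: {sample["counter_name"]} = '
      f'{sample["counter_volume"]} {sample["counter_unit"]} '
      f'({sample["counter_type"]})')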
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3f5df470-6096-4db1-8e37-59e5e290e714', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:16:13.050996', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6d446872-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.19988604, 'message_signature': 'e3c7e80e09e633f34382c6f7276acf61c9244adf9bce8ca684b5aa61650cd160'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:16:13.050996', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6d447cae-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.19988604, 'message_signature': '026b0aa822e356d4e7ab0c020aae17c6008cdfa87e4f8d8556f4369b7a666918'}]}, 'timestamp': '2025-12-05 10:16:13.052037', '_unique_id': 'e16d2e78443e49b38950422ec9f9a11e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.053 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.054 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.055 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.055 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.056 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
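The kombu/connection.py frames show ensure_connection() driving the connection attempt through retry_over_time() before giving up. The same behaviour can be reproduced directly at the kombu layer; a sketch, with the broker URL and guest credentials as illustrative assumptions:

    # Sketch only: URL and credentials are assumptions. With the broker down,
    # ensure_connection() retries and then raises OperationalError, the same
    # wrapped exception logged above.
    from kombu import Connection
    from kombu.exceptions import OperationalError

    conn = Connection("amqp://guest:guest@localhost:5672//", connect_timeout=2)
    try:
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print("broker unreachable:", exc)  # [Errno 111] Connection refused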
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '77447f92-c690-4a9e-87f5-cdf90f9ec41d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:16:13.055327', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6d45168c-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.168821689, 'message_signature': '7510ec07d6cc88a03bf7a17fac78a9f2d0fbf25ba8dd762e1701218b6b755273'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:16:13.055327', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6d453298-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.168821689, 'message_signature': 'ff78f31f2505f01e84fdbccf451f00317c6d3c8ec54425833cb665dd55e60296'}]}, 'timestamp': '2025-12-05 10:16:13.056779', '_unique_id': 'e35826c5dc754385b3a4321b0b25282f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.057 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.059 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.059 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
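On the sending side, the failing call originates in oslo.messaging's notifier, which ceilometer uses to publish telemetry.polling samples with priority SAMPLE to the notifications topic. A sketch of that path, with the transport URL as an illustrative assumption (ceilometer normally builds it from its own configuration):

    # Sketch only: the rabbit:// URL is an assumption. This mirrors the
    # publisher_id, event_type and SAMPLE priority seen in the payloads above.
    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url="rabbit://guest:guest@localhost:5672/")
    notifier = oslo_messaging.Notifier(
        transport, publisher_id="ceilometer.polling",
        driver="messaging", topics=["notifications"])
    notifier.sample({}, event_type="telemetry.polling", payload={"samples": []})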
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '97d62ebf-ca7f-47ca-b2ba-90980a9f1516', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:16:13.059330', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '6d45ae44-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.128966859, 'message_signature': '986a03a803046c36c1fd3799d53c44037c2663b424eea453c18c9ce8e4c5ffe8'}]}, 'timestamp': '2025-12-05 10:16:13.059896', '_unique_id': '78825bfb22a4442e99e035a13c314147'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.060 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.062 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.062 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
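Each failure is logged as two chained tracebacks separated by "The above exception was the direct cause of the following exception:". That separator is produced by Python's raise ... from ... syntax, which kombu uses in _reraise_as_library_errors to convert the socket error into its own exception type. A minimal stdlib illustration (the class name here is illustrative, not kombu's actual class):

    # Minimal illustration of exception chaining with 'raise ... from exc'.
    class OperationalError(Exception):
        pass

    try:
        try:
            raise ConnectionRefusedError(111, "Connection refused")
        except ConnectionRefusedError as exc:
            raise OperationalError(str(exc)) from exc
    except OperationalError as err:
        print(err.__cause__)  # the original ConnectionRefusedError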
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '076b6a2a-6d7f-4428-a016-d6f67e90f0b2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:16:13.062300', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '6d462392-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.128966859, 'message_signature': '8d4cbd15e40b23b3d2697c894ebc2ea60efa456ac9c25408cc79b920aa4e1e7a'}]}, 'timestamp': '2025-12-05 10:16:13.062903', '_unique_id': '28dbc7ca102347a58779a2eff66a8733'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.063 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.065 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.065 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
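The kombu frames also show each connection attempt being driven through retry_over_time() in kombu/utils/functional.py. A simplified stand-in for that loop (not kombu's actual implementation): retry a callable with a linear backoff until it succeeds or the attempts are exhausted.

    # Simplified stand-in, not kombu's implementation: retry fun() on the
    # given exception types, sleeping between attempts, re-raising at the end.
    import time

    def retry_over_time(fun, catch, max_retries=3,
                        interval_start=1.0, interval_step=1.0):
        interval = interval_start
        for attempt in range(max_retries + 1):
            try:
                return fun()
            except catch:
                if attempt == max_retries:
                    raise
                time.sleep(interval)
                interval += interval_step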
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd083c799-c163-4d61-bff1-11950cdb35b1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:16:13.065675', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '6d46a3b2-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.128966859, 'message_signature': '2921b0c5d7c0d6fe979f406f0afbbdd363657492d20808f53801a1ce7d47774f'}]}, 'timestamp': '2025-12-05 10:16:13.066159', '_unique_id': '1a44d08252344951bd591cfa32acfe09'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost
ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.067 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.068 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.068 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.068 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b1c6cd67-8c0b-47df-9e45-47048253d87f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:16:13.068444', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6d470ece-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.168821689, 'message_signature': '079df5fc419e68c5436f1a2769d7d8195be87f963bcfd9b189b7841dc8425f0c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:16:13.068444', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6d471ed2-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.168821689, 'message_signature': '59d48c3402424e3c2681cb3ad9c5b6f664d522e402582232ac20c49ad23cf9ac'}]}, 'timestamp': '2025-12-05 10:16:13.069317', '_unique_id': '3d717c3b96d44c969e59d54a859f032c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.070 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.071 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.071 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.071 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.072 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.071 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.071 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.071 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.072 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd43de410-8e51-4b25-bbd6-ec85bb0665ec', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:16:13.071650', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6d478cf0-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.168821689, 'message_signature': '087685221cd87d15bfc4035da31f332075e3560e8ed4c93fc371285ba5dd9755'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:16:13.071650', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6d479e3e-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.168821689, 'message_signature': '715f99a6ec676b73fc1a87815ae4706e613f7e0d53503cd27cb771f43d3baf9e'}]}, 'timestamp': '2025-12-05 10:16:13.072541', '_unique_id': '2fa2a774f8444f1e97f7d611827bf79b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.073 12 ERROR oslo_messaging.notify.messaging
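Every notification in this polling cycle fails identically, which points at the TCP connect itself rather than AMQP authentication or vhost configuration. The connect can be probed directly, mirroring the self.sock.connect(sa) frame from amqp/transport.py in the tracebacks; host and port below are assumptions (5672 is the AMQP default, and the broker's address is not shown in this log):

    import socket

    # Probe the assumed broker endpoint; ConnectionRefusedError here means
    # nothing is listening on the port (e.g. the rabbitmq service is down),
    # exactly the [Errno 111] seen in the tracebacks above.
    try:
        with socket.create_connection(('localhost', 5672), timeout=2):
            print('broker port accepts TCP connections')
    except ConnectionRefusedError as exc:
        print(exc)  # [Errno 111] Connection refused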
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.074 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.074 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f4468f7b-625a-4eea-94b4-498adcc66c19', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:16:13.074702', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '6d480356-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.128966859, 'message_signature': 'a756ae8109613e12ed48a4b945340f480d817f858e925ef926282f2b8474c6e7'}]}, 'timestamp': '2025-12-05 10:16:13.075156', '_unique_id': '4ccf59716d1c450b82b522adad6869c0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.076 12 ERROR oslo_messaging.notify.messaging
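Each failed notification logs its full Payload=... as a Python repr of the sample batch, so the dropped samples can still be recovered from the log itself. A sketch of that recovery, assuming one captured log line in the variable line (a shortened, hypothetical stand-in for a real entry):

    import ast

    # One captured 'Could not send notification' line (shortened stand-in).
    line = ("Payload={'event_type': 'telemetry.polling', 'payload': {'samples': []}}: "
            "kombu.exceptions.OperationalError: [Errno 111] Connection refused")

    # The dict runs from the first '{' to the last '}'; the trailing
    # exception text contains no braces, so rindex finds the dict's end.
    payload = ast.literal_eval(line[line.index('{'):line.rindex('}') + 1])
    print(payload['event_type'])
    for sample in payload['payload']['samples']:
        print(sample['counter_name'], sample['counter_volume'])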
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.077 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.077 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.077 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.077 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'caf03f8e-e608-4749-8164-a412603cb802', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:16:13.077336', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '6d4866d4-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.128966859, 'message_signature': '8652a7151da80be89cdc8dbf8a4d7aab671d61d3069cdbbf74597b419d3acfe2'}]}, 'timestamp': '2025-12-05 10:16:13.077618', '_unique_id': '5b68fb9e06ff4a218136287f4b24cded'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.078 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.079 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Payload={'message_id': 'f91fcf2e-2994-4a90-84a2-1e4703904dc9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:16:13.078898', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6d48a3ce-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.168821689, 'message_signature': 'e2c43fc060e0153ba82ec7999f84636d190752789ed8374db86ff891e6b26020'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:16:13.078898', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6d48ae64-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.168821689, 'message_signature': '554dec7457503c2f58f4a46556ffdc902c3b0ebbd4e5f828ee876a084d4df971'}]}, 'timestamp': '2025-12-05 10:16:13.079434', '_unique_id': 'fc5a84cefdff4b14bbc4fddbe76acaa5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.080 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3d41e798-e271-44ca-bd65-7078ce14c89c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:16:13.080840', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '6d48f590-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 12888.128966859, 'message_signature': '3bfd61970f47e47b04ac00a6b4b75c34aedb0dd63f2d31eecb26578d383ca736'}]}, 'timestamp': '2025-12-05 10:16:13.081299', '_unique_id': '552503072fd240eebd4bab56a6148b60'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:16:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:16:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:16:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:16:13.081 12 ERROR oslo_messaging.notify.messaging Dec 5 05:16:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 32 KiB/s wr, 73 op/s Dec 5 05:16:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:16:13 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:16:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:16:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:16:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:16:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:16:13 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 87ec51e6-2d04-4e66-8559-f197f63e7f2c (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:16:13 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 87ec51e6-2d04-4e66-8559-f197f63e7f2c (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:16:13 localhost ceph-mgr[286454]: [progress INFO root] Completed event 87ec51e6-2d04-4e66-8559-f197f63e7f2c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:16:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:16:13 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:16:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e222 do_prune osdmap full prune enabled Dec 5 05:16:13 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' 
entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:16:13 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:16:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e223 e223: 6 total, 6 up, 6 in Dec 5 05:16:13 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e223: 6 total, 6 up, 6 in Dec 5 05:16:14 localhost nova_compute[280228]: 2025-12-05 10:16:14.310 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2ab3d121-5033-4ae4-bc88-7df1d583d342", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:16:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2ab3d121-5033-4ae4-bc88-7df1d583d342, vol_name:cephfs) < "" Dec 5 05:16:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2ab3d121-5033-4ae4-bc88-7df1d583d342/.meta.tmp' Dec 5 05:16:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2ab3d121-5033-4ae4-bc88-7df1d583d342/.meta.tmp' to config b'/volumes/_nogroup/2ab3d121-5033-4ae4-bc88-7df1d583d342/.meta' Dec 5 05:16:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2ab3d121-5033-4ae4-bc88-7df1d583d342, vol_name:cephfs) < "" Dec 5 05:16:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2ab3d121-5033-4ae4-bc88-7df1d583d342", "format": "json"}]: dispatch Dec 5 05:16:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2ab3d121-5033-4ae4-bc88-7df1d583d342, vol_name:cephfs) < "" Dec 5 05:16:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2ab3d121-5033-4ae4-bc88-7df1d583d342, vol_name:cephfs) < "" Dec 5 05:16:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:16:14 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:16:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e223 do_prune osdmap full prune enabled Dec 5 05:16:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e224 e224: 6 total, 6 up, 6 in Dec 5 05:16:14 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e224: 6 total, 6 up, 6 in Dec 5 05:16:15 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": 
"d2beb6d7-8182-4e32-a746-b76132855fa5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:16:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d2beb6d7-8182-4e32-a746-b76132855fa5, vol_name:cephfs) < "" Dec 5 05:16:15 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d2beb6d7-8182-4e32-a746-b76132855fa5/.meta.tmp' Dec 5 05:16:15 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d2beb6d7-8182-4e32-a746-b76132855fa5/.meta.tmp' to config b'/volumes/_nogroup/d2beb6d7-8182-4e32-a746-b76132855fa5/.meta' Dec 5 05:16:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d2beb6d7-8182-4e32-a746-b76132855fa5, vol_name:cephfs) < "" Dec 5 05:16:15 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "format": "json"}]: dispatch Dec 5 05:16:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d2beb6d7-8182-4e32-a746-b76132855fa5, vol_name:cephfs) < "" Dec 5 05:16:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d2beb6d7-8182-4e32-a746-b76132855fa5, vol_name:cephfs) < "" Dec 5 05:16:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:16:15 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:16:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:16:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:16:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:16:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:16:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 5 05:16:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:16:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 6.2 KiB/s rd, 511 B/s wr, 9 op/s Dec 5 05:16:15 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:16:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:16:15 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:16:15 localhost nova_compute[280228]: 2025-12-05 10:16:15.538 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e224 do_prune osdmap full prune enabled Dec 5 05:16:15 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:16:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e225 e225: 6 total, 6 up, 6 in Dec 5 05:16:15 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e225: 6 total, 6 up, 6 in Dec 5 05:16:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:16:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e225 do_prune osdmap full prune enabled Dec 5 05:16:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e226 e226: 6 total, 6 up, 6 in Dec 5 05:16:17 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e226: 6 total, 6 up, 6 in Dec 5 05:16:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 104 KiB/s rd, 100 KiB/s wr, 152 op/s Dec 5 05:16:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e226 do_prune osdmap full prune enabled Dec 5 05:16:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e227 e227: 6 total, 6 up, 6 in Dec 5 05:16:18 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e227: 6 total, 6 up, 6 in Dec 5 05:16:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "snap_name": "ac528f21-d490-4e97-bdf5-2a4687cb644d", "format": "json"}]: dispatch Dec 5 05:16:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ac528f21-d490-4e97-bdf5-2a4687cb644d, sub_name:d2beb6d7-8182-4e32-a746-b76132855fa5, vol_name:cephfs) < "" Dec 5 05:16:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ac528f21-d490-4e97-bdf5-2a4687cb644d, sub_name:d2beb6d7-8182-4e32-a746-b76132855fa5, vol_name:cephfs) < "" Dec 5 05:16:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2ab3d121-5033-4ae4-bc88-7df1d583d342", "format": "json"}]: 
dispatch Dec 5 05:16:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2ab3d121-5033-4ae4-bc88-7df1d583d342, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2ab3d121-5033-4ae4-bc88-7df1d583d342, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:18.722+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2ab3d121-5033-4ae4-bc88-7df1d583d342' of type subvolume Dec 5 05:16:18 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2ab3d121-5033-4ae4-bc88-7df1d583d342' of type subvolume Dec 5 05:16:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2ab3d121-5033-4ae4-bc88-7df1d583d342", "force": true, "format": "json"}]: dispatch Dec 5 05:16:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2ab3d121-5033-4ae4-bc88-7df1d583d342, vol_name:cephfs) < "" Dec 5 05:16:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2ab3d121-5033-4ae4-bc88-7df1d583d342'' moved to trashcan Dec 5 05:16:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:16:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2ab3d121-5033-4ae4-bc88-7df1d583d342, vol_name:cephfs) < "" Dec 5 05:16:19 localhost nova_compute[280228]: 2025-12-05 10:16:19.341 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 92 KiB/s rd, 89 KiB/s wr, 135 op/s Dec 5 05:16:19 localhost podman[239519]: time="2025-12-05T10:16:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:16:19 localhost podman[239519]: @ - - [05/Dec/2025:10:16:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:16:19 localhost podman[239519]: @ - - [05/Dec/2025:10:16:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19277 "" "Go-http-client/1.1" Dec 5 05:16:20 localhost nova_compute[280228]: 2025-12-05 10:16:20.575 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:16:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e227 do_prune osdmap full prune enabled Dec 5 05:16:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e228 e228: 6 total, 6 
up, 6 in Dec 5 05:16:21 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e228: 6 total, 6 up, 6 in Dec 5 05:16:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v578: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 117 KiB/s rd, 107 KiB/s wr, 172 op/s Dec 5 05:16:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 5 05:16:21 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4205473615' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 5 05:16:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 5 05:16:21 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4205473615' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 5 05:16:21 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "89b8d3a9-86b3-485b-aa83-b34d33e5ba52", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:16:21 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:89b8d3a9-86b3-485b-aa83-b34d33e5ba52, vol_name:cephfs) < "" Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/89b8d3a9-86b3-485b-aa83-b34d33e5ba52/.meta.tmp' Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/89b8d3a9-86b3-485b-aa83-b34d33e5ba52/.meta.tmp' to config b'/volumes/_nogroup/89b8d3a9-86b3-485b-aa83-b34d33e5ba52/.meta' Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:89b8d3a9-86b3-485b-aa83-b34d33e5ba52, vol_name:cephfs) < "" Dec 5 05:16:22 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "89b8d3a9-86b3-485b-aa83-b34d33e5ba52", "format": "json"}]: dispatch Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:89b8d3a9-86b3-485b-aa83-b34d33e5ba52, vol_name:cephfs) < "" Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:89b8d3a9-86b3-485b-aa83-b34d33e5ba52, vol_name:cephfs) < "" Dec 5 05:16:22 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:16:22 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:16:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:16:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:16:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:16:22 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "snap_name": "ac528f21-d490-4e97-bdf5-2a4687cb644d_366ab011-0a32-4ef6-b163-23098f6262cd", "force": true, "format": "json"}]: dispatch Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ac528f21-d490-4e97-bdf5-2a4687cb644d_366ab011-0a32-4ef6-b163-23098f6262cd, sub_name:d2beb6d7-8182-4e32-a746-b76132855fa5, vol_name:cephfs) < "" Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d2beb6d7-8182-4e32-a746-b76132855fa5/.meta.tmp' Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d2beb6d7-8182-4e32-a746-b76132855fa5/.meta.tmp' to config b'/volumes/_nogroup/d2beb6d7-8182-4e32-a746-b76132855fa5/.meta' Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ac528f21-d490-4e97-bdf5-2a4687cb644d_366ab011-0a32-4ef6-b163-23098f6262cd, sub_name:d2beb6d7-8182-4e32-a746-b76132855fa5, vol_name:cephfs) < "" Dec 5 05:16:22 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "snap_name": "ac528f21-d490-4e97-bdf5-2a4687cb644d", "force": true, "format": "json"}]: dispatch Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ac528f21-d490-4e97-bdf5-2a4687cb644d, sub_name:d2beb6d7-8182-4e32-a746-b76132855fa5, vol_name:cephfs) < "" Dec 5 05:16:22 localhost systemd[1]: tmp-crun.P43YYK.mount: Deactivated successfully. 
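The ceph-mgr audit entries around this point trace a full subvolume lifecycle issued by entity client.openstack against vol_name cephfs — subvolume create (size 1073741824, namespace_isolated, mode 0755), getpath, snapshot create, snapshot rm with force, subvolume rm with force — a sequence consistent with the Manila CephFS driver. A sketch of the equivalent CLI calls; the subvolume and snapshot names are placeholders, and --id openstack assumes the keyring behind the logged client.openstack entity:

    import subprocess

    def ceph(*args):
        # Same command family the mgr audit log records above.
        return subprocess.run(
            ("ceph", "--conf", "/etc/ceph/ceph.conf", "--id", "openstack") + args,
            capture_output=True, text=True, check=True).stdout.strip()

    sub, snap = "demo-subvol", "demo-snap"  # illustrative names, not from this log
    ceph("fs", "subvolume", "create", "cephfs", sub,
         "--size", "1073741824", "--namespace-isolated", "--mode", "0755")
    path = ceph("fs", "subvolume", "getpath", "cephfs", sub)  # /volumes/_nogroup/<sub>/<uuid>
    ceph("fs", "subvolume", "snapshot", "create", "cephfs", sub, snap)
    ceph("fs", "subvolume", "snapshot", "rm", "cephfs", sub, snap, "--force")
    ceph("fs", "subvolume", "rm", "cephfs", sub, "--force")
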
Dec 5 05:16:22 localhost podman[323879]: 2025-12-05 10:16:22.207550658 +0000 UTC m=+0.085400576 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d2beb6d7-8182-4e32-a746-b76132855fa5/.meta.tmp' Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d2beb6d7-8182-4e32-a746-b76132855fa5/.meta.tmp' to config b'/volumes/_nogroup/d2beb6d7-8182-4e32-a746-b76132855fa5/.meta' Dec 5 05:16:22 localhost podman[323879]: 2025-12-05 10:16:22.241649882 +0000 UTC m=+0.119499790 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 05:16:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ac528f21-d490-4e97-bdf5-2a4687cb644d, sub_name:d2beb6d7-8182-4e32-a746-b76132855fa5, vol_name:cephfs) < "" Dec 5 05:16:22 localhost podman[323878]: 2025-12-05 10:16:22.25561722 +0000 UTC m=+0.137308226 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:16:22 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:16:22 localhost podman[323878]: 2025-12-05 10:16:22.268775623 +0000 UTC m=+0.150466669 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:16:22 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
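The podman access-log lines just above come from podman_exporter polling the libpod REST API over the socket its config mounts (/run/podman/podman.sock). A stdlib-only sketch of the same containers/json query; the endpoint path and socket path are copied from this log, while the connection class itself is illustrative:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client has no native Unix-socket support, so override connect().
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Endpoint as recorded in the exporter's access-log entry above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    print(resp.status, len(containers), "containers")
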
Dec 5 05:16:22 localhost podman[323880]: 2025-12-05 10:16:22.224382733 +0000 UTC m=+0.098820587 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2) Dec 5 05:16:22 localhost podman[323880]: 2025-12-05 10:16:22.355180857 +0000 UTC m=+0.229618731 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:16:22 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:16:22 localhost nova_compute[280228]: 2025-12-05 10:16:22.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:16:22 localhost nova_compute[280228]: 2025-12-05 10:16:22.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:16:22 localhost nova_compute[280228]: 2025-12-05 10:16:22.529 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:16:22 localhost nova_compute[280228]: 2025-12-05 10:16:22.530 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:16:22 localhost nova_compute[280228]: 2025-12-05 10:16:22.530 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:16:22 localhost nova_compute[280228]: 2025-12-05 10:16:22.531 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:16:22 localhost nova_compute[280228]: 2025-12-05 10:16:22.531 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:16:22 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:16:22 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2946632018' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:16:22 localhost nova_compute[280228]: 2025-12-05 10:16:22.992 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.055 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.056 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:16:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e228 do_prune osdmap full prune enabled Dec 5 05:16:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e229 e229: 6 total, 6 up, 6 in Dec 5 05:16:23 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e229: 6 total, 6 up, 6 in Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.312 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.313 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11116MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", 
"numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.314 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.314 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.373 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.373 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.373 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:16:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 33 KiB/s wr, 95 op/s Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.433 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:16:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:16:23 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2970110141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.902 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.908 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.935 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.938 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:16:23 localhost nova_compute[280228]: 2025-12-05 10:16:23.938 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:16:24 localhost nova_compute[280228]: 2025-12-05 10:16:24.344 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e229 do_prune osdmap full prune enabled Dec 5 05:16:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e230 e230: 6 total, 6 up, 6 in Dec 5 05:16:25 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e230: 6 total, 6 up, 6 in Dec 5 05:16:25 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "format": "json"}]: dispatch Dec 5 05:16:25 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d2beb6d7-8182-4e32-a746-b76132855fa5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:25 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d2beb6d7-8182-4e32-a746-b76132855fa5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:25 localhost 
ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:25.312+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd2beb6d7-8182-4e32-a746-b76132855fa5' of type subvolume Dec 5 05:16:25 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd2beb6d7-8182-4e32-a746-b76132855fa5' of type subvolume Dec 5 05:16:25 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d2beb6d7-8182-4e32-a746-b76132855fa5", "force": true, "format": "json"}]: dispatch Dec 5 05:16:25 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d2beb6d7-8182-4e32-a746-b76132855fa5, vol_name:cephfs) < "" Dec 5 05:16:25 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d2beb6d7-8182-4e32-a746-b76132855fa5'' moved to trashcan Dec 5 05:16:25 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:16:25 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d2beb6d7-8182-4e32-a746-b76132855fa5, vol_name:cephfs) < "" Dec 5 05:16:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 33 KiB/s wr, 95 op/s Dec 5 05:16:25 localhost nova_compute[280228]: 2025-12-05 10:16:25.577 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:25 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "89b8d3a9-86b3-485b-aa83-b34d33e5ba52", "format": "json"}]: dispatch Dec 5 05:16:25 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:89b8d3a9-86b3-485b-aa83-b34d33e5ba52, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:25 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:89b8d3a9-86b3-485b-aa83-b34d33e5ba52, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:25 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:25.942+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '89b8d3a9-86b3-485b-aa83-b34d33e5ba52' of type subvolume Dec 5 05:16:25 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '89b8d3a9-86b3-485b-aa83-b34d33e5ba52' of type subvolume Dec 5 05:16:25 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "89b8d3a9-86b3-485b-aa83-b34d33e5ba52", "force": true, "format": "json"}]: dispatch Dec 5 05:16:25 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:89b8d3a9-86b3-485b-aa83-b34d33e5ba52, vol_name:cephfs) < "" Dec 5 05:16:25 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/89b8d3a9-86b3-485b-aa83-b34d33e5ba52'' moved to trashcan Dec 5 05:16:25 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:16:25 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:89b8d3a9-86b3-485b-aa83-b34d33e5ba52, vol_name:cephfs) < "" Dec 5 05:16:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:16:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e230 do_prune osdmap full prune enabled Dec 5 05:16:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e231 e231: 6 total, 6 up, 6 in Dec 5 05:16:26 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e231: 6 total, 6 up, 6 in Dec 5 05:16:27 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e231 do_prune osdmap full prune enabled Dec 5 05:16:27 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e232 e232: 6 total, 6 up, 6 in Dec 5 05:16:27 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e232: 6 total, 6 up, 6 in Dec 5 05:16:27 localhost openstack_network_exporter[241668]: ERROR 10:16:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:16:27 localhost openstack_network_exporter[241668]: ERROR 10:16:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:16:27 localhost openstack_network_exporter[241668]: ERROR 10:16:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:16:27 localhost openstack_network_exporter[241668]: ERROR 10:16:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:16:27 localhost openstack_network_exporter[241668]: Dec 5 05:16:27 localhost openstack_network_exporter[241668]: ERROR 10:16:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:16:27 localhost openstack_network_exporter[241668]: Dec 5 05:16:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 96 KiB/s wr, 91 op/s Dec 5 05:16:27 localhost nova_compute[280228]: 2025-12-05 10:16:27.939 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:16:27 localhost nova_compute[280228]: 2025-12-05 10:16:27.940 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:16:27 localhost nova_compute[280228]: 2025-12-05 10:16:27.940 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:16:28 localhost 
nova_compute[280228]: 2025-12-05 10:16:28.039 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:16:28 localhost nova_compute[280228]: 2025-12-05 10:16:28.039 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:16:28 localhost nova_compute[280228]: 2025-12-05 10:16:28.040 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:16:28 localhost nova_compute[280228]: 2025-12-05 10:16:28.040 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:16:28 localhost nova_compute[280228]: 2025-12-05 10:16:28.466 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:16:28 localhost nova_compute[280228]: 2025-12-05 10:16:28.489 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:16:28 localhost nova_compute[280228]: 2025-12-05 10:16:28.490 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:16:28 localhost nova_compute[280228]: 2025-12-05 10:16:28.491 280232 DEBUG oslo_service.periodic_task [None 
req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:16:28 localhost nova_compute[280228]: 2025-12-05 10:16:28.491 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:16:29 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "484de3bf-67b8-4f95-9d4e-1049ce65d043", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:16:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:484de3bf-67b8-4f95-9d4e-1049ce65d043, vol_name:cephfs) < "" Dec 5 05:16:29 localhost nova_compute[280228]: 2025-12-05 10:16:29.393 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 69 KiB/s wr, 66 op/s Dec 5 05:16:29 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/484de3bf-67b8-4f95-9d4e-1049ce65d043/.meta.tmp' Dec 5 05:16:29 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/484de3bf-67b8-4f95-9d4e-1049ce65d043/.meta.tmp' to config b'/volumes/_nogroup/484de3bf-67b8-4f95-9d4e-1049ce65d043/.meta' Dec 5 05:16:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:484de3bf-67b8-4f95-9d4e-1049ce65d043, vol_name:cephfs) < "" Dec 5 05:16:29 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "484de3bf-67b8-4f95-9d4e-1049ce65d043", "format": "json"}]: dispatch Dec 5 05:16:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:484de3bf-67b8-4f95-9d4e-1049ce65d043, vol_name:cephfs) < "" Dec 5 05:16:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:484de3bf-67b8-4f95-9d4e-1049ce65d043, vol_name:cephfs) < "" Dec 5 05:16:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:16:29 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:16:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:16:29.995 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, 
old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:16:29 localhost ovn_metadata_agent[158815]: 2025-12-05 10:16:29.997 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:16:30 localhost nova_compute[280228]: 2025-12-05 10:16:30.000 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:30 localhost nova_compute[280228]: 2025-12-05 10:16:30.054 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:16:30 localhost nova_compute[280228]: 2025-12-05 10:16:30.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:16:30 localhost nova_compute[280228]: 2025-12-05 10:16:30.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:16:30 localhost nova_compute[280228]: 2025-12-05 10:16:30.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:16:30 localhost nova_compute[280228]: 2025-12-05 10:16:30.619 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:16:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e232 do_prune osdmap full prune enabled Dec 5 05:16:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e233 e233: 6 total, 6 up, 6 in Dec 5 05:16:31 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e233: 6 total, 6 up, 6 in Dec 5 05:16:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:16:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
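The oslo_concurrency.lockutils records above carry their own latency figures in the ":: waited Ns" and ":: held Ns" suffixes; the compute_resources lock, for example, is held for 0.624s across _update_available_resource. A minimal Python sketch for pulling those timings out of a journal dump like this one (the regexes and the lock_timings helper are illustrative, not part of any OpenStack API):

    import re

    # Matches oslo.concurrency lock lines as they appear above, e.g.
    #   Lock "compute_resources" acquired by "..." :: waited 0.001s
    #   Lock "compute_resources" "released" by "..." :: held 0.624s
    ACQUIRED = re.compile(r'Lock "(?P<lock>[^"]+)" acquired by "(?P<owner>[^"]+)" :: waited (?P<sec>[\d.]+)s')
    RELEASED = re.compile(r'Lock "(?P<lock>[^"]+)" "released" by "(?P<owner>[^"]+)" :: held (?P<sec>[\d.]+)s')

    def lock_timings(lines):
        """Yield (lock, owner, phase, seconds) for each acquire/release record."""
        for line in lines:
            for rx, phase in ((ACQUIRED, "waited"), (RELEASED, "held")):
                for m in rx.finditer(line):
                    yield m["lock"], m["owner"], phase, float(m["sec"])

Sorting the output by seconds is a quick way to spot long holds such as the 0.624s resource-tracker update above.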
Dec 5 05:16:31 localhost podman[323983]: 2025-12-05 10:16:31.21633727 +0000 UTC m=+0.101458039 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 05:16:31 localhost podman[323983]: 2025-12-05 10:16:31.22779147 +0000 UTC m=+0.112912269 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:16:31 localhost podman[323982]: 2025-12-05 10:16:31.271338243 +0000 UTC m=+0.155855122 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:16:31 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:16:31 localhost podman[323982]: 2025-12-05 10:16:31.315793574 +0000 UTC m=+0.200310433 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller) Dec 5 05:16:31 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
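Each podman healthcheck in this window logs a health_status event and then an exec_died event for the same container ID, both stamped with a monotonic "m=+<seconds>" offset. Assuming the two events of a pair come from the same podman invocation and therefore share one epoch (they do above: both node_exporter events are emitted by podman[323983]), the check duration falls out of the difference; a sketch, with hypothetical names:

    import re

    # Pair podman health_status/exec_died events by 64-hex container ID and
    # derive the healthcheck runtime from the m=+<seconds> monotonic offsets.
    # Assumes both events of a pair share one monotonic epoch (same process).
    EVENT = re.compile(r"m=\+(?P<mono>[\d.]+) container (?P<kind>health_status|exec_died) (?P<cid>[0-9a-f]{64})")

    def healthcheck_durations(lines):
        started = {}
        for line in lines:
            for m in EVENT.finditer(line):
                if m["kind"] == "health_status":
                    started[m["cid"]] = float(m["mono"])
                elif m["cid"] in started:
                    yield m["cid"][:12], float(m["mono"]) - started.pop(m["cid"])

For the node_exporter pair above this gives roughly 0.1129 - 0.1015 ≈ 0.011s.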
Dec 5 05:16:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 92 KiB/s wr, 97 op/s Dec 5 05:16:31 localhost nova_compute[280228]: 2025-12-05 10:16:31.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:16:32 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "484de3bf-67b8-4f95-9d4e-1049ce65d043", "format": "json"}]: dispatch Dec 5 05:16:32 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:484de3bf-67b8-4f95-9d4e-1049ce65d043, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:32 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:484de3bf-67b8-4f95-9d4e-1049ce65d043, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:32 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:32.739+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '484de3bf-67b8-4f95-9d4e-1049ce65d043' of type subvolume Dec 5 05:16:32 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '484de3bf-67b8-4f95-9d4e-1049ce65d043' of type subvolume Dec 5 05:16:32 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "484de3bf-67b8-4f95-9d4e-1049ce65d043", "force": true, "format": "json"}]: dispatch Dec 5 05:16:32 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:484de3bf-67b8-4f95-9d4e-1049ce65d043, vol_name:cephfs) < "" Dec 5 05:16:32 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/484de3bf-67b8-4f95-9d4e-1049ce65d043'' moved to trashcan Dec 5 05:16:32 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:16:32 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:484de3bf-67b8-4f95-9d4e-1049ce65d043, vol_name:cephfs) < "" Dec 5 05:16:33 localhost ovn_metadata_agent[158815]: 2025-12-05 10:16:33.000 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:16:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 75 KiB/s wr, 80 op/s Dec 5 05:16:34 localhost nova_compute[280228]: 2025-12-05 10:16:34.396 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 199 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 16 KiB/s wr, 23 op/s Dec 5 05:16:35 localhost nova_compute[280228]: 2025-12-05 10:16:35.651 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:35 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "33e10816-9bfc-48f9-9aa2-4d50e3101227", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:16:35 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:33e10816-9bfc-48f9-9aa2-4d50e3101227, vol_name:cephfs) < "" Dec 5 05:16:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:16:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e233 do_prune osdmap full prune enabled Dec 5 05:16:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e234 e234: 6 total, 6 up, 6 in Dec 5 05:16:36 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/33e10816-9bfc-48f9-9aa2-4d50e3101227/.meta.tmp' Dec 5 05:16:36 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/33e10816-9bfc-48f9-9aa2-4d50e3101227/.meta.tmp' to config b'/volumes/_nogroup/33e10816-9bfc-48f9-9aa2-4d50e3101227/.meta' Dec 5 05:16:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:33e10816-9bfc-48f9-9aa2-4d50e3101227, vol_name:cephfs) < "" Dec 5 05:16:36 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e234: 6 total, 6 up, 6 in Dec 5 05:16:36 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "33e10816-9bfc-48f9-9aa2-4d50e3101227", "format": "json"}]: dispatch Dec 5 05:16:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:33e10816-9bfc-48f9-9aa2-4d50e3101227, vol_name:cephfs) < "" Dec 5 05:16:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:33e10816-9bfc-48f9-9aa2-4d50e3101227, vol_name:cephfs) < "" Dec 5 05:16:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:16:36 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:16:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 55 KiB/s wr, 53 
op/s Dec 5 05:16:39 localhost nova_compute[280228]: 2025-12-05 10:16:39.400 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 52 KiB/s wr, 51 op/s Dec 5 05:16:40 localhost nova_compute[280228]: 2025-12-05 10:16:40.685 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0. Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.055821) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61 Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929801055864, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2133, "num_deletes": 272, "total_data_size": 2378641, "memory_usage": 2421880, "flush_reason": "Manual Compaction"} Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929801072679, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 2323914, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33699, "largest_seqno": 35831, "table_properties": {"data_size": 2314637, "index_size": 5654, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21601, "raw_average_key_size": 21, "raw_value_size": 2295154, "raw_average_value_size": 2297, "num_data_blocks": 244, "num_entries": 999, "num_filter_entries": 999, "num_deletions": 272, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929679, "oldest_key_time": 1764929679, "file_creation_time": 1764929801, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}} Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 16936 microseconds, and 9092 cpu microseconds. 
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.072752) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 2323914 bytes OK Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.072782) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.074656) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.074681) EVENT_LOG_v1 {"time_micros": 1764929801074674, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.074704) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2369198, prev total WAL file size 2369522, number of live WAL files 2. Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.075420) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323635' seq:72057594037927935, type:22 .. 
'6C6F676D0034353138' seq:0, type:0; will stop at (end) Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(2269KB)], [60(15MB)] Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929801075502, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 19082722, "oldest_snapshot_seqno": -1} Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 13501 keys, 18752438 bytes, temperature: kUnknown Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929801186968, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 18752438, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18673717, "index_size": 43868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33797, "raw_key_size": 361166, "raw_average_key_size": 26, "raw_value_size": 18442396, "raw_average_value_size": 1366, "num_data_blocks": 1655, "num_entries": 13501, "num_filter_entries": 13501, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764929801, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}} Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
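The JOB 36 compaction summary just below reports write-amplify(8.1) and read-write-amplify(16.3); both figures can be re-derived from the byte counts in the compaction_started and table_file_creation events. A quick check with the numbers taken directly from those records:

    # Amplification check for JOB 36, using byte counts from the events:
    l0_in = 2_323_914            # L0 input, table #62 (~2.2 MB)
    l6_in = 19_082_722 - l0_in   # input_data_size minus the L0 file => #60 (~16.0 MB)
    out   = 18_752_438           # output table #63 (~17.9 MB)

    write_amp = out / l0_in                    # 8.07  -> "write-amplify(8.1)"
    rw_amp    = (l0_in + l6_in + out) / l0_in  # 16.28 -> "read-write-amplify(16.3)"
    print(f"write-amplify={write_amp:.1f} read-write-amplify={rw_amp:.1f}")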
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.187367) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 18752438 bytes
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.189061) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.1 rd, 168.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 16.0 +0.0 blob) out(17.9 +0.0 blob), read-write-amplify(16.3) write-amplify(8.1) OK, records in: 14062, records dropped: 561 output_compression: NoCompression
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.189093) EVENT_LOG_v1 {"time_micros": 1764929801189079, "job": 36, "event": "compaction_finished", "compaction_time_micros": 111559, "compaction_time_cpu_micros": 40993, "output_level": 6, "num_output_files": 1, "total_output_size": 18752438, "num_input_records": 14062, "num_output_records": 13501, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929801189630, "job": 36, "event": "table_file_deletion", "file_number": 62}
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929801192101, "job": 36, "event": "table_file_deletion", "file_number": 60}
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.075323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.192163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.192168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.192170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.192171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:16:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:16:41.192173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:16:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 33 KiB/s wr, 37 op/s
Dec 5 05:16:41 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "33e10816-9bfc-48f9-9aa2-4d50e3101227", "format": "json"}]: dispatch
Dec 5 05:16:41 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:33e10816-9bfc-48f9-9aa2-4d50e3101227, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:16:41 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:33e10816-9bfc-48f9-9aa2-4d50e3101227, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:16:41 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '33e10816-9bfc-48f9-9aa2-4d50e3101227' of type subvolume
Dec 5 05:16:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:41.607+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '33e10816-9bfc-48f9-9aa2-4d50e3101227' of type subvolume
Dec 5 05:16:41 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "33e10816-9bfc-48f9-9aa2-4d50e3101227", "force": true, "format": "json"}]: dispatch
Dec 5 05:16:41 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:33e10816-9bfc-48f9-9aa2-4d50e3101227, vol_name:cephfs) < ""
Dec 5 05:16:41 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/33e10816-9bfc-48f9-9aa2-4d50e3101227'' moved to trashcan
Dec 5 05:16:41 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:16:41 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:33e10816-9bfc-48f9-9aa2-4d50e3101227, vol_name:cephfs) < ""
Dec 5 05:16:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:16:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:16:43 localhost podman[324031]: 2025-12-05 10:16:43.213384663 +0000 UTC m=+0.089780239 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 5 05:16:43 localhost systemd[1]: tmp-crun.lDoWNp.mount: Deactivated successfully.
Dec 5 05:16:43 localhost podman[324030]: 2025-12-05 10:16:43.26814189 +0000 UTC m=+0.146093025 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 5 05:16:43 localhost podman[324031]: 2025-12-05 10:16:43.281103277 +0000 UTC m=+0.157498903 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc.)
Dec 5 05:16:43 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:16:43 localhost podman[324030]: 2025-12-05 10:16:43.335904705 +0000 UTC m=+0.213855810 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 5 05:16:43 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:16:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 32 KiB/s wr, 36 op/s
Dec 5 05:16:44 localhost nova_compute[280228]: 2025-12-05 10:16:44.426 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:16:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:16:45
Dec 5 05:16:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 5 05:16:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap
Dec 5 05:16:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['manila_data', 'manila_metadata', 'backups', 'images', 'vms', 'volumes', '.mgr']
Dec 5 05:16:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes
Dec 5 05:16:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:16:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:16:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:16:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:16:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:16:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:16:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 32 KiB/s wr, 36 op/s
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002712673611111111 quantized to 32 (current 32)
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:16:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0005722514482132887 of space, bias 4.0, pg target 0.45551215277777785 quantized to 16 (current 16)
Dec 5 05:16:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 5 05:16:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:16:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 5 05:16:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:16:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:16:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:16:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:16:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:16:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:16:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:16:45 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "01a37854-0dc7-41dc-a61a-5647fabb505d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:16:45 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:01a37854-0dc7-41dc-a61a-5647fabb505d, vol_name:cephfs) < ""
Dec 5 05:16:45 localhost nova_compute[280228]: 2025-12-05 10:16:45.726 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:16:45 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/01a37854-0dc7-41dc-a61a-5647fabb505d/.meta.tmp'
Dec 5 05:16:45 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/01a37854-0dc7-41dc-a61a-5647fabb505d/.meta.tmp' to config b'/volumes/_nogroup/01a37854-0dc7-41dc-a61a-5647fabb505d/.meta'
Dec 5 05:16:45 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:01a37854-0dc7-41dc-a61a-5647fabb505d, vol_name:cephfs) < ""
Dec 5 05:16:45 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "01a37854-0dc7-41dc-a61a-5647fabb505d", "format": "json"}]: dispatch
Dec 5 05:16:45 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:01a37854-0dc7-41dc-a61a-5647fabb505d, vol_name:cephfs) < ""
Dec 5 05:16:45 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:01a37854-0dc7-41dc-a61a-5647fabb505d, vol_name:cephfs) < ""
Dec 5 05:16:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:16:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:16:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:16:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 53 KiB/s wr, 35 op/s
Dec 5 05:16:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e234 do_prune osdmap full prune enabled
Dec 5 05:16:47 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e235 e235: 6 total, 6 up, 6 in
Dec 5 05:16:47 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e235: 6 total, 6 up, 6 in
Dec 5 05:16:48 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:16:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < ""
Dec 5 05:16:48 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/.meta.tmp'
Dec 5 05:16:48 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/.meta.tmp' to config b'/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/.meta'
Dec 5 05:16:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < ""
Dec 5 05:16:48 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "format": "json"}]: dispatch
Dec 5 05:16:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < ""
Dec 5 05:16:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < ""
Dec 5 05:16:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:16:48 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:16:49 localhost nova_compute[280228]: 2025-12-05 10:16:49.429 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:16:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 9.9 KiB/s rd, 30 KiB/s wr, 16 op/s
Dec 5 05:16:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e235 do_prune osdmap full prune enabled
Dec 5 05:16:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e236 e236: 6 total, 6 up, 6 in
Dec 5 05:16:49 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e236: 6 total, 6 up, 6 in
Dec 5 05:16:49 localhost podman[239519]: time="2025-12-05T10:16:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:16:49 localhost podman[239519]: @ - - [05/Dec/2025:10:16:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1"
Dec 5 05:16:49 localhost podman[239519]: @ - - [05/Dec/2025:10:16:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19279 "" "Go-http-client/1.1"
Dec 5 05:16:50 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "01a37854-0dc7-41dc-a61a-5647fabb505d", "format": "json"}]: dispatch
Dec 5 05:16:50 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:01a37854-0dc7-41dc-a61a-5647fabb505d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:16:50 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:01a37854-0dc7-41dc-a61a-5647fabb505d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:16:50 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '01a37854-0dc7-41dc-a61a-5647fabb505d' of type subvolume
Dec 5 05:16:50 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:50.095+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '01a37854-0dc7-41dc-a61a-5647fabb505d' of type subvolume
Dec 5 05:16:50 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "01a37854-0dc7-41dc-a61a-5647fabb505d", "force": true, "format": "json"}]: dispatch
Dec 5 05:16:50 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:01a37854-0dc7-41dc-a61a-5647fabb505d, vol_name:cephfs) < ""
Dec 5 05:16:50 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/01a37854-0dc7-41dc-a61a-5647fabb505d'' moved to trashcan
Dec 5 05:16:50 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:16:50 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:01a37854-0dc7-41dc-a61a-5647fabb505d, vol_name:cephfs) < ""
Dec 5 05:16:50 localhost nova_compute[280228]: 2025-12-05 10:16:50.555 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:16:50 localhost nova_compute[280228]: 2025-12-05 10:16:50.758 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:16:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:16:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v601: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 60 KiB/s wr, 49 op/s
Dec 5 05:16:51 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:16:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, vol_name:cephfs) < ""
Dec 5 05:16:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca/.meta.tmp'
Dec 5 05:16:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca/.meta.tmp' to config b'/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca/.meta'
Dec 5 05:16:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, vol_name:cephfs) < ""
Dec 5 05:16:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "format": "json"}]: dispatch
Dec 5 05:16:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, vol_name:cephfs) < ""
Dec 5 05:16:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, vol_name:cephfs) < ""
Dec 5 05:16:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:16:52 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:16:52 localhost nova_compute[280228]: 2025-12-05 10:16:52.610 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:16:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:16:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:16:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:16:53 localhost systemd[1]: tmp-crun.8FI9an.mount: Deactivated successfully.
Dec 5 05:16:53 localhost podman[324069]: 2025-12-05 10:16:53.187422577 +0000 UTC m=+0.072094978 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 5 05:16:53 localhost podman[324068]: 2025-12-05 10:16:53.201162647 +0000 UTC m=+0.087314194 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 5 05:16:53 localhost podman[324069]: 2025-12-05 10:16:53.215362372 +0000 UTC m=+0.100034763 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:16:53 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:16:53 localhost podman[324068]: 2025-12-05 10:16:53.23227216 +0000 UTC m=+0.118423737 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 5 05:16:53 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:16:53 localhost podman[324072]: 2025-12-05 10:16:53.324072751 +0000 UTC m=+0.200174580 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:16:53 localhost podman[324072]: 2025-12-05 10:16:53.334294444 +0000 UTC m=+0.210396273 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 5 05:16:53 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:16:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 60 KiB/s wr, 50 op/s
Dec 5 05:16:54 localhost nova_compute[280228]: 2025-12-05 10:16:54.431 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:16:55 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch
Dec 5 05:16:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < ""
Dec 5 05:16:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0)
Dec 5 05:16:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 5 05:16:55 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID tempest-cephx-id-2094750145 with tenant 702f5d76a7514945a7e621e4e93fb7f0
Dec 5 05:16:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca/5e4c632f-fa60-41da-a9f7-9e777915367e", "osd", "allow rw pool=manila_data namespace=fsvolumens_e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "mon", "allow r"], "format": "json"} v 0)
Dec 5 05:16:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca/5e4c632f-fa60-41da-a9f7-9e777915367e", "osd", "allow rw pool=manila_data namespace=fsvolumens_e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:16:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca/5e4c632f-fa60-41da-a9f7-9e777915367e", "osd", "allow rw pool=manila_data namespace=fsvolumens_e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:16:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 25 KiB/s wr, 45 op/s Dec 5 05:16:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < "" Dec 5 05:16:55 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:16:55 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca/5e4c632f-fa60-41da-a9f7-9e777915367e", "osd", "allow rw pool=manila_data namespace=fsvolumens_e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:16:55 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca/5e4c632f-fa60-41da-a9f7-9e777915367e", "osd", "allow rw pool=manila_data namespace=fsvolumens_e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:16:55 localhost nova_compute[280228]: 2025-12-05 10:16:55.791 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:16:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e236 do_prune osdmap full prune enabled Dec 5 05:16:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e237 e237: 6 total, 6 up, 6 in Dec 5 05:16:56 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e237: 6 total, 6 up, 6 in Dec 5 05:16:56 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea920704-3133-4d40-a979-346396e08bfd", "format": "json"}]: dispatch Dec 5 05:16:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea920704-3133-4d40-a979-346396e08bfd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea920704-3133-4d40-a979-346396e08bfd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:56 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea920704-3133-4d40-a979-346396e08bfd' of type subvolume Dec 5 05:16:56 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:56.117+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'ea920704-3133-4d40-a979-346396e08bfd' of type subvolume Dec 5 05:16:56 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea920704-3133-4d40-a979-346396e08bfd", "force": true, "format": "json"}]: dispatch Dec 5 05:16:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea920704-3133-4d40-a979-346396e08bfd, vol_name:cephfs) < "" Dec 5 05:16:56 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea920704-3133-4d40-a979-346396e08bfd'' moved to trashcan Dec 5 05:16:56 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:16:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea920704-3133-4d40-a979-346396e08bfd, vol_name:cephfs) < "" Dec 5 05:16:57 localhost openstack_network_exporter[241668]: ERROR 10:16:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:16:57 localhost openstack_network_exporter[241668]: ERROR 10:16:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:16:57 localhost openstack_network_exporter[241668]: ERROR 10:16:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:16:57 localhost openstack_network_exporter[241668]: ERROR 10:16:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:16:57 localhost openstack_network_exporter[241668]: Dec 5 05:16:57 localhost openstack_network_exporter[241668]: ERROR 10:16:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:16:57 localhost openstack_network_exporter[241668]: Dec 5 05:16:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 78 KiB/s wr, 60 op/s Dec 5 05:16:58 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch Dec 5 05:16:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, vol_name:cephfs) < "" Dec 5 05:16:58 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0) Dec 5 05:16:58 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:16:58 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} v 0) Dec 5 05:16:58 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 
172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch Dec 5 05:16:58 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished Dec 5 05:16:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, vol_name:cephfs) < "" Dec 5 05:16:58 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch Dec 5 05:16:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, vol_name:cephfs) < "" Dec 5 05:16:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2094750145, client_metadata.root=/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca/5e4c632f-fa60-41da-a9f7-9e777915367e Dec 5 05:16:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:16:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, vol_name:cephfs) < "" Dec 5 05:16:58 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "format": "json"}]: dispatch Dec 5 05:16:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:16:58 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e03f250f-e45a-4ebf-8ff8-90d739fc49ca' of type subvolume Dec 5 05:16:58 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:16:58.948+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e03f250f-e45a-4ebf-8ff8-90d739fc49ca' of type subvolume Dec 5 05:16:58 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e03f250f-e45a-4ebf-8ff8-90d739fc49ca", "force": true, "format": "json"}]: dispatch Dec 5 05:16:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, vol_name:cephfs) < "" Dec 5 
05:16:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e03f250f-e45a-4ebf-8ff8-90d739fc49ca'' moved to trashcan Dec 5 05:16:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:16:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e03f250f-e45a-4ebf-8ff8-90d739fc49ca, vol_name:cephfs) < "" Dec 5 05:16:59 localhost nova_compute[280228]: 2025-12-05 10:16:59.435 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:16:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 63 KiB/s wr, 49 op/s Dec 5 05:16:59 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:16:59 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch Dec 5 05:16:59 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished Dec 5 05:17:00 localhost nova_compute[280228]: 2025-12-05 10:17:00.830 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:17:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 239 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 74 op/s Dec 5 05:17:01 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:17:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:87604c98-95ac-4fbd-9129-c8a4a3776866, vol_name:cephfs) < "" Dec 5 05:17:01 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866/.meta.tmp' Dec 5 05:17:01 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866/.meta.tmp' to config b'/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866/.meta' Dec 5 05:17:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:87604c98-95ac-4fbd-9129-c8a4a3776866, vol_name:cephfs) < "" Dec 5 05:17:01 localhost ceph-mgr[286454]: log_channel(audit) 
log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "format": "json"}]: dispatch Dec 5 05:17:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:87604c98-95ac-4fbd-9129-c8a4a3776866, vol_name:cephfs) < "" Dec 5 05:17:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:87604c98-95ac-4fbd-9129-c8a4a3776866, vol_name:cephfs) < "" Dec 5 05:17:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:17:01 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:17:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e237 do_prune osdmap full prune enabled Dec 5 05:17:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e238 e238: 6 total, 6 up, 6 in Dec 5 05:17:02 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e238: 6 total, 6 up, 6 in Dec 5 05:17:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:17:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:17:02 localhost systemd[1]: tmp-crun.iBfHHW.mount: Deactivated successfully. Dec 5 05:17:02 localhost podman[324130]: 2025-12-05 10:17:02.203009367 +0000 UTC m=+0.087312204 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3) Dec 5 05:17:02 localhost podman[324131]: 2025-12-05 10:17:02.187839762 +0000 UTC m=+0.069750497 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:17:02 localhost podman[324131]: 2025-12-05 10:17:02.266912774 +0000 UTC m=+0.148823489 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:17:02 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
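The ovn_controller and node_exporter entries above show the EDPM healthcheck cycle end to end: systemd starts a transient `/usr/bin/podman healthcheck run <container-id>` unit, podman records a `health_status=healthy` event, the exec process dies, and the unit deactivates successfully. A minimal sketch of driving that same check by hand, assuming podman is on PATH and that a zero exit code means the container's configured test (here `/openstack/healthcheck`) passed; the container names are taken from the log, the loop itself is illustrative:

#!/usr/bin/env python3
# Minimal sketch of the check the transient systemd units above run:
# `podman healthcheck run <name>` executes the container's configured
# healthcheck test and exits non-zero when it fails. Container names are
# taken from the log; the polling wrapper is an assumption, not edpm code.
import subprocess

CONTAINERS = ["ovn_controller", "node_exporter"]

def is_healthy(name: str) -> bool:
    # capture output so a failing check's message does not pollute stdout
    proc = subprocess.run(["podman", "healthcheck", "run", name],
                          capture_output=True, text=True)
    return proc.returncode == 0

if __name__ == "__main__":
    for name in CONTAINERS:
        print(f"{name}: {'healthy' if is_healthy(name) else 'unhealthy'}")
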
Dec 5 05:17:02 localhost podman[324130]: 2025-12-05 10:17:02.318164203 +0000 UTC m=+0.202466930 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 5 05:17:02 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0. Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.875150) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64 Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929822875181, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 597, "num_deletes": 252, "total_data_size": 348094, "memory_usage": 359464, "flush_reason": "Manual Compaction"} Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929822879600, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 340806, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35832, "largest_seqno": 36428, "table_properties": {"data_size": 337730, "index_size": 995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8262, "raw_average_key_size": 20, "raw_value_size": 331153, "raw_average_value_size": 821, "num_data_blocks": 43, "num_entries": 403, "num_filter_entries": 403, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", 
"merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929801, "oldest_key_time": 1764929801, "file_creation_time": 1764929822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}} Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 4482 microseconds, and 1343 cpu microseconds. Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.879632) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 340806 bytes OK Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.879651) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.881131) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.881145) EVENT_LOG_v1 {"time_micros": 1764929822881141, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.881161) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 344719, prev total WAL file size 344719, number of live WAL files 2. Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.881538) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. 
'7061786F73003132383032' seq:0, type:0; will stop at (end) Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(332KB)], [63(17MB)] Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929822881567, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 19093244, "oldest_snapshot_seqno": -1} Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 13380 keys, 17803491 bytes, temperature: kUnknown Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929822957419, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 17803491, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17726780, "index_size": 42178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33477, "raw_key_size": 359285, "raw_average_key_size": 26, "raw_value_size": 17498767, "raw_average_value_size": 1307, "num_data_blocks": 1577, "num_entries": 13380, "num_filter_entries": 13380, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764929822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}} Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
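The RocksDB entries above are one complete flush-plus-compaction cycle inside the mon store: job 37 flushes a 597-entry memtable to L0 table #65 (340806 bytes), job 38 then manually compacts that table together with L6 table #63 into table #66 (17803491 bytes) and deletes both inputs. The amplification figures in the summary line are consistent with those sizes: write-amplify(52.2) is 17803491 / 340806, and read-write-amplify(108.3) is (19093244 + 17803491) / 340806. A minimal sketch that pulls the machine-readable EVENT_LOG_v1 payloads back out of a journal extract like this one and summarizes the jobs; the input file name is an assumption, and the field names match the events shown above:

#!/usr/bin/env python3
# Minimal sketch: extract RocksDB EVENT_LOG_v1 JSON payloads from a
# ceph-mon journal extract and summarize flush/compaction jobs.
import json
import sys
from collections import defaultdict

MARK = "EVENT_LOG_v1 "
DECODER = json.JSONDecoder()

def summarize(path: str) -> None:
    jobs = defaultdict(list)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            idx = line.find(MARK)
            while idx != -1:
                try:
                    # raw_decode tolerates trailing text, which matters here
                    # because several journal entries share one physical line
                    event, _ = DECODER.raw_decode(line, idx + len(MARK))
                    jobs[event.get("job", -1)].append(event)
                except ValueError:
                    pass
                idx = line.find(MARK, idx + len(MARK))
    for job, events in sorted(jobs.items()):
        for ev in events:
            if ev.get("event") == "flush_finished":
                print(f"job {job}: flush done, lsm_state={ev['lsm_state']}")
            elif ev.get("event") == "compaction_finished":
                mib = ev["total_output_size"] / 2**20
                secs = ev["compaction_time_micros"] / 1e6
                print(f"job {job}: compaction wrote {mib:.1f} MiB in {secs:.3f}s")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "ceph-mon-extract.log")
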
Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.957857) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 17803491 bytes Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.959954) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 251.2 rd, 234.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 17.9 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(108.3) write-amplify(52.2) OK, records in: 13904, records dropped: 524 output_compression: NoCompression Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.960018) EVENT_LOG_v1 {"time_micros": 1764929822959998, "job": 38, "event": "compaction_finished", "compaction_time_micros": 76004, "compaction_time_cpu_micros": 30865, "output_level": 6, "num_output_files": 1, "total_output_size": 17803491, "num_input_records": 13904, "num_output_records": 13380, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929822960338, "job": 38, "event": "table_file_deletion", "file_number": 65} Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929822962920, "job": 38, "event": "table_file_deletion", "file_number": 63} Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.881485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.962966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.962971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.962972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.962974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:17:02 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:17:02.962975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:17:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e238 do_prune osdmap full prune enabled Dec 5 05:17:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e239 e239: 6 total, 6 up, 6 in Dec 5 05:17:03 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e239: 6 total, 6 up, 6 in Dec 5 05:17:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap 
v610: 177 pgs: 177 active+clean; 247 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 3.0 MiB/s wr, 98 op/s Dec 5 05:17:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:17:03.922 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:17:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:17:03.923 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:17:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:17:03.924 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:17:04 localhost nova_compute[280228]: 2025-12-05 10:17:04.438 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:05 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:17:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:87604c98-95ac-4fbd-9129-c8a4a3776866, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < "" Dec 5 05:17:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0) Dec 5 05:17:05 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:05 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID tempest-cephx-id-2094750145 with tenant 702f5d76a7514945a7e621e4e93fb7f0 Dec 5 05:17:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866/1ae02bd3-e301-4c48-8e82-c476a0edaac3", "osd", "allow rw pool=manila_data namespace=fsvolumens_87604c98-95ac-4fbd-9129-c8a4a3776866", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:17:05 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866/1ae02bd3-e301-4c48-8e82-c476a0edaac3", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_87604c98-95ac-4fbd-9129-c8a4a3776866", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:05 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866/1ae02bd3-e301-4c48-8e82-c476a0edaac3", "osd", "allow rw pool=manila_data namespace=fsvolumens_87604c98-95ac-4fbd-9129-c8a4a3776866", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:87604c98-95ac-4fbd-9129-c8a4a3776866, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < "" Dec 5 05:17:05 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:05 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866/1ae02bd3-e301-4c48-8e82-c476a0edaac3", "osd", "allow rw pool=manila_data namespace=fsvolumens_87604c98-95ac-4fbd-9129-c8a4a3776866", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:05 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866/1ae02bd3-e301-4c48-8e82-c476a0edaac3", "osd", "allow rw pool=manila_data namespace=fsvolumens_87604c98-95ac-4fbd-9129-c8a4a3776866", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v611: 177 pgs: 177 active+clean; 247 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 2.7 MiB/s wr, 84 op/s Dec 5 05:17:05 localhost nova_compute[280228]: 2025-12-05 10:17:05.877 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:17:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:17:06 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2175213793' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:17:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 294 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 5.4 MiB/s wr, 148 op/s Dec 5 05:17:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e239 do_prune osdmap full prune enabled Dec 5 05:17:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e240 e240: 6 total, 6 up, 6 in Dec 5 05:17:08 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e240: 6 total, 6 up, 6 in Dec 5 05:17:08 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch Dec 5 05:17:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:87604c98-95ac-4fbd-9129-c8a4a3776866, vol_name:cephfs) < "" Dec 5 05:17:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0) Dec 5 05:17:08 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} v 0) Dec 5 05:17:08 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch Dec 5 05:17:08 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished Dec 5 05:17:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:87604c98-95ac-4fbd-9129-c8a4a3776866, vol_name:cephfs) < "" Dec 5 05:17:08 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch Dec 5 05:17:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:87604c98-95ac-4fbd-9129-c8a4a3776866, vol_name:cephfs) < "" Dec 5 05:17:08 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2094750145, client_metadata.root=/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866/1ae02bd3-e301-4c48-8e82-c476a0edaac3 Dec 5 05:17:08 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:17:08 localhost 
ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:87604c98-95ac-4fbd-9129-c8a4a3776866, vol_name:cephfs) < "" Dec 5 05:17:08 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "format": "json"}]: dispatch Dec 5 05:17:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:87604c98-95ac-4fbd-9129-c8a4a3776866, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:17:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:87604c98-95ac-4fbd-9129-c8a4a3776866, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:17:08 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:17:08.851+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '87604c98-95ac-4fbd-9129-c8a4a3776866' of type subvolume Dec 5 05:17:08 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '87604c98-95ac-4fbd-9129-c8a4a3776866' of type subvolume Dec 5 05:17:08 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "87604c98-95ac-4fbd-9129-c8a4a3776866", "force": true, "format": "json"}]: dispatch Dec 5 05:17:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:87604c98-95ac-4fbd-9129-c8a4a3776866, vol_name:cephfs) < "" Dec 5 05:17:08 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/87604c98-95ac-4fbd-9129-c8a4a3776866'' moved to trashcan Dec 5 05:17:08 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:17:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:87604c98-95ac-4fbd-9129-c8a4a3776866, vol_name:cephfs) < "" Dec 5 05:17:09 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:09 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch Dec 5 05:17:09 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished Dec 5 05:17:09 localhost nova_compute[280228]: 2025-12-05 10:17:09.441 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 294 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 2.9 MiB/s wr, 69 op/s Dec 5 05:17:09 localhost ceph-mon[292820]: 
mon.np0005546419@0(leader).osd e240 do_prune osdmap full prune enabled Dec 5 05:17:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e241 e241: 6 total, 6 up, 6 in Dec 5 05:17:09 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e241: 6 total, 6 up, 6 in Dec 5 05:17:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 5 05:17:10 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2439844479' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 5 05:17:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 5 05:17:10 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2439844479' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 5 05:17:10 localhost nova_compute[280228]: 2025-12-05 10:17:10.905 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:10 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7b2c6949-7a2d-43e1-b721-1a9688202f80", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:17:10 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7b2c6949-7a2d-43e1-b721-1a9688202f80, vol_name:cephfs) < "" Dec 5 05:17:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:17:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 294 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 5.3 MiB/s rd, 5.0 MiB/s wr, 153 op/s Dec 5 05:17:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7b2c6949-7a2d-43e1-b721-1a9688202f80/.meta.tmp' Dec 5 05:17:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7b2c6949-7a2d-43e1-b721-1a9688202f80/.meta.tmp' to config b'/volumes/_nogroup/7b2c6949-7a2d-43e1-b721-1a9688202f80/.meta' Dec 5 05:17:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7b2c6949-7a2d-43e1-b721-1a9688202f80, vol_name:cephfs) < "" Dec 5 05:17:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7b2c6949-7a2d-43e1-b721-1a9688202f80", "format": "json"}]: dispatch Dec 5 05:17:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7b2c6949-7a2d-43e1-b721-1a9688202f80, vol_name:cephfs) < "" Dec 5 05:17:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs 
subvolume getpath, sub_name:7b2c6949-7a2d-43e1-b721-1a9688202f80, vol_name:cephfs) < "" Dec 5 05:17:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:17:11 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:17:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:17:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, vol_name:cephfs) < "" Dec 5 05:17:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75/.meta.tmp' Dec 5 05:17:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75/.meta.tmp' to config b'/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75/.meta' Dec 5 05:17:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, vol_name:cephfs) < "" Dec 5 05:17:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "format": "json"}]: dispatch Dec 5 05:17:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, vol_name:cephfs) < "" Dec 5 05:17:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, vol_name:cephfs) < "" Dec 5 05:17:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:17:11 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:17:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e241 do_prune osdmap full prune enabled Dec 5 05:17:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e242 e242: 6 total, 6 up, 6 in Dec 5 05:17:12 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e242: 6 total, 6 up, 6 in Dec 5 05:17:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 294 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 121 op/s Dec 5 05:17:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
Dec 5 05:17:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:17:14 localhost podman[324200]: 2025-12-05 10:17:14.004418079 +0000 UTC m=+0.072951495 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350) Dec 5 05:17:14 localhost podman[324200]: 2025-12-05 10:17:14.04166851 +0000 UTC m=+0.110201896 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter) Dec 5 05:17:14 localhost podman[324199]: 2025-12-05 10:17:14.054588755 +0000 UTC m=+0.129984451 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 5 05:17:14 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:17:14 localhost podman[324199]: 2025-12-05 10:17:14.097055075 +0000 UTC m=+0.172450721 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd) Dec 5 05:17:14 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. 
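The multipathd and openstack_network_exporter entries above carry the full Kolla-style `config_data` dict in their container labels: image, host networking, privileged mode, restart policy, environment, the healthcheck mount, and the volume list. A minimal sketch of how such a dict maps onto a podman invocation, using a trimmed copy of the multipathd data; the real edpm_ansible role generates its own flags, so this translation is illustrative only:

#!/usr/bin/env python3
# Minimal sketch: translate a Kolla-style config_data dict (as recorded in
# the container labels above, trimmed here) into a `podman run` command.
CONFIG = {
    "image": "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified",
    "net": "host",
    "privileged": True,
    "restart": "always",
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "volumes": ["/dev:/dev", "/sys:/sys", "/lib/modules:/lib/modules:ro"],
}

def to_podman_run(name: str, cfg: dict) -> list[str]:
    cmd = ["podman", "run", "--detach", "--name", name]
    if cfg.get("net") == "host":
        cmd.append("--net=host")          # host networking, as in the log
    if cfg.get("privileged"):
        cmd.append("--privileged")
    if "restart" in cfg:
        cmd.append(f"--restart={cfg['restart']}")
    for key, val in cfg.get("environment", {}).items():
        cmd += ["--env", f"{key}={val}"]
    for vol in cfg.get("volumes", []):
        cmd += ["--volume", vol]
    cmd.append(cfg["image"])
    return cmd

print(" ".join(to_podman_run("multipathd", CONFIG)))
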
Dec 5 05:17:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e242 do_prune osdmap full prune enabled Dec 5 05:17:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e243 e243: 6 total, 6 up, 6 in Dec 5 05:17:14 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e243: 6 total, 6 up, 6 in Dec 5 05:17:14 localhost nova_compute[280228]: 2025-12-05 10:17:14.444 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 05:17:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:17:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 05:17:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:17:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:17:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:17:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:17:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:17:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7b2c6949-7a2d-43e1-b721-1a9688202f80", "format": "json"}]: dispatch Dec 5 05:17:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7b2c6949-7a2d-43e1-b721-1a9688202f80, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:17:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7b2c6949-7a2d-43e1-b721-1a9688202f80, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:17:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:17:14.838+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7b2c6949-7a2d-43e1-b721-1a9688202f80' of type subvolume Dec 5 05:17:14 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7b2c6949-7a2d-43e1-b721-1a9688202f80' of type subvolume Dec 5 05:17:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7b2c6949-7a2d-43e1-b721-1a9688202f80", "force": true, "format": "json"}]: dispatch Dec 5 05:17:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7b2c6949-7a2d-43e1-b721-1a9688202f80, 
vol_name:cephfs) < "" Dec 5 05:17:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7b2c6949-7a2d-43e1-b721-1a9688202f80'' moved to trashcan Dec 5 05:17:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:17:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7b2c6949-7a2d-43e1-b721-1a9688202f80, vol_name:cephfs) < "" Dec 5 05:17:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:17:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < "" Dec 5 05:17:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0) Dec 5 05:17:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID tempest-cephx-id-2094750145 with tenant 702f5d76a7514945a7e621e4e93fb7f0 Dec 5 05:17:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75/8845d7b0-73a1-433b-8965-fdf32cd8dc5d", "osd", "allow rw pool=manila_data namespace=fsvolumens_7cb70929-0aaf-4a14-a66d-91468bfe4b75", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:17:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75/8845d7b0-73a1-433b-8965-fdf32cd8dc5d", "osd", "allow rw pool=manila_data namespace=fsvolumens_7cb70929-0aaf-4a14-a66d-91468bfe4b75", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:14 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75/8845d7b0-73a1-433b-8965-fdf32cd8dc5d", "osd", "allow rw pool=manila_data namespace=fsvolumens_7cb70929-0aaf-4a14-a66d-91468bfe4b75", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, 
sub_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < "" Dec 5 05:17:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:17:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:17:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:17:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Dec 5 05:17:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 5 05:17:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:17:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:17:15 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:17:15 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:17:15 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:17:15 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:17:15 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:15 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75/8845d7b0-73a1-433b-8965-fdf32cd8dc5d", "osd", "allow rw pool=manila_data namespace=fsvolumens_7cb70929-0aaf-4a14-a66d-91468bfe4b75", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:15 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75/8845d7b0-73a1-433b-8965-fdf32cd8dc5d", "osd", "allow rw pool=manila_data namespace=fsvolumens_7cb70929-0aaf-4a14-a66d-91468bfe4b75", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 294 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 121 op/s Dec 5 05:17:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:17:15 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:17:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:17:15 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:17:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, 
Dec 5 05:17:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:17:15 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:17:15 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 66034f42-d53a-46fe-aa69-826ce029bc76 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:17:15 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 66034f42-d53a-46fe-aa69-826ce029bc76 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:17:15 localhost ceph-mgr[286454]: [progress INFO root] Completed event 66034f42-d53a-46fe-aa69-826ce029bc76 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 5 05:17:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 05:17:15 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 05:17:15 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events
Dec 5 05:17:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 05:17:15 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:17:15 localhost nova_compute[280228]: 2025-12-05 10:17:15.948 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:17:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 5 05:17:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:17:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e243 do_prune osdmap full prune enabled
Dec 5 05:17:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e244 e244: 6 total, 6 up, 6 in
Dec 5 05:17:16 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e244: 6 total, 6 up, 6 in
Dec 5 05:17:16 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:17:16 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:17:16 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:17:17 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e52: np0005546419.zhsnqq(active, since 15m), standbys: np0005546420.aoeylc, np0005546421.sukfea
Dec 5 05:17:17 localhost neutron_sriov_agent[254996]: 2025-12-05 10:17:17.383 2 INFO neutron.agent.securitygroups_rpc [None req-2a5fe3a0-7b5a-4c57-ac4a-29ef69a28174 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Security group rule updated ['811851ce-aefb-4b50-bb3d-fd5f8bc97e90']#033[00m
Dec 5 05:17:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 202 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 144 KiB/s rd, 590 KiB/s wr, 90 op/s
Dec 5 05:17:17 localhost neutron_sriov_agent[254996]: 2025-12-05 10:17:17.570 2 INFO neutron.agent.securitygroups_rpc [None req-7838533a-af27-4a68-8cd9-2dbd66b75065 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Security group rule updated ['811851ce-aefb-4b50-bb3d-fd5f8bc97e90']#033[00m
Dec 5 05:17:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 5 05:17:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, vol_name:cephfs) < ""
Dec 5 05:17:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0)
Dec 5 05:17:18 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 5 05:17:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} v 0)
Dec 5 05:17:18 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 5 05:17:18 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 5 05:17:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, vol_name:cephfs) < ""
Dec 5 05:17:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 5 05:17:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, vol_name:cephfs) < ""
Dec 5 05:17:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2094750145, client_metadata.root=/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75/8845d7b0-73a1-433b-8965-fdf32cd8dc5d
Dec 5 05:17:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 5 05:17:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, vol_name:cephfs) < ""
Dec 5 05:17:18 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 5 05:17:18 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 5 05:17:18 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 5 05:17:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "format": "json"}]: dispatch
Dec 5 05:17:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:17:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:17:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:17:18.468+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7cb70929-0aaf-4a14-a66d-91468bfe4b75' of type subvolume
Dec 5 05:17:18 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7cb70929-0aaf-4a14-a66d-91468bfe4b75' of type subvolume
Dec 5 05:17:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7cb70929-0aaf-4a14-a66d-91468bfe4b75", "force": true, "format": "json"}]: dispatch
Dec 5 05:17:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, vol_name:cephfs) < ""
Dec 5 05:17:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7cb70929-0aaf-4a14-a66d-91468bfe4b75'' moved to trashcan
Dec 5 05:17:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:17:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7cb70929-0aaf-4a14-a66d-91468bfe4b75, vol_name:cephfs) < ""
Dec 5 05:17:19 localhost nova_compute[280228]: 2025-12-05 10:17:19.448 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:17:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 202 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 65 KiB/s wr, 73 op/s
Dec 5 05:17:19 localhost podman[239519]: time="2025-12-05T10:17:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:17:19 localhost podman[239519]: @ - - [05/Dec/2025:10:17:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1"
Dec 5 05:17:19 localhost podman[239519]: @ - - [05/Dec/2025:10:17:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19274 "" "Go-http-client/1.1"
Dec 5 05:17:20 localhost nova_compute[280228]: 2025-12-05 10:17:20.989 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:17:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:17:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e244 do_prune osdmap full prune enabled
Dec 5 05:17:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e245 e245: 6 total, 6 up, 6 in
Dec 5 05:17:21 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e245: 6 total, 6 up, 6 in
Dec 5 05:17:21 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch
Dec 5 05:17:21 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < ""
Dec 5 05:17:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0)
Dec 5 05:17:21 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 5 05:17:21 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID tempest-cephx-id-2094750145 with tenant 702f5d76a7514945a7e621e4e93fb7f0
Dec 5 05:17:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} v 0)
Dec 5 05:17:21 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:17:21 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:17:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 202 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 103 KiB/s wr, 78 op/s
Dec 5 05:17:21 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < ""
Dec 5 05:17:22 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 5 05:17:22 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:17:22 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:17:22 localhost nova_compute[280228]: 2025-12-05 10:17:22.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:17:22 localhost nova_compute[280228]: 2025-12-05 10:17:22.528 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:17:22 localhost nova_compute[280228]: 2025-12-05 10:17:22.528 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:17:22 localhost nova_compute[280228]: 2025-12-05 10:17:22.529 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
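
The "Acquiring/acquired/released" triplets above, with their waited/held timings, come from oslo.concurrency's named-lock helper, which nova uses to serialize resource-tracker updates. A minimal sketch of the same pattern, assuming the public oslo.concurrency decorator (the function body is illustrative):

    from oslo_concurrency import lockutils

    # Serialize updates to shared tracker state on the named in-process lock;
    # lockutils logs the wait and hold durations automatically, producing
    # lines like the lockutils.py:404/409/423 entries above.
    @lockutils.synchronized('compute_resources')
    def update_tracker(state: dict, instance: str, vcpus: int) -> None:
        state[instance] = state.get(instance, 0) + vcpus  # mutate only under the lock
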
Dec 5 05:17:22 localhost nova_compute[280228]: 2025-12-05 10:17:22.529 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 5 05:17:22 localhost nova_compute[280228]: 2025-12-05 10:17:22.529 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 05:17:22 localhost ovn_controller[153000]: 2025-12-05T10:17:22Z|00382|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 5 05:17:22 localhost neutron_sriov_agent[254996]: 2025-12-05 10:17:22.958 2 INFO neutron.agent.securitygroups_rpc [req-883c4d7b-47a9-4234-a8cb-46707ff21e47 req-1c8366a0-660d-41e6-a45d-0d8b43d7ab6d 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Security group member updated ['811851ce-aefb-4b50-bb3d-fd5f8bc97e90']#033[00m
Dec 5 05:17:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:17:23 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2254812203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.037 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.109 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.110 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
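
To size its Ceph-backed storage, nova shells out to `ceph df --format=json` (the processutils entries above) and reads the cluster totals from the JSON. A sketch of that step; the key names ("stats", "total_bytes", "total_avail_bytes") follow the usual `ceph df` JSON layout, but verify them against your Ceph release:

    import json
    import subprocess

    # Derive total/free capacity in GiB the way the audit step above does,
    # reusing the exact command line from the log.
    def cluster_capacity_gib(conf="/etc/ceph/ceph.conf", client="openstack"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf])
        stats = json.loads(out)["stats"]
        gib = 1024 ** 3
        return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib
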
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.332 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.334 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11078MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.335 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.335 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.426 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.427 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.428 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 5 05:17:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 202 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 93 KiB/s wr, 70 op/s
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.483 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 5 05:17:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:17:23 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2669198364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.935 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.943 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.963 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.966 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 5 05:17:23 localhost nova_compute[280228]: 2025-12-05 10:17:23.966 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:17:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:17:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:17:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:17:24 localhost podman[324409]: 2025-12-05 10:17:24.196656064 +0000 UTC m=+0.077309848 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 05:17:24 localhost systemd[1]: tmp-crun.i0jgnu.mount: Deactivated successfully.
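
The podman_exporter config above points CONTAINER_HOST at unix:///run/podman/podman.sock, which is the same libpod REST endpoint behind the "GET /v4.9.3/libpod/containers/json" access-log lines earlier. A stdlib-only sketch of that kind of call; the HTTPConnection subclass is our own shim, not a podman-provided client, and the socket path is taken from the config above:

    import http.client
    import json
    import socket

    # Speak HTTP to podman's libpod API over its unix socket.
    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
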
Dec 5 05:17:24 localhost podman[324410]: 2025-12-05 10:17:24.21417034 +0000 UTC m=+0.088441189 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 5 05:17:24 localhost podman[324410]: 2025-12-05 10:17:24.248732368 +0000 UTC m=+0.123003247 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 5 05:17:24 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:17:24 localhost podman[324409]: 2025-12-05 10:17:24.290792396 +0000 UTC m=+0.171446210 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 05:17:24 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:17:24 localhost podman[324411]: 2025-12-05 10:17:24.334041711 +0000 UTC m=+0.203638337 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 05:17:24 localhost podman[324411]: 2025-12-05 10:17:24.348852324 +0000 UTC m=+0.218448950 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125)
Dec 5 05:17:24 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
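
The health_status=healthy events above come from systemd-triggered `podman healthcheck run <id>` invocations of the configured 'test' command ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/<service>): exit 0 records healthy, non-zero records unhealthy, and the exec_died entry marks the probe process exiting. A sketch of such a probe script; the checked socket path is illustrative only, not what these containers actually test:

    #!/usr/bin/env python3
    # Minimal container healthcheck in the style of the 'test' entries above:
    # exit status is the health verdict podman records.
    import socket
    import sys

    def check(path="/run/openvswitch/db.sock") -> int:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)   # reachable socket => service alive
            return 0
        except OSError:
            return 1
        finally:
            s.close()

    sys.exit(check())
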
Dec 5 05:17:24 localhost nova_compute[280228]: 2025-12-05 10:17:24.451 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:17:24 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 5 05:17:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < ""
Dec 5 05:17:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0)
Dec 5 05:17:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 5 05:17:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} v 0)
Dec 5 05:17:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 5 05:17:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 5 05:17:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < ""
Dec 5 05:17:24 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 5 05:17:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < ""
Dec 5 05:17:24 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2094750145, client_metadata.root=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083
Dec 5 05:17:24 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 5 05:17:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < ""
Dec 5 05:17:24 localhost nova_compute[280228]: 2025-12-05 10:17:24.968 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:17:24 localhost nova_compute[280228]: 2025-12-05 10:17:24.991 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:17:25 localhost systemd[1]: tmp-crun.ou4Yue.mount: Deactivated successfully.
Dec 5 05:17:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 202 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 79 KiB/s wr, 58 op/s
Dec 5 05:17:25 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 5 05:17:25 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 5 05:17:25 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 5 05:17:26 localhost nova_compute[280228]: 2025-12-05 10:17:26.012 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:17:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:17:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:17:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:17:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/.meta.tmp'
Dec 5 05:17:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/.meta.tmp' to config b'/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/.meta'
Dec 5 05:17:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:17:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "format": "json"}]: dispatch
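
The ".meta.tmp" write followed by a rename above is the classic atomic-update pattern: the subvolume's config file is never observable half-written, because the rename swaps it in as a single filesystem operation. A generic sketch of the same idea in stdlib Python (paths and the mkstemp-based temp naming are illustrative; the mgr uses its own fixed ".meta.tmp" name):

    import os
    import tempfile

    # Write payload to a temp file in the same directory, then atomically
    # swap it into place; readers see either the old file or the new one.
    def write_meta(path: str, payload: bytes) -> None:
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path), suffix=".tmp")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(payload)
                f.flush()
                os.fsync(f.fileno())  # durable before it becomes visible
            os.replace(tmp, path)     # atomic within one filesystem
        except BaseException:
            os.unlink(tmp)
            raise
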
Dec 5 05:17:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:17:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:17:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:17:26 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:17:27 localhost openstack_network_exporter[241668]: ERROR 10:17:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:17:27 localhost openstack_network_exporter[241668]: ERROR 10:17:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:17:27 localhost openstack_network_exporter[241668]: ERROR 10:17:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:17:27 localhost openstack_network_exporter[241668]: ERROR 10:17:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:17:27 localhost openstack_network_exporter[241668]:
Dec 5 05:17:27 localhost openstack_network_exporter[241668]: ERROR 10:17:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:17:27 localhost openstack_network_exporter[241668]:
Dec 5 05:17:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 2.2 MiB/s wr, 43 op/s
Dec 5 05:17:27 localhost nova_compute[280228]: 2025-12-05 10:17:27.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:17:28 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch
Dec 5 05:17:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < ""
Dec 5 05:17:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0)
Dec 5 05:17:28 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 5 05:17:28 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID tempest-cephx-id-2094750145 with tenant 702f5d76a7514945a7e621e4e93fb7f0
Dec 5 05:17:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} v 0)
Dec 5 05:17:28 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:17:28 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:17:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < ""
Dec 5 05:17:28 localhost nova_compute[280228]: 2025-12-05 10:17:28.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:17:28 localhost nova_compute[280228]: 2025-12-05 10:17:28.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 5 05:17:28 localhost nova_compute[280228]: 2025-12-05 10:17:28.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 5 05:17:28 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 5 05:17:28 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:17:28 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:17:28 localhost nova_compute[280228]: 2025-12-05 10:17:28.700 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 5 05:17:28 localhost nova_compute[280228]: 2025-12-05 10:17:28.700 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 5 05:17:28 localhost nova_compute[280228]: 2025-12-05 10:17:28.701 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 5 05:17:28 localhost nova_compute[280228]: 2025-12-05 10:17:28.701 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 5 05:17:29 localhost nova_compute[280228]: 2025-12-05 10:17:29.453 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:17:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 2.2 MiB/s wr, 43 op/s
Dec 5 05:17:29 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 5 05:17:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < ""
Dec 5 05:17:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 5 05:17:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 5 05:17:29 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice with tenant 362b693fa42f4124be6d6249e2b9052d
Dec 5 05:17:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0)
Dec 5 05:17:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:17:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:17:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < ""
Dec 5 05:17:29 localhost nova_compute[280228]: 2025-12-05 10:17:29.958 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 5 05:17:29 localhost nova_compute[280228]: 2025-12-05 10:17:29.980 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 5 05:17:29 localhost nova_compute[280228]: 2025-12-05 10:17:29.981 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
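
The network_info blob in the "Updating instance_info_cache" entry above nests addresses three levels deep: VIF -> network -> subnets -> ips, with floating IPs attached to each fixed IP. A sketch of walking that structure, trimmed to the fields it reads (values copied from the log):

    import json

    # Extract fixed and floating addresses from a network_info cache entry.
    network_info = json.loads('''[{"network": {"subnets": [{"ips": [
      {"address": "192.168.0.214", "type": "fixed",
       "floating_ips": [{"address": "192.168.122.20", "type": "floating"}]}
    ]}]}}]''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(ip["address"], "->", floats)  # 192.168.0.214 -> ['192.168.122.20']
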
Dec 5 05:17:29 localhost nova_compute[280228]: 2025-12-05 10:17:29.982 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:17:30 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 5 05:17:30 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:17:30 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:17:31 localhost nova_compute[280228]: 2025-12-05 10:17:31.062 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:17:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:17:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 5 05:17:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < ""
Dec 5 05:17:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 95 op/s
Dec 5 05:17:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0)
Dec 5 05:17:31 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 5 05:17:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} v 0)
Dec 5 05:17:31 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 5 05:17:31 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 5 05:17:31 localhost nova_compute[280228]: 2025-12-05 10:17:31.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:17:31 localhost nova_compute[280228]: 2025-12-05 10:17:31.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 5 05:17:31 localhost nova_compute[280228]: 2025-12-05 10:17:31.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 5 05:17:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < ""
Dec 5 05:17:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch
Dec 5 05:17:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < ""
Dec 5 05:17:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2094750145, client_metadata.root=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083
Dec 5 05:17:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 5 05:17:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < ""
Dec 5 05:17:31 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch
Dec 5 05:17:31 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch
Dec 5 05:17:31 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished
Dec 5 05:17:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:17:32.366 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit':
'0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:17:32 localhost nova_compute[280228]: 2025-12-05 10:17:32.367 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:17:32.368 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:17:32 localhost nova_compute[280228]: 2025-12-05 10:17:32.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:17:33 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:17:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:17:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
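[annotation] The "fs subvolume authorize" dispatches above show the Manila-to-CephFS handshake: the mgr volumes module resolves the subvolume path, then asks the mon for "auth get" / "auth get-or-create" with three paired capability strings (an mds cap scoped to the subvolume path, an osd cap scoped to the manila_data pool plus a per-subvolume RADOS namespace, and a read-only mon cap). Below is a minimal sketch reconstructing the equivalent CLI call from the JSON payload logged at 05:17:29 for client.alice; the command itself was not captured in this log, and the argv layout assumes the standard "ceph auth get-or-create" form.

    import subprocess

    # Caps come in (entity-type, capability) pairs, exactly as in the
    # mon_command JSON logged above for client.alice.
    caps = [
        "mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea",
        "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363",
        "mon", "allow r",
    ]
    subprocess.run(
        ["ceph", "auth", "get-or-create", "client.alice", *caps, "--format", "json"],
        check=True,
    )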
Dec 5 05:17:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 5 05:17:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:17:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Dec 5 05:17:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:17:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:17:33 localhost podman[324472]: 2025-12-05 10:17:33.216540424 +0000 UTC m=+0.101656163 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:17:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:33 localhost podman[324472]: 2025-12-05 10:17:33.258680626 +0000 UTC m=+0.143796405 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 5 05:17:33 localhost podman[324473]: 2025-12-05 10:17:33.269409003 +0000 UTC m=+0.148343343 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:17:33 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
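[annotation] Each "Started /usr/bin/podman healthcheck run <id>" / "exec_died" / "Deactivated successfully" triplet above is one transient systemd unit wrapping a single healthcheck execution; podman then records the outcome as a container event with health_status=healthy. A hedged sketch of re-running the same check by hand (container name taken from the log; exit-code semantics are podman's documented 0-on-healthy behaviour):

    import subprocess

    # "podman healthcheck run" executes the container's configured test once;
    # it exits 0 when the test passes, matching the health_status=healthy
    # events for ovn_controller above.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")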
Dec 5 05:17:33 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:17:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:33 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:17:33 localhost podman[324473]: 2025-12-05 10:17:33.28170456 +0000 UTC m=+0.160638860 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 05:17:33 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:17:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:33 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
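[annotation] The sequence above pairs "fs subvolume deauthorize" (drop the cephx key) with "fs subvolume evict" (terminate any sessions still mounted with that key): the volumes module matches client sessions on auth_name and on client_metadata.root equal to the subvolume path, asks the MDSes to evict them, and logs "evict: joined all" once its evictor threads complete. A sketch of the dispatched evict as a CLI call, assuming the standard positional form of the mgr volumes commands:

    import subprocess

    # Equivalent of the cmd=[{"prefix": "fs subvolume evict", ...}] dispatch
    # above for client.alice on the 66fc337c... subvolume.
    subprocess.run(
        ["ceph", "fs", "subvolume", "evict", "cephfs",
         "66fc337c-1267-4a35-81f5-115366d33363", "alice"],
        check=True,
    )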
Dec 5 05:17:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 108 op/s Dec 5 05:17:33 localhost nova_compute[280228]: 2025-12-05 10:17:33.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:17:33 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:17:33 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:17:33 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:17:34 localhost nova_compute[280228]: 2025-12-05 10:17:34.456 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:34 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:17:34 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < "" Dec 5 05:17:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0) Dec 5 05:17:34 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:34 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID tempest-cephx-id-2094750145 with tenant 702f5d76a7514945a7e621e4e93fb7f0 Dec 5 05:17:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:17:34 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", 
"mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:34 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:34 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < "" Dec 5 05:17:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v632: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 107 op/s Dec 5 05:17:35 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:35 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:35 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:17:36 localhost nova_compute[280228]: 2025-12-05 10:17:36.107 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:36 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch Dec 5 05:17:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:17:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 5 05:17:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : 
from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:17:36 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:17:36 localhost ovn_metadata_agent[158815]: 2025-12-05 10:17:36.370 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:17:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:17:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:17:36 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:17:36 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:36 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 
05:17:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 250 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 112 op/s Dec 5 05:17:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch Dec 5 05:17:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < "" Dec 5 05:17:38 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0) Dec 5 05:17:38 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:38 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} v 0) Dec 5 05:17:38 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch Dec 5 05:17:38 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished Dec 5 05:17:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < "" Dec 5 05:17:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch Dec 5 05:17:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < "" Dec 5 05:17:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2094750145, client_metadata.root=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083 Dec 5 05:17:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:17:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < "" Dec 5 05:17:38 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : 
dispatch Dec 5 05:17:38 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch Dec 5 05:17:38 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished Dec 5 05:17:39 localhost nova_compute[280228]: 2025-12-05 10:17:39.459 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 250 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 96 KiB/s wr, 79 op/s Dec 5 05:17:39 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:17:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 5 05:17:39 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:17:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Dec 5 05:17:39 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:17:39 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:17:39 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:17:39 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:17:39 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:17:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:39 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:17:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:39 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:17:39 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:17:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:17:41 localhost nova_compute[280228]: 2025-12-05 10:17:41.144 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:41 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "tenant_id": "702f5d76a7514945a7e621e4e93fb7f0", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:17:41 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < "" Dec 5 05:17:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0) Dec 5 05:17:41 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:41 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID tempest-cephx-id-2094750145 with tenant 702f5d76a7514945a7e621e4e93fb7f0 Dec 5 05:17:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:17:41 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:41 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' 
entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 263 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 103 op/s Dec 5 05:17:41 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume authorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, tenant_id:702f5d76a7514945a7e621e4e93fb7f0, vol_name:cephfs) < "" Dec 5 05:17:42 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:42 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:42 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2094750145", "caps": ["mds", "allow rw path=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083", "osd", "allow rw pool=manila_data namespace=fsvolumens_935cda55-64f9-4813-ba6c-3b58541848f7", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:43 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:17:43 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:17:43 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:17:43 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:17:43 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice_bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:17:43 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:17:43 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:43 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:43 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:17:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 271 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 898 KiB/s rd, 2.1 MiB/s wr, 62 op/s Dec 5 05:17:43 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:17:43 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:43 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:17:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
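[annotation] Note how access_level propagates into the caps: the earlier authorize of client.alice with access_level:r produced "allow r" mds/osd caps, while the alice_bob grant above, with access_level:rw, produced "allow rw"; the mon cap stays "allow r" either way, and the osd cap is always confined to the per-subvolume RADOS namespace fsvolumens_<sub_name> inside manila_data. A small helper mirroring that pairing (illustrative only; manila_caps is not a function appearing in this log):

    def manila_caps(path: str, namespace: str, level: str = "rw",
                    pool: str = "manila_data") -> list[str]:
        """Build (entity-type, capability) pairs like the get-or-create payloads above."""
        return [
            "mds", f"allow {level} path={path}",
            "osd", f"allow {level} pool={pool} namespace={namespace}",
            "mon", "allow r",
        ]

    print(manila_caps(
        "/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea",
        "fsvolumens_66fc337c-1267-4a35-81f5-115366d33363",
        level="rw",
    ))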
Dec 5 05:17:44 localhost podman[324524]: 2025-12-05 10:17:44.202135689 +0000 UTC m=+0.094675330 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
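[annotation] The healthcheck test for this container reads "/openstack/healthcheck openstack-netwo". The argument is consistent with matching against the kernel's comm field, which holds at most 15 visible bytes (TASK_COMM_LEN is 16 including the NUL), so "openstack-network-exporter" can only ever appear as "openstack-netwo". The 15-byte limit is a kernel fact; that the healthcheck script matches on comm is an assumption inferred from the config string. A one-liner showing the truncation:

    # The kernel stores a process name (comm) in at most 15 visible bytes,
    # so the full binary name can never match; the config pre-truncates it.
    print("openstack-network-exporter"[:15])  # -> openstack-netwo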
Dec 5 05:17:44 localhost podman[324524]: 2025-12-05 10:17:44.246756085 +0000 UTC m=+0.139295716 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, version=9.6, architecture=x86_64, distribution-scope=public, config_id=edpm, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41) Dec 5 05:17:44 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:17:44 localhost systemd[1]: tmp-crun.SrIDGD.mount: Deactivated successfully. 
Dec 5 05:17:44 localhost podman[324542]: 2025-12-05 10:17:44.312024813 +0000 UTC m=+0.097045721 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 5 05:17:44 localhost podman[324542]: 2025-12-05 10:17:44.325529837 +0000 UTC m=+0.110550505 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:17:44 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:17:44 localhost nova_compute[280228]: 2025-12-05 10:17:44.464 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch Dec 5 05:17:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < "" Dec 5 05:17:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} v 0) Dec 5 05:17:44 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} v 0) Dec 5 05:17:44 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch Dec 5 05:17:44 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished Dec 5 05:17:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume deauthorize, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < "" Dec 5 05:17:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "auth_id": "tempest-cephx-id-2094750145", "format": "json"}]: dispatch Dec 5 05:17:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < "" Dec 5 05:17:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2094750145, client_metadata.root=/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7/a15e2ccd-4655-4790-b1ce-80f94e72d083 Dec 5 05:17:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:17:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2094750145, format:json, prefix:fs subvolume evict, 
sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < "" Dec 5 05:17:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:17:45 Dec 5 05:17:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:17:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:17:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['vms', 'manila_data', 'volumes', '.mgr', 'images', 'manila_metadata', 'backups'] Dec 5 05:17:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:17:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:17:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:17:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:17:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:17:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 271 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 2.1 MiB/s wr, 34 op/s Dec 5 05:17:45 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2094750145", "format": "json"} : dispatch Dec 5 05:17:45 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"} : dispatch Dec 5 05:17:45 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2094750145"}]': finished Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006178261824167132 of space, bias 1.0, pg target 1.2356523648334263 quantized to 32 (current 32) Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014866541910943606 of space, bias 1.0, pg target 0.29584418402777773 quantized to 32 (current 32) Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32) Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.4071718546435884e-05 quantized to 32 (current 32) Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 
0.0 0 45071990784 Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.453674623115578e-06 of space, bias 1.0, pg target 0.00048664546691792296 quantized to 32 (current 32) Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:17:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0008509707100316397 of space, bias 4.0, pg target 0.6751034299584341 quantized to 16 (current 16) Dec 5 05:17:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:17:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:17:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:17:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:17:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:17:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:17:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:17:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:17:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:17:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:17:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:17:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:17:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:17:46 localhost nova_compute[280228]: 2025-12-05 10:17:46.210 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:46 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:17:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:17:46 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:17:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Dec 5 05:17:46 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:17:46 localhost 
ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:17:46 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:17:46 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:17:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:46 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:17:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:46 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:17:46 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:17:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 361 KiB/s rd, 2.3 MiB/s wr, 80 op/s Dec 5 05:17:47 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:17:48 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "format": "json"}]: dispatch Dec 5 05:17:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:935cda55-64f9-4813-ba6c-3b58541848f7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:17:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:935cda55-64f9-4813-ba6c-3b58541848f7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:17:48 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:17:48.510+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '935cda55-64f9-4813-ba6c-3b58541848f7' of type subvolume Dec 5 05:17:48 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '935cda55-64f9-4813-ba6c-3b58541848f7' of type subvolume 
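
[Editor's note] The pg_autoscaler burst logged at 05:17:45 above is consistent with a raw target of usage_ratio x bias x (OSD count x target PGs per OSD / replica count), then quantized to a power of two no lower than the pool's pg_num_min. Below is a minimal recomputation sketch, assuming mon_target_pg_per_osd=100 and 3x replication (neither value appears in the log; the OSD count of 6 does, in the later "6 total, 6 up, 6 in" osdmap line), with per-pool pg_num_min inferred from the logged "quantized to" results. The real module also folds in per-CRUSH-root capacity, which is why manila_metadata's raw value here lands slightly above the logged 0.6751.

```python
# Recompute the pg_autoscaler targets from the 05:17:45 log burst.
# ASSUMPTIONS (not in the log): mon_target_pg_per_osd=100, replicas=3,
# and the pg_num_min values below, inferred from the quantized results.
# Usage ratios and biases are copied verbatim from the log lines above.

def pg_target(usage_ratio: float, bias: float,
              osds: int = 6, target_per_osd: int = 100, replicas: int = 3) -> float:
    """Raw (unquantized) PG target, matching the logged 'pg target' values."""
    return usage_ratio * bias * (osds * target_per_osd / replicas)

def quantize(raw: float, pg_num_min: int) -> int:
    """Round up to the next power of two, but never below pg_num_min."""
    n = pg_num_min
    while n < raw:
        n *= 2
    return n

#            name               usage ratio             bias  pg_num_min  current
pools = {
    ".mgr":            (3.080724804578448e-05, 1.0, 1,  1),
    "vms":             (0.006178261824167132,  1.0, 32, 32),
    "manila_metadata": (0.0008509707100316397, 4.0, 16, 16),
}
for name, (ratio, bias, pg_min, current) in pools.items():
    raw = pg_target(ratio, bias)
    print(f"{name}: raw {raw:.6f}, quantized {quantize(raw, pg_min)} (current {current})")
```

Running this reproduces the logged values for .mgr (0.006161 -> 1) and vms (1.235652 -> 32) exactly, which supports the assumed constants.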
Dec 5 05:17:48 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "935cda55-64f9-4813-ba6c-3b58541848f7", "force": true, "format": "json"}]: dispatch Dec 5 05:17:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < "" Dec 5 05:17:48 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/935cda55-64f9-4813-ba6c-3b58541848f7'' moved to trashcan Dec 5 05:17:48 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:17:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:935cda55-64f9-4813-ba6c-3b58541848f7, vol_name:cephfs) < "" Dec 5 05:17:49 localhost nova_compute[280228]: 2025-12-05 10:17:49.464 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 361 KiB/s rd, 2.2 MiB/s wr, 76 op/s Dec 5 05:17:49 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch Dec 5 05:17:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:17:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:17:49 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:17:49 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice_bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:17:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:17:49 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", 
"allow r"], "format": "json"} : dispatch Dec 5 05:17:49 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:17:49 localhost podman[239519]: time="2025-12-05T10:17:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:17:49 localhost podman[239519]: @ - - [05/Dec/2025:10:17:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:17:49 localhost podman[239519]: @ - - [05/Dec/2025:10:17:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19273 "" "Go-http-client/1.1" Dec 5 05:17:50 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d483e3c7-2e03-4510-b92a-d90a49e04bba", "format": "json"}]: dispatch Dec 5 05:17:50 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d483e3c7-2e03-4510-b92a-d90a49e04bba, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:17:50 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d483e3c7-2e03-4510-b92a-d90a49e04bba, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:17:50 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d483e3c7-2e03-4510-b92a-d90a49e04bba", "format": "json"}]: dispatch Dec 5 05:17:50 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d483e3c7-2e03-4510-b92a-d90a49e04bba, vol_name:cephfs) < "" Dec 5 05:17:50 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d483e3c7-2e03-4510-b92a-d90a49e04bba, vol_name:cephfs) < "" Dec 5 05:17:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:17:50 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:17:50 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:17:50 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:50 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:17:51 localhost nova_compute[280228]: 2025-12-05 10:17:51.267 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 362 KiB/s rd, 2.3 MiB/s wr, 80 op/s Dec 5 05:17:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d483e3c7-2e03-4510-b92a-d90a49e04bba", "format": "json"}]: dispatch Dec 5 05:17:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d483e3c7-2e03-4510-b92a-d90a49e04bba, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:17:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d483e3c7-2e03-4510-b92a-d90a49e04bba, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:17:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d483e3c7-2e03-4510-b92a-d90a49e04bba", "force": true, "format": "json"}]: dispatch Dec 5 05:17:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d483e3c7-2e03-4510-b92a-d90a49e04bba, vol_name:cephfs) < "" Dec 5 05:17:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d483e3c7-2e03-4510-b92a-d90a49e04bba'' moved to trashcan Dec 5 05:17:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:17:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d483e3c7-2e03-4510-b92a-d90a49e04bba, vol_name:cephfs) < "" Dec 5 05:17:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:17:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 
05:17:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:17:53 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:17:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Dec 5 05:17:53 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:17:53 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:17:53 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:53 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:17:53 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:53 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:17:53 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:17:53 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 331 KiB/s rd, 1.1 MiB/s wr, 56 op/s Dec 5 05:17:53 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:17:53 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:17:53 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:17:54 localhost nova_compute[280228]: 2025-12-05 10:17:54.468 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. 
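
[Editor's note] The alice_bob teardown above follows a fixed order: `fs subvolume deauthorize` drops the cephx identity (the mon-side `auth get` / `auth rm` pair), and only then does `fs subvolume evict` disconnect any client whose client_metadata.root matches the subvolume path — removing caps alone does not tear down already-established sessions, which is what the explicit evict is for. A minimal sketch of driving the same pair of mgr commands via the ceph CLI from Python, assuming a reachable cluster and a keyring with sufficient caps (volume, subvolume, and auth_id names taken from the log):

```python
import subprocess

def ceph(*args: str) -> str:
    """Run a ceph CLI command, raising on a non-zero exit."""
    result = subprocess.run(["ceph", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

vol = "cephfs"
sub = "66fc337c-1267-4a35-81f5-115366d33363"
auth_id = "alice_bob"

# 1. Remove the cephx identity's caps for this subvolume; this is what
#    drives the 'auth get' then 'auth rm' pair seen on the mon above.
ceph("fs", "subvolume", "deauthorize", vol, sub, auth_id)

# 2. Evict any sessions still mounted under the subvolume path, so the
#    revocation takes effect immediately rather than at reconnect.
ceph("fs", "subvolume", "evict", vol, sub, auth_id)
```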
Dec 5 05:17:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:17:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:17:55 localhost podman[324565]: 2025-12-05 10:17:55.214278887 +0000 UTC m=+0.100389784 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:17:55 localhost podman[324566]: 2025-12-05 10:17:55.266919059 +0000 UTC m=+0.146779955 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Dec 5 05:17:55 localhost podman[324565]: 2025-12-05 10:17:55.298065393 +0000 UTC m=+0.184176290 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:17:55 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 05:17:55 localhost podman[324566]: 2025-12-05 10:17:55.353072546 +0000 UTC m=+0.232933472 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Dec 5 05:17:55 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
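
[Editor's note] Each health probe above follows the same lifecycle: systemd starts a transient unit named after the container ID, the unit runs `podman healthcheck run <id>` once, podman records health_status=healthy and an exec_died event, and the unit deactivates. A rough Python equivalent of what one such unit executes (container names taken from the log; a real deployment runs the IDs, not the names, and the error handling here is simplified):

```python
import subprocess

def probe(container: str) -> bool:
    """Run the container's own healthcheck once, as the transient
    systemd units in the log do via 'podman healthcheck run'."""
    result = subprocess.run(
        ["podman", "healthcheck", "run", container],
        capture_output=True, text=True,
    )
    # Exit status 0 means the check command inside the container passed.
    return result.returncode == 0

if __name__ == "__main__":
    for name in ("podman_exporter", "ovn_metadata_agent",
                 "ceilometer_agent_compute"):
        print(name, "healthy" if probe(name) else "unhealthy")
```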
Dec 5 05:17:55 localhost podman[324567]: 2025-12-05 10:17:55.371976616 +0000 UTC m=+0.249765569 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 05:17:55 localhost podman[324567]: 2025-12-05 10:17:55.382398525 +0000 UTC m=+0.260187458 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 05:17:55 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:17:55 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "snap_name": "aa63b329-df82-4a19-aa39-a051e214eb1e_9eff7ce6-e5ba-47c0-8e34-473dc31708f3", "force": true, "format": "json"}]: dispatch Dec 5 05:17:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa63b329-df82-4a19-aa39-a051e214eb1e_9eff7ce6-e5ba-47c0-8e34-473dc31708f3, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, vol_name:cephfs) < "" Dec 5 05:17:55 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta.tmp' Dec 5 05:17:55 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta.tmp' to config b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta' Dec 5 05:17:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa63b329-df82-4a19-aa39-a051e214eb1e_9eff7ce6-e5ba-47c0-8e34-473dc31708f3, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, vol_name:cephfs) < "" Dec 5 05:17:55 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "snap_name": "aa63b329-df82-4a19-aa39-a051e214eb1e", "force": true, "format": "json"}]: dispatch Dec 5 05:17:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa63b329-df82-4a19-aa39-a051e214eb1e, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, vol_name:cephfs) < "" Dec 5 05:17:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 283 KiB/s rd, 270 KiB/s wr, 51 op/s Dec 5 05:17:55 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta.tmp' Dec 5 05:17:55 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta.tmp' to config b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb/.meta' Dec 5 05:17:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aa63b329-df82-4a19-aa39-a051e214eb1e, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, vol_name:cephfs) < "" Dec 5 05:17:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:17:56 localhost 
nova_compute[280228]: 2025-12-05 10:17:56.306 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:56 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:17:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:17:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:17:56 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:17:56 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:17:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:17:56 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:56 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:17:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:17:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow 
rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:17:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:17:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e245 do_prune osdmap full prune enabled Dec 5 05:17:57 localhost openstack_network_exporter[241668]: ERROR 10:17:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:17:57 localhost openstack_network_exporter[241668]: ERROR 10:17:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:17:57 localhost openstack_network_exporter[241668]: ERROR 10:17:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:17:57 localhost openstack_network_exporter[241668]: ERROR 10:17:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:17:57 localhost openstack_network_exporter[241668]: Dec 5 05:17:57 localhost openstack_network_exporter[241668]: ERROR 10:17:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:17:57 localhost openstack_network_exporter[241668]: Dec 5 05:17:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e246 e246: 6 total, 6 up, 6 in Dec 5 05:17:57 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e246: 6 total, 6 up, 6 in Dec 5 05:17:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 97 KiB/s wr, 10 op/s Dec 5 05:17:58 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "63723774-424e-4107-9f82-8c494eaae0eb", "format": "json"}]: dispatch Dec 5 05:17:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:63723774-424e-4107-9f82-8c494eaae0eb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:17:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:63723774-424e-4107-9f82-8c494eaae0eb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:17:58 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:17:58.681+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '63723774-424e-4107-9f82-8c494eaae0eb' of type subvolume Dec 5 05:17:58 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '63723774-424e-4107-9f82-8c494eaae0eb' of type subvolume Dec 5 05:17:58 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", 
"vol_name": "cephfs", "sub_name": "63723774-424e-4107-9f82-8c494eaae0eb", "force": true, "format": "json"}]: dispatch Dec 5 05:17:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, vol_name:cephfs) < "" Dec 5 05:17:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/63723774-424e-4107-9f82-8c494eaae0eb'' moved to trashcan Dec 5 05:17:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:17:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:63723774-424e-4107-9f82-8c494eaae0eb, vol_name:cephfs) < "" Dec 5 05:17:59 localhost ovn_controller[153000]: 2025-12-05T10:17:59Z|00383|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Dec 5 05:17:59 localhost nova_compute[280228]: 2025-12-05 10:17:59.472 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:17:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 284 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 97 KiB/s wr, 10 op/s Dec 5 05:17:59 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 5 05:17:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:17:59 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:17:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 5 05:17:59 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 5 05:17:59 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 5 05:17:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:59 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 5 05:17:59 localhost 
ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:17:59 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:17:59 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:17:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:00 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:18:00 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 5 05:18:00 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 5 05:18:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:18:01 localhost nova_compute[280228]: 2025-12-05 10:18:01.337 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 285 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 82 KiB/s wr, 9 op/s Dec 5 05:18:02 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch Dec 5 05:18:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:18:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:18:02 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:18:02 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:18:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r 
pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:18:02 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:02 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:03 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:18:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:18:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:18:03 localhost systemd[1]: tmp-crun.bbdlgz.mount: Deactivated successfully. Dec 5 05:18:03 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:03.373 261902 INFO neutron.agent.linux.ip_lib [None req-9dff83f1-15e0-4abb-b611-6a09d1443395 - - - - - -] Device tapf211f6b1-fb cannot be used as it has no MAC address#033[00m Dec 5 05:18:03 localhost podman[324624]: 2025-12-05 10:18:03.371484656 +0000 UTC m=+0.067057734 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125) Dec 5 05:18:03 localhost nova_compute[280228]: 2025-12-05 10:18:03.461 
280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:03 localhost podman[324626]: 2025-12-05 10:18:03.4637131 +0000 UTC m=+0.149969604 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:18:03 localhost kernel: device tapf211f6b1-fb entered promiscuous mode Dec 5 05:18:03 localhost NetworkManager[5960]: [1764929883.4699] manager: (tapf211f6b1-fb): new Generic device (/org/freedesktop/NetworkManager/Devices/62) Dec 5 05:18:03 localhost nova_compute[280228]: 2025-12-05 10:18:03.469 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:03 localhost ovn_controller[153000]: 2025-12-05T10:18:03Z|00384|binding|INFO|Claiming lport f211f6b1-fbad-4ca8-be46-6b7dc3fa4582 for this chassis. Dec 5 05:18:03 localhost ovn_controller[153000]: 2025-12-05T10:18:03Z|00385|binding|INFO|f211f6b1-fbad-4ca8-be46-6b7dc3fa4582: Claiming unknown Dec 5 05:18:03 localhost systemd-udevd[324678]: Network interface NamePolicy= disabled on kernel command line. 
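
[Editor's note] The sequence above is the OVN side of DHCP port provisioning: the tap device appears on the integration bridge, and ovn-controller claims the logical port for this chassis because the port binding requests it. One way to confirm the claim from the Southbound DB is shown below — a sketch only, assuming ovn-sbctl is installed on a node with access to the Southbound database (the logical port UUID is the one from the log):

```python
import subprocess

def chassis_claimed(logical_port: str) -> bool:
    """Check whether the Southbound DB shows the logical port bound to a
    chassis, mirroring ovn-controller's 'Claiming lport' step above."""
    out = subprocess.run(
        ["ovn-sbctl", "--bare", "--columns=chassis", "find",
         "Port_Binding", f"logical_port={logical_port}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return bool(out)  # an empty chassis column means still unbound

print(chassis_claimed("f211f6b1-fbad-4ca8-be46-6b7dc3fa4582"))
```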
Dec 5 05:18:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 285 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 82 KiB/s wr, 9 op/s
Dec 5 05:18:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:03.492 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-dc0f3e29-bb12-4213-a01a-239435c4f0ff', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc0f3e29-bb12-4213-a01a-239435c4f0ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '080c7340f1314c6a8594ab191f7cd011', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae731555-340d-429f-9e6e-542f49bcf292, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f211f6b1-fbad-4ca8-be46-6b7dc3fa4582) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 5 05:18:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:03.493 158820 INFO neutron.agent.ovn.metadata.agent [-] Port f211f6b1-fbad-4ca8-be46-6b7dc3fa4582 in datapath dc0f3e29-bb12-4213-a01a-239435c4f0ff bound to our chassis
Dec 5 05:18:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:03.494 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dc0f3e29-bb12-4213-a01a-239435c4f0ff or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 5 05:18:03 localhost journal[228791]: ethtool ioctl error on tapf211f6b1-fb: No such device
Dec 5 05:18:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:03.494 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[eba6ae99-4242-4f39-8436-99f2628b52be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 5 05:18:03 localhost ovn_controller[153000]: 2025-12-05T10:18:03Z|00386|binding|INFO|Setting lport f211f6b1-fbad-4ca8-be46-6b7dc3fa4582 ovn-installed in OVS
Dec 5 05:18:03 localhost ovn_controller[153000]: 2025-12-05T10:18:03Z|00387|binding|INFO|Setting lport f211f6b1-fbad-4ca8-be46-6b7dc3fa4582 up in Southbound
Dec 5 05:18:03 localhost nova_compute[280228]: 2025-12-05 10:18:03.496 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:18:03 localhost podman[324626]: 2025-12-05 10:18:03.498336619 +0000 UTC m=+0.184593103 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 5 05:18:03 localhost journal[228791]: ethtool ioctl error on tapf211f6b1-fb: No such device
Dec 5 05:18:03 localhost journal[228791]: ethtool ioctl error on tapf211f6b1-fb: No such device
Dec 5 05:18:03 localhost journal[228791]: ethtool ioctl error on tapf211f6b1-fb: No such device
Dec 5 05:18:03 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 05:18:03 localhost journal[228791]: ethtool ioctl error on tapf211f6b1-fb: No such device
Dec 5 05:18:03 localhost journal[228791]: ethtool ioctl error on tapf211f6b1-fb: No such device
Dec 5 05:18:03 localhost podman[324624]: 2025-12-05 10:18:03.516389242 +0000 UTC m=+0.211962300 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 5 05:18:03 localhost nova_compute[280228]: 2025-12-05 10:18:03.519 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:18:03 localhost journal[228791]: ethtool ioctl error on tapf211f6b1-fb: No such device
Dec 5 05:18:03 localhost journal[228791]: ethtool ioctl error on tapf211f6b1-fb: No such device
Dec 5 05:18:03 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:18:03 localhost nova_compute[280228]: 2025-12-05 10:18:03.542 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:18:03 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 5 05:18:03 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:18:03 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:18:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:03.923 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:18:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:03.924 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:18:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:03.925 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:18:04 localhost podman[324753]:
Dec 5 05:18:04 localhost podman[324753]: 2025-12-05 10:18:04.474784307 +0000 UTC m=+0.090723250 container create d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc0f3e29-bb12-4213-a01a-239435c4f0ff, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:18:04 localhost nova_compute[280228]: 2025-12-05 10:18:04.514 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:18:04 localhost systemd[1]: Started libpod-conmon-d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046.scope.
Dec 5 05:18:04 localhost systemd[1]: tmp-crun.HzdwgC.mount: Deactivated successfully.
Dec 5 05:18:04 localhost podman[324753]: 2025-12-05 10:18:04.435628148 +0000 UTC m=+0.051567111 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:18:04 localhost systemd[1]: Started libcrun container.
Dec 5 05:18:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/199083f31ce455f8d564ac6a61552160968f7a0b4a29cd0bce92f68c5eba39a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:18:04 localhost podman[324753]: 2025-12-05 10:18:04.555835258 +0000 UTC m=+0.171774191 container init d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc0f3e29-bb12-4213-a01a-239435c4f0ff, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 5 05:18:04 localhost podman[324753]: 2025-12-05 10:18:04.564398211 +0000 UTC m=+0.180337144 container start d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc0f3e29-bb12-4213-a01a-239435c4f0ff, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:18:04 localhost dnsmasq[324771]: started, version 2.85 cachesize 150
Dec 5 05:18:04 localhost dnsmasq[324771]: DNS service limited to local subnets
Dec 5 05:18:04 localhost dnsmasq[324771]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:18:04 localhost dnsmasq[324771]: warning: no upstream servers configured
Dec 5 05:18:04 localhost dnsmasq-dhcp[324771]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:18:04 localhost dnsmasq[324771]: read /var/lib/neutron/dhcp/dc0f3e29-bb12-4213-a01a-239435c4f0ff/addn_hosts - 0 addresses
Dec 5 05:18:04 localhost dnsmasq-dhcp[324771]: read /var/lib/neutron/dhcp/dc0f3e29-bb12-4213-a01a-239435c4f0ff/host
Dec 5 05:18:04 localhost dnsmasq-dhcp[324771]: read /var/lib/neutron/dhcp/dc0f3e29-bb12-4213-a01a-239435c4f0ff/opts
Dec 5 05:18:04 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:04.759 261902 INFO neutron.agent.dhcp.agent [None req-9ef6dbbd-aaa5-467b-968a-a3d14f9af3be - - - - - -] DHCP configuration for ports {'e39ed28e-f5f0-4c30-8a93-eeb73d5a7d5b'} is completed
Dec 5 05:18:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 285 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 82 KiB/s wr, 9 op/s
Dec 5 05:18:05 localhost nova_compute[280228]: 2025-12-05 10:18:05.800 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:18:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:18:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e246 do_prune osdmap full prune enabled
Dec 5 05:18:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e247 e247: 6 total, 6 up, 6 in
Dec 5 05:18:06 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e247: 6 total, 6 up, 6 in
Dec 5 05:18:06 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:06.290 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:18:06Z, description=, device_id=74ad28f0-aef7-43f5-aefc-afa6e59b9c94, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9b68f2f4-549d-41ca-a4f4-2e59f560cd9e, ip_allocation=immediate, mac_address=fa:16:3e:ab:8d:36, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:18:01Z, description=, dns_domain=, id=dc0f3e29-bb12-4213-a01a-239435c4f0ff, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1828366578-network, port_security_enabled=True, project_id=080c7340f1314c6a8594ab191f7cd011, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14432, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3498, status=ACTIVE, subnets=['533cc24c-4f19-4947-a41e-a45aa4de9409'], tags=[], tenant_id=080c7340f1314c6a8594ab191f7cd011, updated_at=2025-12-05T10:18:02Z, vlan_transparent=None, network_id=dc0f3e29-bb12-4213-a01a-239435c4f0ff, port_security_enabled=False, project_id=080c7340f1314c6a8594ab191f7cd011, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3515, status=DOWN, tags=[], tenant_id=080c7340f1314c6a8594ab191f7cd011, updated_at=2025-12-05T10:18:06Z on network dc0f3e29-bb12-4213-a01a-239435c4f0ff
Dec 5 05:18:06 localhost nova_compute[280228]: 2025-12-05 10:18:06.338 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:18:06 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 5 05:18:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:18:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 5 05:18:06 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 5 05:18:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 5 05:18:06 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 5 05:18:06 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 5 05:18:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:18:06 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 5 05:18:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:18:06 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea
Dec 5 05:18:06 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 5 05:18:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:18:06 localhost dnsmasq[324771]: read /var/lib/neutron/dhcp/dc0f3e29-bb12-4213-a01a-239435c4f0ff/addn_hosts - 1 addresses
Dec 5 05:18:06 localhost dnsmasq-dhcp[324771]: read /var/lib/neutron/dhcp/dc0f3e29-bb12-4213-a01a-239435c4f0ff/host
Dec 5 05:18:06 localhost dnsmasq-dhcp[324771]: read /var/lib/neutron/dhcp/dc0f3e29-bb12-4213-a01a-239435c4f0ff/opts
Dec 5 05:18:06 localhost podman[324789]: 2025-12-05 10:18:06.52401125 +0000 UTC m=+0.068147268 container kill d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc0f3e29-bb12-4213-a01a-239435c4f0ff, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 5 05:18:06 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 5 05:18:06 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 5 05:18:06 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 5 05:18:06 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:06.814 261902 INFO neutron.agent.dhcp.agent [None req-1f0f37f7-baf4-4ac9-9824-f277862daf00 - - - - - -] DHCP configuration for ports {'9b68f2f4-549d-41ca-a4f4-2e59f560cd9e'} is completed
Dec 5 05:18:07 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:07.415 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:18:06Z, description=, device_id=74ad28f0-aef7-43f5-aefc-afa6e59b9c94, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9b68f2f4-549d-41ca-a4f4-2e59f560cd9e, ip_allocation=immediate, mac_address=fa:16:3e:ab:8d:36, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:18:01Z, description=, dns_domain=, id=dc0f3e29-bb12-4213-a01a-239435c4f0ff, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1828366578-network, port_security_enabled=True, project_id=080c7340f1314c6a8594ab191f7cd011, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14432, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3498, status=ACTIVE, subnets=['533cc24c-4f19-4947-a41e-a45aa4de9409'], tags=[], tenant_id=080c7340f1314c6a8594ab191f7cd011, updated_at=2025-12-05T10:18:02Z, vlan_transparent=None, network_id=dc0f3e29-bb12-4213-a01a-239435c4f0ff, port_security_enabled=False, project_id=080c7340f1314c6a8594ab191f7cd011, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3515, status=DOWN, tags=[], tenant_id=080c7340f1314c6a8594ab191f7cd011, updated_at=2025-12-05T10:18:06Z on network dc0f3e29-bb12-4213-a01a-239435c4f0ff
Dec 5 05:18:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 285 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.6 KiB/s rd, 76 KiB/s wr, 11 op/s
Dec 5 05:18:07 localhost dnsmasq[324771]: read /var/lib/neutron/dhcp/dc0f3e29-bb12-4213-a01a-239435c4f0ff/addn_hosts - 1 addresses
Dec 5 05:18:07 localhost dnsmasq-dhcp[324771]: read /var/lib/neutron/dhcp/dc0f3e29-bb12-4213-a01a-239435c4f0ff/host
Dec 5 05:18:07 localhost podman[324826]: 2025-12-05 10:18:07.625868976 +0000 UTC m=+0.057874512 container kill d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc0f3e29-bb12-4213-a01a-239435c4f0ff, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:18:07 localhost dnsmasq-dhcp[324771]: read /var/lib/neutron/dhcp/dc0f3e29-bb12-4213-a01a-239435c4f0ff/opts
Dec 5 05:18:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e247 do_prune osdmap full prune enabled
Dec 5 05:18:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e248 e248: 6 total, 6 up, 6 in
Dec 5 05:18:07 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e248: 6 total, 6 up, 6 in
Dec 5 05:18:07 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:07.900 261902 INFO neutron.agent.dhcp.agent [None req-ad812ad9-7ee3-4309-9d92-6e5671e3f4e2 - - - - - -] DHCP configuration for ports {'9b68f2f4-549d-41ca-a4f4-2e59f560cd9e'} is completed
Dec 5 05:18:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e248 do_prune osdmap full prune enabled
Dec 5 05:18:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e249 e249: 6 total, 6 up, 6 in
Dec 5 05:18:08 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e249: 6 total, 6 up, 6 in
Dec 5 05:18:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 285 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 5.5 KiB/s rd, 47 KiB/s wr, 10 op/s
Dec 5 05:18:09 localhost nova_compute[280228]: 2025-12-05 10:18:09.517 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:18:09 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 5 05:18:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < ""
Dec 5 05:18:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 5 05:18:09 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 5 05:18:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice with tenant 362b693fa42f4124be6d6249e2b9052d
Dec 5 05:18:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0)
Dec 5 05:18:09 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:18:09 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 5 05:18:09 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:18:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < ""
Dec 5 05:18:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:18:11 localhost nova_compute[280228]: 2025-12-05 10:18:11.368 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:18:11 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:18:11 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:18:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 119 KiB/s wr, 45 op/s
Dec 5 05:18:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e249 do_prune osdmap full prune enabled
Dec 5 05:18:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e250 e250: 6 total, 6 up, 6 in
Dec 5 05:18:12 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e250: 6 total, 6 up, 6 in
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.955 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.956 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.960 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0097abe7-0277-47f2-af47-71b531392547', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:18:12.956575', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'b4bd29a0-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.130901637, 'message_signature': '3c3e3602a9bb4da11fb38a2874d8713dfa5d2527f9a97700346476ad45efe412'}]}, 'timestamp': '2025-12-05 10:18:12.961201', '_unique_id': '08e88c8d3a4b44bc8b070edcf7ad841c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.962 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.964 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.964 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a6af426a-a4c1-4d9e-847b-c71e1c2ea128', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:18:12.964231', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'b4bdb6e0-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.130901637, 'message_signature': '02b9c51b66bf11b21c9363b5c4d3643de9ca53b151a6eacceb016bbef35954fc'}]}, 'timestamp': '2025-12-05 10:18:12.964765', '_unique_id': '1098407350174e3aa10f0ec5120889ce'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.965 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.966 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.967 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.967 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '572e542c-dc0f-4b2c-b767-c9ed2c66bc5e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:18:12.967173', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'b4be29fe-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.130901637, 'message_signature': 'e7f55e9c2c7af5ad34ddbfd754620b7c6d09d5dd8b9c58a359c0f401bd4d0487'}]}, 'timestamp': '2025-12-05 10:18:12.967708', '_unique_id': 'e5700920642e454ea874a9cf38e00fe2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:18:12 localhost
ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.968 12 ERROR oslo_messaging.notify.messaging Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.969 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.988 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 19580000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.990 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
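Every "Could not send notification" error below fails through this same chain: ceilometer's notifier asks oslo.messaging for a pooled broker connection, impl_rabbit calls kombu's Connection.ensure_connection(), kombu retries via retry_over_time(), and the socket-level ConnectionRefusedError is re-raised as kombu.exceptions.OperationalError. A minimal sketch that reproduces the wrapped exception against an unreachable broker; the URL is a placeholder, since the real transport_url comes from the service configuration and is not shown in this log:

```python
# Minimal reproduction of the error chain in the traceback above,
# assuming kombu is installed and nothing is listening on the broker port.
import kombu

# Placeholder broker URL; substitute the transport_url from the real config.
conn = kombu.Connection("amqp://guest:guest@127.0.0.1:5672//", connect_timeout=2)
try:
    # The same kombu entry point that oslo.messaging's impl_rabbit driver
    # calls in the traceback (Connection.ensure_connection).
    conn.ensure_connection(max_retries=1)
except kombu.exceptions.OperationalError as exc:
    print("broker unreachable:", exc)          # [Errno 111] Connection refused
    print("caused by:", repr(exc.__cause__))   # the underlying ConnectionRefusedError
```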
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.969 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.988 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 19580000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.990 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4dd6aceb-53c7-4c7a-b503-5bbb102683c7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 19580000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:18:12.969789', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'b4c16d6c-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.162783773, 'message_signature': '3ca7863461a97f12db4ebf6eb18b66278cf4ee686a14eb16532474776956e602'}]}, 'timestamp': '2025-12-05 10:18:12.989107', '_unique_id': '6f9d13cb0bd6436289b679f63a63e8b1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.991 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.991 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.992 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2411ee4e-6b39-410b-8d81-7bb438970988', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:18:12.991405', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'b4c1d9be-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.130901637, 'message_signature': 'b2908202f409ec00d4eb430b1e3cffcc2a3cdf6d16cdceaf48eb96d1e8b75123'}]}, 'timestamp': '2025-12-05 10:18:12.991870', '_unique_id': '2da93914df2646eba614fbf5f5bf0182'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
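[Errno 111] (ECONNREFUSED) means the TCP connection attempt reached the host but nothing was listening on the target port, so the broker process is down or not yet bound, rather than the network being unreachable. A quick way to distinguish the two cases from the affected node; host and port are assumptions for illustration, 5672 being the AMQP default:

```python
# Socket-level probe for the broker port. ECONNREFUSED (errno 111) matches
# the failure in the log; a timeout would instead point at network/firewall.
import errno
import socket

def probe(host: str, port: int = 5672, timeout: float = 2.0) -> None:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            print(f"{host}:{port} is accepting connections")
    except socket.timeout:
        print(f"{host}:{port} timed out (network or firewall problem)")
    except OSError as exc:
        if exc.errno == errno.ECONNREFUSED:  # [Errno 111]
            print(f"{host}:{port} refused the connection (broker not listening)")
        else:
            print(f"{host}:{port} failed: {exc}")

probe("controller.example.com")  # hypothetical broker host
```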
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.993 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.994 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.994 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.995 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '317289d6-5bf6-4666-a801-c41c668b5a42', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:18:12.994109', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'b4c24476-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.130901637, 'message_signature': 'b3e46d5b58c21cc6649961addcb860ba096766647c9a0c10651f6f97b4a98a07'}]}, 'timestamp': '2025-12-05 10:18:12.994597', '_unique_id': 'f0debdb6feb4485da91650630700a2a8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
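The dropped samples mix the two counter_type values visible in the payloads: 'cumulative' counters such as cpu (nanoseconds of CPU time, only ever growing) and 'delta' counters such as network.incoming.bytes.delta (change since the previous poll). A worked sketch of how a consumer would derive a utilisation rate from two cumulative cpu samples; the first value and timestamp come from the cpu sample logged above, the second poll is invented for illustration:

```python
# Rate from two cumulative samples: (v2 - v1) / (t2 - t1).
from datetime import datetime

s1 = {"counter_volume": 19_580_000_000, "timestamp": "2025-12-05T10:18:12.969789"}
s2 = {"counter_volume": 19_640_000_000, "timestamp": "2025-12-05T10:19:12.969789"}  # hypothetical next poll

elapsed_s = (datetime.fromisoformat(s2["timestamp"])
             - datetime.fromisoformat(s1["timestamp"])).total_seconds()
used_s = (s2["counter_volume"] - s1["counter_volume"]) / 1e9  # ns -> s

print(f"cpu utilisation: {used_s / elapsed_s:.2%}")  # ~0.10% of one vCPU here
```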
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.996 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.996 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Payload={'message_id': 'dcf49663-bd41-440c-b3dc-c5aa200a6e5d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:18:12.996694', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'b4c2a7f4-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.130901637, 'message_signature': '564873776519a96c35b96a5be6070d84a5d91d021760080c2219846e4d48f3eb'}]}, 'timestamp': '2025-12-05 10:18:12.997143', '_unique_id': 'c51aa69646b149f1b559668b427a18be'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:18:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:18:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.998 12 ERROR oslo_messaging.notify.messaging Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.999 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:12.999 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.000 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
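The chained traceback above is the complete failure path behind every "Could not send notification" error in this stretch of the log: amqp's TCP transport cannot connect to the RabbitMQ broker (ConnectionRefusedError, errno 111), kombu re-raises it as kombu.exceptions.OperationalError, and oslo.messaging's notifier drops the polled samples at publish time. The same condition can be probed in isolation with kombu; a minimal sketch, assuming a placeholder broker URL (the agent's real transport_url lives in its oslo.messaging configuration, not in this log):

    # Probe sketch, not ceilometer code. BROKER_URL is hypothetical.
    from kombu import Connection
    from kombu.exceptions import OperationalError

    BROKER_URL = "amqp://guest:guest@rabbit.example:5672//"

    def probe_broker(url: str) -> bool:
        conn = Connection(url, connect_timeout=5)
        try:
            # ensure_connection() is the same kombu call seen in the traceback;
            # with nothing listening on the port it raises OperationalError
            # wrapping [Errno 111] Connection refused.
            conn.ensure_connection(max_retries=1)
            return True
        except OperationalError as exc:
            print(f"broker unreachable: {exc}")
            return False
        finally:
            conn.release()

    if __name__ == "__main__":
        print("reachable" if probe_broker(BROKER_URL) else "unreachable")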
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.000 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0b0dde01-787b-425d-b0e2-b9f9413595ae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:18:12.999231', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'b4c30cda-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.130901637, 'message_signature': '6a178d5ffbe6b79d82040defc06550d7eab927e2a387108f2fa435bc36dbfcdf'}]}, 'timestamp': '2025-12-05 10:18:12.999725', '_unique_id': '5fb7da3243c5419c9e6e5907dfb2e18a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.001 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.028 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
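Each failed notification carries a full telemetry.polling envelope: outer fields (message_id, publisher_id, event_type, priority, timestamp) wrapping payload['samples'], a list in which every sample has counter_name, counter_type, counter_unit, counter_volume, resource_id and resource_metadata. The field names below are taken from the Payload= dumps in this log; the flattening helper itself is illustrative, not ceilometer code:

    from typing import Any

    def summarize(notification: dict[str, Any]) -> list[str]:
        """Flatten a telemetry.polling envelope into one line per sample."""
        return [
            f"{s['resource_id']} {s['counter_name']}="
            f"{s['counter_volume']} {s['counter_unit']} ({s['counter_type']})"
            for s in notification["payload"]["samples"]
        ]

    # The disk.device.read.requests envelope that follows would flatten to:
    #   96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda disk.device.read.requests=1272 request (cumulative)
    #   96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb disk.device.read.requests=124 request (cumulative)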
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.030 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '77e89167-9152-40f5-b0d8-35435017ef52', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:18:13.001789', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b4c78ff8-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': '1e3bc79492ccafc0c48f3a60edc5bb5f0241788d7b8fbe052bfcbfc57015f588'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:18:13.001789', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b4c7a4ca-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': '2e8116ec2f4c7c6f5795691462f91a343835f4918559dab07b85e6a6f90436f7'}]}, 'timestamp': '2025-12-05 10:18:13.029811', '_unique_id': '911afbae209d4445b2b0d55c759de904'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.032 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.032 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.032 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.034 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5ebeeb33-7010-4b7c-a4da-466e7298ee38', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:18:13.032309', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b4c817de-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': '18afc2cf36ba08a81285e8e00123cf5769afbeecc9c384de8a6eab83f00a6b37'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:18:13.032309', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b4c8283c-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': '4c4bb7375b30319e6a8bbc00a3e81456511ae03beca9a5db68c30b6bb3790bed'}]}, 'timestamp': '2025-12-05 10:18:13.033167', '_unique_id': '69ca148e16df4f82add1e0858f796d17'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.035 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.035 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.035 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
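The disk.device.* counters in these samples are all 'cumulative': each poll reports a running total for the device, so a downstream consumer derives a rate by differencing two successive polls. A sketch of that arithmetic (illustrative, not ceilometer or gnocchi code), using the samples' monotonic_time values as the clock:

    def rate_per_second(prev_volume: float, prev_time: float,
                        cur_volume: float, cur_time: float) -> float:
        """Per-second rate between two cumulative samples; times in seconds."""
        dt = cur_time - prev_time
        return (cur_volume - prev_volume) / dt if dt > 0 else 0.0

    # e.g. if disk.device.write.requests went 47 -> 77 across a 300 s
    # polling interval: rate_per_second(47, 13008.2, 77, 13308.2) == 0.1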
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.037 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '472aed60-9f60-4a00-9831-9d8deef1cb37', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:18:13.035529', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b4c894de-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': '824e13f7d591f955b144240c395046e8b9a94d2dd5f3120f5c1daeb8e7e1c03d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:18:13.035529', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b4c8a4ce-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': '33140dc3fda4bd761e3806da10f7798f0089244bd14082235bbfb1433e6d5065'}]}, 'timestamp': '2025-12-05 10:18:13.036382', '_unique_id': 'b55d0481b97448aca8ef2fb71be8f2eb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.038 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.038 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.038 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
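memory.usage, by contrast, is a gauge (unit MB), so no differencing is needed; relating it to the flavor's ram field from resource_metadata gives utilisation directly. Using the values visible in the next sample (51.7421875 MB used, m1.small flavor with 512 MB):

    ram_mb = 512            # resource_metadata['flavor']['ram']
    usage_mb = 51.7421875   # memory.usage counter_volume (gauge, MB)
    print(f"{usage_mb / ram_mb:.1%}")  # -> 10.1%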
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '562eb635-d433-47bb-ab2e-7b37a2ed1c5a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:18:13.038616', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'b4c90e6e-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.162783773, 'message_signature': '4b907b68577503fa7d0152f85c4688369bfb54297fa387c669b4339d90a6d06b'}]}, 'timestamp': '2025-12-05 10:18:13.039077', '_unique_id': 'fb4ed0b03c9d471c924c4f7eb861d6fa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:18:13 localhost
ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.039 12 ERROR oslo_messaging.notify.messaging Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.041 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.041 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.041 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.043 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
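The traceback above shows the full failure chain: ceilometer's notifier hands the sample to oslo.messaging, whose rabbit driver asks kombu to establish an AMQP connection; the TCP connect is refused at the socket level, and kombu re-raises the ConnectionRefusedError as kombu.exceptions.OperationalError. A minimal sketch that reproduces the same path, assuming a RabbitMQ-style broker URL (the actual transport URL is not recorded in this log):

from kombu import Connection
from kombu.exceptions import OperationalError

BROKER_URL = "amqp://guest:guest@localhost:5672//"  # hypothetical, not from this log

try:
    with Connection(BROKER_URL, connect_timeout=2) as conn:
        # Same method the impl_rabbit.py:957 frame calls; kombu wraps socket
        # errors in OperationalError when the broker cannot be reached.
        conn.ensure_connection(max_retries=1)
        print("broker reachable")
except OperationalError as exc:
    print(f"broker unreachable: {exc}")  # -> [Errno 111] Connection refused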
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.043 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '61448d13-55af-46d1-9384-4d14eafb3cce', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:18:13.041191', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b4c97390-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': '887833ebd31727cfbad39f874e6c3c3cdc864d9cecee2252f38a8ed6a5f59b51'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:18:13.041191', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b4c983d0-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': 'd75ce183a3145a02dfc5ea9d3a353c75b9c97980e4270cd31bd20861e0464cf2'}]}, 'timestamp': '2025-12-05 10:18:13.042065', '_unique_id': 'd8c1a8e22adb44028e646ccb882c9a57'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.044 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.044 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
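[Errno 111] is ECONNREFUSED: the peer host answered the TCP SYN with a RST, meaning nothing is listening on the AMQP port (the rabbitmq service is down or not yet bound), as opposed to a timeout, which would indicate a filtered or unreachable host. A small probe illustrating the distinction; host and port here are assumptions, since the log does not show the transport URL:

import socket

def amqp_port_open(host: str = "localhost", port: int = 5672, timeout: float = 2.0) -> bool:
    # ConnectionRefusedError (errno 111) means the port is closed on a live host;
    # socket.timeout would instead suggest a firewall or an unreachable host.
    # Other OSErrors (e.g. name resolution failure) are left to propagate.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except (ConnectionRefusedError, socket.timeout):
        return False

print(amqp_port_open())  # hypothetical check against the broker endpoint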
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.045 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '874237c7-c036-4c47-ab28-2bb3545a0f25', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:18:13.044272', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'b4c9eb36-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.130901637, 'message_signature': '7cd28c58c48a129dbdef0e082760016bfb690672c84403bba3888e86a87cabcf'}]}, 'timestamp': '2025-12-05 10:18:13.044741', '_unique_id': '2d642d2027114fa8ac0ac44f559d453e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.047 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.047 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
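Note the counter_type field in the payloads: memory.usage is a gauge (an instantaneous reading), while the network and disk latency meters are cumulative, i.e. monotonically increasing totals that a downstream consumer differences into rates. A sketch of that derivation; the previous volume (5000) and the 30-second poll interval are illustrative values, not taken from this log:

from datetime import datetime

def rate_per_second(prev_volume, curr_volume, prev_ts, curr_ts):
    """Per-second rate from two cumulative samples; None on reset or bad ordering."""
    dt = (curr_ts - prev_ts).total_seconds()
    if dt <= 0 or curr_volume < prev_volume:  # counter reset, e.g. after a reboot
        return None
    return (curr_volume - prev_volume) / dt

t0 = datetime.fromisoformat("2025-12-05T10:17:43.047138")  # assumed previous poll
t1 = datetime.fromisoformat("2025-12-05T10:18:13.047138")  # timestamp of the sample above
print(rate_per_second(5000, 6809, t0, t1))  # -> 60.3 bytes/s over the assumed interval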
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.048 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6293f3c6-8c2f-4c1a-9ccd-79ae521df4e2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:18:13.047138', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'b4ca5f8a-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.130901637, 'message_signature': '2dd5b870eb9b9888c8a08762f2b8c0631e730776d99ad8e186562f570428467b'}]}, 'timestamp': '2025-12-05 10:18:13.047823', '_unique_id': '0484f163b03e4850a6f523968a9a4177'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.049 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.060 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.061 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
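Each sample also carries a message_signature: an HMAC-SHA256 over the sample fields, keyed with the telemetry shared secret, so a consumer can verify the sample was not tampered with in transit. A hedged sketch of the general scheme only; the exact serialization and field ordering live in ceilometer.publisher.utils and may differ from this illustration:

import hashlib
import hmac

def compute_signature(sample: dict, secret: bytes) -> str:
    # Illustrative: HMAC over the sorted sample fields, excluding the
    # 'message_signature' field itself. Not guaranteed to match Ceilometer's
    # exact byte-level serialization.
    h = hmac.new(secret, digestmod=hashlib.sha256)
    for name, value in sorted(sample.items()):
        if name == 'message_signature':
            continue
        h.update(str((name, value)).encode('utf-8'))
    return h.hexdigest()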
Payload={'message_id': 'e018e7bf-a9c2-49ab-8182-850bee97a89f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:18:13.050046', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b4cc7626-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.224360329, 'message_signature': 'c591a3b60dcb9324a54c93fa807fcc4d6f5e2ec5596e0468e884a87640ad35ca'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:18:13.050046', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b4cc880a-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.224360329, 'message_signature': 'dc7cf0714a8765df5fc3f9e66d891277e6ddf70f8dddf495589ba0670902ee97'}]}, 'timestamp': '2025-12-05 10:18:13.061837', '_unique_id': 'c66bf4a7559b43ba90932fe4f1a4567e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.062 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.063 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.064 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.064 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.065 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4f79be78-e62d-4f53-bc5a-345c63da9ee1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:18:13.064088', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b4ccf1b4-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.224360329, 'message_signature': 'b74d52e726411185acabd71dc5021064aa6ee79ca77dce3280dd5f19f7e7a3ea'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:18:13.064088', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b4cd019a-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.224360329, 'message_signature': '98c49a3c51c330423c31556b280f347a2da413917c128a917d7a2d47ad66f381'}]}, 'timestamp': '2025-12-05 10:18:13.064941', '_unique_id': 'd0f08ae9992249d19b22c200923574c5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
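Every traceback in this stretch bottoms out in the same ConnectionRefusedError from self.sock.connect(sa): nothing is listening on the RabbitMQ endpoint the agent dials, so each repeated traceback below is byte-identical and only the timestamps advance. A minimal reachability probe in Python, as a sketch: the host below is hypothetical, 5672 is only the AMQP default, and the real endpoint comes from transport_url in ceilometer.conf, which this log does not show.

    # Minimal TCP probe for the broker endpoint (sketch; host is an assumption,
    # not taken from this log; read the real value from transport_url).
    import socket

    BROKER = ("rabbit.example.local", 5672)  # hypothetical host, default AMQP port

    try:
        with socket.create_connection(BROKER, timeout=5):
            print("TCP connect to %s:%d succeeded" % BROKER)
    except OSError as exc:  # ConnectionRefusedError is an OSError subclass
        print("TCP connect failed: %s" % exc)  # expect [Errno 111] while the broker is down

A refused connection (as opposed to a timeout) means the host is up but no process is bound to the port, which points at the rabbitmq-server service rather than the network path.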
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.066 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.067 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.068 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a5959c64-b2ae-44b3-b3a2-af8302cf5f87', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:18:13.067119', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'b4cd68ba-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.130901637, 'message_signature': '3c54caeb160d8ee1e908cd5920386c96975270cfda7b9272220a7ad49a170282'}]}, 'timestamp': '2025-12-05 10:18:13.067617', '_unique_id': '4724f68b8cae43e5b829c579f7939959'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.069 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.069 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.070 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.070 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.071 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '88d81178-4dc8-46b3-9d71-0732efc037ba', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:18:13.070015', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b4cdd80e-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.224360329, 'message_signature': 'c21c4da6f88c42cb872fa5df011afa138c8951c4bd91f4fab215f1afd3eb0578'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:18:13.070015', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b4cdeab0-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.224360329, 'message_signature': '218e1f9e68c60f274b57b7adc3b774ae85b30d6e1ed7fa2b875af0acd09e67eb'}]}, 'timestamp': '2025-12-05 10:18:13.070945', '_unique_id': 'dfa0bb16d79f4635adda21ac17bdb531'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
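The pattern in each cycle is the same: polling itself succeeds (the DEBUG _stats_to_sample records carry real volumes), and only the publish step fails, so each lost payload is identified by its "Could not send notification" record. A hypothetical triage helper for extracts like this one; the regex keys off the record layout above and is not a ceilometer or oslo.messaging API.

    # Hypothetical helper: tally failed notifications per meter in a journal
    # extract shaped like the records above.
    import re
    from collections import Counter

    FAILED = re.compile(r"Could not send notification .*?'counter_name': '([^']+)'")

    def failed_meters(log_text: str) -> Counter:
        """Count failures by the first counter_name in each failed payload."""
        return Counter(FAILED.findall(log_text))

    # e.g. failed_meters(open("/var/log/messages").read())
    # -> Counter({'disk.device.capacity': 1, 'disk.device.usage': 1, ...})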
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.072 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.073 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.073 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.074 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4781a650-25f0-4e3e-94b5-1f23a6a8d495', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:18:13.073118', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b4ce5360-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': 'afeaa0487d70b7908a3ccc68d40c9ccb4a901e864f2fec23dd9733462e48af2b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:18:13.073118', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b4ce63d2-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': 'ce82b2485e2dd0499be18607fc77c9b4484561173d6319938ece3f8ef19574a6'}]}, 'timestamp': '2025-12-05 10:18:13.074015', '_unique_id': '623c4ecb6a894019a6f0e2bd49abe8b6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.076 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.076 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.076 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2cbdba43-3fbc-4a06-9980-d313282f68d9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:18:13.076358', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b4ced07e-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': 'b040ec9ef5a5792d60f72a9bc527580518d49605019cc43286cc933f696f897b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:18:13.076358', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b4cee0b4-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': '9d40c5057e8a23b84bc5b53aa5d34f8bac580dbe7e229dd39727260fea0234cf'}]}, 'timestamp': '2025-12-05 10:18:13.077211', '_unique_id': 'd01bc773bfaf4119901f84f22162d1c8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Payload={'message_id': '2cbdba43-3fbc-4a06-9980-d313282f68d9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:18:13.076358', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b4ced07e-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': 'b040ec9ef5a5792d60f72a9bc527580518d49605019cc43286cc933f696f897b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:18:13.076358', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b4cee0b4-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13008.17608866, 'message_signature': '9d40c5057e8a23b84bc5b53aa5d34f8bac580dbe7e229dd39727260fea0234cf'}]}, 'timestamp': '2025-12-05 10:18:13.077211', '_unique_id': 'd01bc773bfaf4119901f84f22162d1c8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging Dec 5 05:18:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:18:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:18:13.078 12 ERROR oslo_messaging.notify.messaging Dec 5 05:18:13 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:18:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 5 05:18:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:18:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Dec 5 05:18:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 
172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:18:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:18:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e250 do_prune osdmap full prune enabled Dec 5 05:18:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:13 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:18:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:13 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:18:13 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:18:13 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:18:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 71 KiB/s wr, 36 op/s Dec 5 05:18:13 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:18:13 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:18:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e251 e251: 6 total, 6 up, 6 in Dec 5 05:18:13 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e251: 6 total, 6 up, 6 in Dec 5 05:18:14 localhost nova_compute[280228]: 2025-12-05 10:18:14.277 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
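The repeated oslo_messaging.notify.messaging tracebacks above all reduce to one condition: nothing is accepting connections on the RabbitMQ endpoint, so the AMQP socket connect raises ConnectionRefusedError and kombu's _reraise_as_library_errors context manager (visible in the stack) re-raises it as kombu.exceptions.OperationalError; ceilometer then logs the full payload and the polled samples are not delivered on that attempt. A minimal sketch of that wrapping, using a placeholder broker URL rather than this node's actual transport_url:

```python
# Sketch of the failure mode chained through the traceback above: the AMQP
# transport's socket connect fails, and kombu re-raises the transport error
# as kombu.exceptions.OperationalError (see _reraise_as_library_errors in
# kombu/connection.py in the stack). Broker URL below is a placeholder.
import kombu
from kombu.exceptions import OperationalError

conn = kombu.Connection('amqp://guest:guest@127.0.0.1:5672//',
                        connect_timeout=2)
try:
    # Retries per its policy, then wraps the last low-level error.
    conn.ensure_connection(max_retries=1)
except OperationalError as exc:
    print('notification would be dropped:', exc)  # "[Errno 111] Connection refused"
finally:
    conn.release()
```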
Dec 5 05:18:14 localhost podman[324847]: 2025-12-05 10:18:14.390127074 +0000 UTC m=+0.074507072 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container) Dec 5 05:18:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
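The `Started /usr/bin/podman healthcheck run <id>` units above are transient systemd services that periodically execute the healthcheck command configured on the container (here `/openstack/healthcheck`, per the config_data labels). A sketch of the equivalent one-shot invocation, with the container ID copied from the log; exit status 0 means healthy:

```python
# Sketch: run a container's configured healthcheck once, as the transient
# systemd unit above does. Exit status 0 => healthy, nonzero => unhealthy.
# CID is the openstack_network_exporter container from this log.
import subprocess

CID = 'd2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0'
res = subprocess.run(['podman', 'healthcheck', 'run', CID],
                     capture_output=True, text=True)
print('healthy' if res.returncode == 0 else f'unhealthy ({res.returncode})')
```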
Dec 5 05:18:14 localhost podman[324847]: 2025-12-05 10:18:14.429382866 +0000 UTC m=+0.113762854 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 5 05:18:14 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
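Each health_status/exec_died pair above is one healthcheck execution: the exec session exits, systemd deactivates the transient unit, and podman records the result on the container. One way to query the aggregate state afterwards, assuming a recent podman 4.x where `ps` supports the health filter:

```python
# Sketch: list containers whose most recent healthcheck failed. The
# "--filter health=..." values include "healthy" and "unhealthy" in recent
# podman releases (an assumption; see podman-ps(1) for your version).
import subprocess

res = subprocess.run(
    ['podman', 'ps', '--filter', 'health=unhealthy',
     '--format', '{{.Names}} {{.Status}}'],
    capture_output=True, text=True)
print(res.stdout.strip() or 'no unhealthy containers')
```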
Dec 5 05:18:14 localhost podman[324867]: 2025-12-05 10:18:14.486632539 +0000 UTC m=+0.076693829 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 5 05:18:14 localhost podman[324867]: 2025-12-05 10:18:14.502986609 +0000 UTC m=+0.093047899 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd) Dec 5 05:18:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d8bed738-7e8a-4e2b-9281-ea986a53728d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:18:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d8bed738-7e8a-4e2b-9281-ea986a53728d, vol_name:cephfs) < "" Dec 5 05:18:14 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:18:14 localhost nova_compute[280228]: 2025-12-05 10:18:14.520 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d8bed738-7e8a-4e2b-9281-ea986a53728d/.meta.tmp' Dec 5 05:18:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d8bed738-7e8a-4e2b-9281-ea986a53728d/.meta.tmp' to config b'/volumes/_nogroup/d8bed738-7e8a-4e2b-9281-ea986a53728d/.meta' Dec 5 05:18:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d8bed738-7e8a-4e2b-9281-ea986a53728d, vol_name:cephfs) < "" Dec 5 05:18:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d8bed738-7e8a-4e2b-9281-ea986a53728d", "format": "json"}]: dispatch Dec 5 05:18:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d8bed738-7e8a-4e2b-9281-ea986a53728d, vol_name:cephfs) < "" Dec 5 05:18:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d8bed738-7e8a-4e2b-9281-ea986a53728d, vol_name:cephfs) < "" Dec 5 05:18:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:18:14 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:18:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:18:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:18:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
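The mgr volumes module calls above (`fs subvolume create` with size 1073741824, namespace_isolated and mode 0755, followed by `fs subvolume getpath`) are the Manila CephFS driver provisioning a 1 GiB share as `client.openstack`. A sketch of the same sequence through the ceph CLI, assuming a keyring for `--id openstack` is available on the host; the subvolume name is copied from the log:

```python
# Sketch of the provisioning sequence in the mgr audit log: create an
# isolated 1 GiB subvolume, then resolve its data path. Requires a reachable
# cluster and credentials for --id openstack (assumptions here).
import subprocess

SUB = 'd8bed738-7e8a-4e2b-9281-ea986a53728d'  # subvolume name from the log

subprocess.run(
    ['ceph', '--id', 'openstack', 'fs', 'subvolume', 'create', 'cephfs', SUB,
     '--size', '1073741824', '--namespace-isolated', '--mode', '0755'],
    check=True)

path = subprocess.run(
    ['ceph', '--id', 'openstack', 'fs', 'subvolume', 'getpath', 'cephfs', SUB],
    capture_output=True, text=True, check=True).stdout.strip()
print(path)  # e.g. /volumes/_nogroup/<SUB>/<internal uuid>
```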
Dec 5 05:18:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:18:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 64 KiB/s wr, 33 op/s Dec 5 05:18:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e251 do_prune osdmap full prune enabled Dec 5 05:18:15 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e252 e252: 6 total, 6 up, 6 in Dec 5 05:18:15 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e252: 6 total, 6 up, 6 in Dec 5 05:18:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:18:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:18:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:18:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch Dec 5 05:18:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:18:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 5 05:18:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:18:16 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:18:16 localhost nova_compute[280228]: 2025-12-05 10:18:16.403 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:18:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": 
"client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 05:18:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:18:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:18:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 05:18:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:18:16 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:18:16 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:16 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:16 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:18:16 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:18:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:18:17 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3226208543' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:18:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 50 KiB/s wr, 67 op/s Dec 5 05:18:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:18:17 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:18:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:18:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:18:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:18:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:18:17 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev cfc88458-fd9d-4ca7-b3de-4a1401cb15e1 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:18:17 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev cfc88458-fd9d-4ca7-b3de-4a1401cb15e1 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:18:17 localhost ceph-mgr[286454]: [progress INFO root] Completed event cfc88458-fd9d-4ca7-b3de-4a1401cb15e1 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:18:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:18:17 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:18:17 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:18:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d8bed738-7e8a-4e2b-9281-ea986a53728d", "format": "json"}]: dispatch Dec 5 05:18:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d8bed738-7e8a-4e2b-9281-ea986a53728d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:18:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d8bed738-7e8a-4e2b-9281-ea986a53728d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:18:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:18:18.265+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd8bed738-7e8a-4e2b-9281-ea986a53728d' of type subvolume Dec 5 05:18:18 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is 
not allowed on subvolume 'd8bed738-7e8a-4e2b-9281-ea986a53728d' of type subvolume Dec 5 05:18:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d8bed738-7e8a-4e2b-9281-ea986a53728d", "force": true, "format": "json"}]: dispatch Dec 5 05:18:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d8bed738-7e8a-4e2b-9281-ea986a53728d, vol_name:cephfs) < "" Dec 5 05:18:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d8bed738-7e8a-4e2b-9281-ea986a53728d'' moved to trashcan Dec 5 05:18:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:18:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d8bed738-7e8a-4e2b-9281-ea986a53728d, vol_name:cephfs) < "" Dec 5 05:18:18 localhost nova_compute[280228]: 2025-12-05 10:18:18.351 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e252 do_prune osdmap full prune enabled Dec 5 05:18:18 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:18:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e253 e253: 6 total, 6 up, 6 in Dec 5 05:18:18 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e253: 6 total, 6 up, 6 in Dec 5 05:18:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 49 KiB/s wr, 66 op/s Dec 5 05:18:19 localhost nova_compute[280228]: 2025-12-05 10:18:19.523 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e253 do_prune osdmap full prune enabled Dec 5 05:18:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e254 e254: 6 total, 6 up, 6 in Dec 5 05:18:19 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e254: 6 total, 6 up, 6 in Dec 5 05:18:19 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:18:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 5 05:18:19 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:18:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": 
"auth rm", "entity": "client.alice"} v 0) Dec 5 05:18:19 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:18:19 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:18:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:19 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:18:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:18:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:18:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:19 localhost podman[239519]: time="2025-12-05T10:18:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:18:19 localhost podman[239519]: @ - - [05/Dec/2025:10:18:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157935 "" "Go-http-client/1.1" Dec 5 05:18:19 localhost podman[239519]: @ - - [05/Dec/2025:10:18:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19755 "" "Go-http-client/1.1" Dec 5 05:18:20 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:18:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:18:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:18:20 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:18:20 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:18:20 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:18:20 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:18:21 localhost ceph-mon[292820]: 
mon.np0005546419@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:18:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e254 do_prune osdmap full prune enabled Dec 5 05:18:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e255 e255: 6 total, 6 up, 6 in Dec 5 05:18:21 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e255: 6 total, 6 up, 6 in Dec 5 05:18:21 localhost dnsmasq[324771]: read /var/lib/neutron/dhcp/dc0f3e29-bb12-4213-a01a-239435c4f0ff/addn_hosts - 0 addresses Dec 5 05:18:21 localhost dnsmasq-dhcp[324771]: read /var/lib/neutron/dhcp/dc0f3e29-bb12-4213-a01a-239435c4f0ff/host Dec 5 05:18:21 localhost dnsmasq-dhcp[324771]: read /var/lib/neutron/dhcp/dc0f3e29-bb12-4213-a01a-239435c4f0ff/opts Dec 5 05:18:21 localhost podman[324991]: 2025-12-05 10:18:21.354485829 +0000 UTC m=+0.059073579 container kill d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc0f3e29-bb12-4213-a01a-239435c4f0ff, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:18:21 localhost nova_compute[280228]: 2025-12-05 10:18:21.450 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 86 KiB/s rd, 132 KiB/s wr, 124 op/s Dec 5 05:18:21 localhost kernel: device tapf211f6b1-fb left promiscuous mode Dec 5 05:18:21 localhost ovn_controller[153000]: 2025-12-05T10:18:21Z|00388|binding|INFO|Releasing lport f211f6b1-fbad-4ca8-be46-6b7dc3fa4582 from this chassis (sb_readonly=0) Dec 5 05:18:21 localhost nova_compute[280228]: 2025-12-05 10:18:21.560 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:21 localhost ovn_controller[153000]: 2025-12-05T10:18:21Z|00389|binding|INFO|Setting lport f211f6b1-fbad-4ca8-be46-6b7dc3fa4582 down in Southbound Dec 5 05:18:21 localhost nova_compute[280228]: 2025-12-05 10:18:21.586 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:21 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:21.590 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-dc0f3e29-bb12-4213-a01a-239435c4f0ff', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc0f3e29-bb12-4213-a01a-239435c4f0ff', 'neutron:port_capabilities': '', 
'neutron:port_name': '', 'neutron:project_id': '080c7340f1314c6a8594ab191f7cd011', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae731555-340d-429f-9e6e-542f49bcf292, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f211f6b1-fbad-4ca8-be46-6b7dc3fa4582) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:18:21 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:21.592 158820 INFO neutron.agent.ovn.metadata.agent [-] Port f211f6b1-fbad-4ca8-be46-6b7dc3fa4582 in datapath dc0f3e29-bb12-4213-a01a-239435c4f0ff unbound from our chassis#033[00m Dec 5 05:18:21 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:21.595 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dc0f3e29-bb12-4213-a01a-239435c4f0ff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:18:21 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:21.596 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[58deaf6c-a16f-4897-9325-e68d4503a218]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:18:22 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e255 do_prune osdmap full prune enabled Dec 5 05:18:22 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e256 e256: 6 total, 6 up, 6 in Dec 5 05:18:22 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e256: 6 total, 6 up, 6 in Dec 5 05:18:22 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:18:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:18:22 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:18:22 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:18:22 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice_bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:18:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:18:23 
localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:18:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 101 KiB/s wr, 75 op/s Dec 5 05:18:23 localhost ovn_controller[153000]: 2025-12-05T10:18:23Z|00390|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:18:23 localhost nova_compute[280228]: 2025-12-05 10:18:23.607 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:23 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:18:23 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:23 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e256 do_prune osdmap full prune enabled Dec 5 05:18:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e257 e257: 6 total, 6 up, 6 in Dec 5 05:18:24 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e257: 6 total, 6 up, 6 in Dec 5 05:18:24 localhost dnsmasq[324771]: exiting on receipt of SIGTERM Dec 5 05:18:24 localhost systemd[1]: libpod-d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046.scope: Deactivated successfully. 
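The `auth get-or-create` the mgr dispatches above is what `fs subvolume authorize` expands to: a single cephx identity whose mds cap is pinned to the subvolume path, whose osd cap is pinned to the data pool's rados namespace (hence the earlier namespace_isolated create), and read-only mon access. A sketch replaying that exact mon command through librados; conffile and the admin identity are assumptions, the caps are copied verbatim from the log:

```python
# Sketch: issue the mon command from the audit log above via librados.
# Assumes /etc/ceph/ceph.conf and an admin keyring on this host.
import json
import rados

cmd = json.dumps({
    "prefix": "auth get-or-create",
    "entity": "client.alice_bob",
    "caps": [
        "mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea",
        "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363",
        "mon", "allow r",
    ],
    "format": "json",
})
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, outbuf.decode() or outs)
finally:
    cluster.shutdown()
```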
Dec 5 05:18:24 localhost podman[325032]: 2025-12-05 10:18:24.164773225 +0000 UTC m=+0.060322218 container kill d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc0f3e29-bb12-4213-a01a-239435c4f0ff, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 5 05:18:24 localhost podman[325044]: 2025-12-05 10:18:24.243521716 +0000 UTC m=+0.066122285 container died d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc0f3e29-bb12-4213-a01a-239435c4f0ff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:18:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046-userdata-shm.mount: Deactivated successfully. Dec 5 05:18:24 localhost systemd[1]: var-lib-containers-storage-overlay-199083f31ce455f8d564ac6a61552160968f7a0b4a29cd0bce92f68c5eba39a5-merged.mount: Deactivated successfully. Dec 5 05:18:24 localhost podman[325044]: 2025-12-05 10:18:24.287138932 +0000 UTC m=+0.109739461 container cleanup d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc0f3e29-bb12-4213-a01a-239435c4f0ff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:18:24 localhost systemd[1]: libpod-conmon-d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046.scope: Deactivated successfully. 
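Between the SIGTERM to dnsmasq and the "container remove" entry that follows, podman walks its teardown state machine for the qdhcp container: kill, died, cleanup (plus the overlay and shm unmounts systemd reports), then remove. The sequence can be replayed from podman's event log; a sketch, assuming a podman new enough for "events --stream=false", with the JSON field names ("Time", "Status") treated as an assumption about current podman output:

import json
import subprocess

# Container name taken from the log entries above.
name = "neutron-dnsmasq-qdhcp-dc0f3e29-bb12-4213-a01a-239435c4f0ff"
out = subprocess.run(
    ["podman", "events", "--stream=false", "--format", "json",
     "--filter", f"container={name}", "--since", "10m"],
    check=True, capture_output=True, text=True,
).stdout

# One JSON object per line; expect kill -> died -> cleanup -> remove,
# matching the journal entries around this point.
for line in out.splitlines():
    ev = json.loads(line)
    print(ev.get("Time"), ev.get("Status"))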
Dec 5 05:18:24 localhost podman[325048]: 2025-12-05 10:18:24.316195971 +0000 UTC m=+0.127383241 container remove d823b3040cd40ede9bd4053fc325f4d8cc3d42bbdd67d27a96ed4a616a7a2046 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc0f3e29-bb12-4213-a01a-239435c4f0ff, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 05:18:24 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:24.406 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:18:24 localhost nova_compute[280228]: 2025-12-05 10:18:24.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:18:24 localhost nova_compute[280228]: 2025-12-05 10:18:24.524 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:18:24 localhost nova_compute[280228]: 2025-12-05 10:18:24.525 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:18:24 localhost nova_compute[280228]: 2025-12-05 10:18:24.526 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:18:24 localhost nova_compute[280228]: 2025-12-05 10:18:24.526 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:18:24 localhost nova_compute[280228]: 2025-12-05 10:18:24.528 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:18:24 localhost nova_compute[280228]: 2025-12-05 10:18:24.548 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:24 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:24.654 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:18:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", 
"format": "json"} v 0) Dec 5 05:18:24 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/148606033' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:18:24 localhost nova_compute[280228]: 2025-12-05 10:18:24.986 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.056 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.057 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:18:25 localhost systemd[1]: run-netns-qdhcp\x2ddc0f3e29\x2dbb12\x2d4213\x2da01a\x2d239435c4f0ff.mount: Deactivated successfully. Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.302 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.305 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11081MB free_disk=41.70021057128906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, 
{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.305 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.306 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.392 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.393 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.393 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.450 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:18:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 85 KiB/s wr, 62 op/s Dec 5 05:18:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:18:25 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2805542685' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.933 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.940 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.955 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.956 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:18:25 localhost nova_compute[280228]: 2025-12-05 10:18:25.957 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:18:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:18:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:18:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:18:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:18:26 localhost systemd[1]: tmp-crun.px2rO1.mount: Deactivated successfully. 
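The inventory report above is what placement schedules against: for each resource class the usable capacity is (total - reserved) * allocation_ratio, which is why the tracker reports only 7 free vCPUs locally while placement will still accept up to 128 VCPU worth of allocations. Checked against the logged values (plain arithmetic, no Nova internals involved):

# Capacity placement derives from the logged inventory of provider
# 6764eb33-a0ac-428c-a232-bb5bf7a96ee3: (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g}")
# VCPU: 128, MEMORY_MB: 15226, DISK_GB: 40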
Dec 5 05:18:26 localhost podman[325119]: 2025-12-05 10:18:26.201222607 +0000 UTC m=+0.079150474 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 05:18:26 localhost podman[325117]: 2025-12-05 10:18:26.26372371 +0000 UTC m=+0.140013338 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:18:26 localhost podman[325118]: 2025-12-05 10:18:26.225369087 +0000 UTC m=+0.100276542 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 5 05:18:26 localhost podman[325119]: 2025-12-05 10:18:26.288692315 +0000 UTC m=+0.166620182 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 05:18:26 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
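Each "Started /usr/bin/podman healthcheck run <id>" unit above is a one-shot probe: systemd spawns podman, podman execs the container's configured healthcheck command ("/openstack/healthcheck ..." in the config_data), and the health_status=healthy annotation on the resulting exec events reflects its exit status. The probe can be driven by hand the same way; a sketch, with the container ID taken from the logged unit name:

import subprocess

# Run one health probe exactly as the transient unit does; podman exits 0
# when the check passes (ID is the ceilometer_agent_compute container above).
cid = "6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a"
probe = subprocess.run(["podman", "healthcheck", "run", cid])
print("healthy" if probe.returncode == 0 else f"unhealthy (rc={probe.returncode})")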
Dec 5 05:18:26 localhost podman[325118]: 2025-12-05 10:18:26.307301025 +0000 UTC m=+0.182208550 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:18:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:18:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:26 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
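The deauthorize dispatched just above is the inverse of the earlier grant: the mgr resolves the entity, drops it with "auth rm" (the audit entries that follow), and manila then issues "fs subvolume evict" so any client still mounted with that identity is disconnected from the subvolume path. Replayed as a sketch with the same illustrative CLI helper, names taken from the log:

import subprocess

def ceph(*args):
    # Illustrative helper: run a ceph CLI command and return its stdout.
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout

sub = "66fc337c-1267-4a35-81f5-115366d33363"
# Revoke the grant; the mgr performs the `auth rm` audited below.
ceph("fs", "subvolume", "deauthorize", "cephfs", sub, "alice_bob")
# Then force any surviving client sessions off the subvolume's path.
ceph("fs", "subvolume", "evict", "cephfs", sub, "alice_bob")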
Dec 5 05:18:26 localhost podman[325117]: 2025-12-05 10:18:26.328525995 +0000 UTC m=+0.204815563 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:18:26 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 05:18:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:18:26 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:18:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Dec 5 05:18:26 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:18:26 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:18:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:18:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:18:26 localhost nova_compute[280228]: 2025-12-05 10:18:26.488 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 
05:18:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:26 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:18:26 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:18:26 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:18:26 localhost nova_compute[280228]: 2025-12-05 10:18:26.958 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:18:27 localhost openstack_network_exporter[241668]: ERROR 10:18:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:18:27 localhost openstack_network_exporter[241668]: ERROR 10:18:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:18:27 localhost openstack_network_exporter[241668]: ERROR 10:18:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:18:27 localhost openstack_network_exporter[241668]: ERROR 10:18:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:18:27 localhost openstack_network_exporter[241668]: Dec 5 05:18:27 localhost openstack_network_exporter[241668]: ERROR 10:18:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:18:27 localhost openstack_network_exporter[241668]: Dec 5 05:18:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 66 KiB/s wr, 101 op/s Dec 5 05:18:27 localhost nova_compute[280228]: 2025-12-05 10:18:27.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:18:27 localhost ovn_controller[153000]: 2025-12-05T10:18:27Z|00391|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:18:27 localhost nova_compute[280228]: 2025-12-05 10:18:27.684 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:29 localhost neutron_sriov_agent[254996]: 2025-12-05 10:18:29.015 2 INFO neutron.agent.securitygroups_rpc [req-32c68d04-fa19-4a2e-82f4-af84e05aac5a req-b8f049fe-8e8e-43d6-8b20-d70007e03bd7 0b795e7702e342d9821a3667644be5b0 3554a89b305c449f9fd292eca5647512 - - default default] Security group member updated ['811851ce-aefb-4b50-bb3d-fd5f8bc97e90']#033[00m Dec 5 05:18:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 286 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 25 
KiB/s rd, 50 KiB/s wr, 40 op/s Dec 5 05:18:29 localhost nova_compute[280228]: 2025-12-05 10:18:29.529 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:29 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch Dec 5 05:18:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:18:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:18:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:18:29 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice_bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:18:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:18:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:29 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:18:29 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:29 
localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:18:30 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:18:30 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, vol_name:cephfs) < "" Dec 5 05:18:30 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe/.meta.tmp' Dec 5 05:18:30 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe/.meta.tmp' to config b'/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe/.meta' Dec 5 05:18:30 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, vol_name:cephfs) < "" Dec 5 05:18:30 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "format": "json"}]: dispatch Dec 5 05:18:30 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, vol_name:cephfs) < "" Dec 5 05:18:30 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, vol_name:cephfs) < "" Dec 5 05:18:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:18:30 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:18:30 localhost nova_compute[280228]: 2025-12-05 10:18:30.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:18:30 localhost nova_compute[280228]: 2025-12-05 10:18:30.507 280232 DEBUG nova.compute.manager 
[None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:18:30 localhost nova_compute[280228]: 2025-12-05 10:18:30.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:18:30 localhost nova_compute[280228]: 2025-12-05 10:18:30.581 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:18:30 localhost nova_compute[280228]: 2025-12-05 10:18:30.581 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:18:30 localhost nova_compute[280228]: 2025-12-05 10:18:30.582 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:18:30 localhost nova_compute[280228]: 2025-12-05 10:18:30.582 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:18:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:18:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e257 do_prune osdmap full prune enabled Dec 5 05:18:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e258 e258: 6 total, 6 up, 6 in Dec 5 05:18:31 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e258: 6 total, 6 up, 6 in Dec 5 05:18:31 localhost nova_compute[280228]: 2025-12-05 10:18:31.391 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, 
"qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:18:31 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. Dec 5 05:18:31 localhost nova_compute[280228]: 2025-12-05 10:18:31.412 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:18:31 localhost nova_compute[280228]: 2025-12-05 10:18:31.412 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:18:31 localhost nova_compute[280228]: 2025-12-05 10:18:31.413 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:18:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 232 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 108 KiB/s wr, 71 op/s Dec 5 05:18:31 localhost nova_compute[280228]: 2025-12-05 10:18:31.539 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e258 do_prune osdmap full prune enabled Dec 5 05:18:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e259 e259: 6 total, 6 up, 6 in Dec 5 05:18:32 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e259: 6 total, 6 up, 6 in Dec 5 05:18:32 localhost nova_compute[280228]: 2025-12-05 10:18:32.411 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:18:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:32.979 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:18:32 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:32.980 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:18:32 localhost nova_compute[280228]: 2025-12-05 10:18:32.982 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on 
fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:33 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:18:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:18:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:18:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Dec 5 05:18:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:18:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e259 do_prune osdmap full prune enabled Dec 5 05:18:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:18:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e260 e260: 6 total, 6 up, 6 in Dec 5 05:18:33 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:18:33 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:18:33 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e260: 6 total, 6 up, 6 in Dec 5 05:18:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:33 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:18:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:33 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:18:33 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:18:33 localhost ceph-mgr[286454]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:33 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "auth_id": "tempest-cephx-id-1284922822", "tenant_id": "713485f6825d4fbb96a3a6dfd0cac4e0", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:18:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1284922822, format:json, prefix:fs subvolume authorize, sub_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, tenant_id:713485f6825d4fbb96a3a6dfd0cac4e0, vol_name:cephfs) < "" Dec 5 05:18:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1284922822", "format": "json"} v 0) Dec 5 05:18:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1284922822", "format": "json"} : dispatch Dec 5 05:18:33 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID tempest-cephx-id-1284922822 with tenant 713485f6825d4fbb96a3a6dfd0cac4e0 Dec 5 05:18:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 78 KiB/s wr, 63 op/s Dec 5 05:18:33 localhost nova_compute[280228]: 2025-12-05 10:18:33.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:18:33 localhost nova_compute[280228]: 2025-12-05 10:18:33.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:18:33 localhost nova_compute[280228]: 2025-12-05 10:18:33.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:18:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1284922822", "caps": ["mds", "allow rw path=/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe/15d5ebce-8438-4ec7-a76c-c06816e0fb9d", "osd", "allow rw pool=manila_data namespace=fsvolumens_9c8dec9b-5cd8-4827-b98c-9462c244cafe", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:18:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1284922822", "caps": ["mds", "allow rw path=/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe/15d5ebce-8438-4ec7-a76c-c06816e0fb9d", "osd", "allow rw pool=manila_data namespace=fsvolumens_9c8dec9b-5cd8-4827-b98c-9462c244cafe", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1284922822", "caps": ["mds", "allow rw path=/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe/15d5ebce-8438-4ec7-a76c-c06816e0fb9d", "osd", "allow rw pool=manila_data namespace=fsvolumens_9c8dec9b-5cd8-4827-b98c-9462c244cafe", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1284922822, format:json, prefix:fs subvolume authorize, sub_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, tenant_id:713485f6825d4fbb96a3a6dfd0cac4e0, vol_name:cephfs) < "" Dec 5 05:18:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:18:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:18:34 localhost systemd[1]: tmp-crun.iDDuC0.mount: Deactivated successfully. 
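The rw grant finishing above targets the subvolume provisioned at 10:18:30: "fs subvolume create" with a 1 GiB quota (size 1073741824 bytes), an isolated RADOS namespace, and mode 0755, followed by "fs subvolume getpath", whose result is exactly the /volumes/_nogroup/... path embedded in the mds cap. The pair, replayed as a sketch with the same illustrative helper:

import subprocess

def ceph(*args):
    # Illustrative helper: run a ceph CLI command and return its stdout.
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout

sub = "9c8dec9b-5cd8-4827-b98c-9462c244cafe"
# The logged arguments: 1 GiB quota, isolated RADOS namespace, mode 0755.
ceph("fs", "subvolume", "create", "cephfs", sub,
     "--size", str(1 * 1024**3), "--namespace-isolated", "--mode", "0755")
# Returns /volumes/_nogroup/<sub>/<uuid>, the path used in the mds cap.
print(ceph("fs", "subvolume", "getpath", "cephfs", sub).strip())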
Dec 5 05:18:34 localhost podman[325177]: 2025-12-05 10:18:34.193425282 +0000 UTC m=+0.076404010 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:18:34 localhost systemd[1]: tmp-crun.M7Bkyi.mount: Deactivated successfully. Dec 5 05:18:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e260 do_prune osdmap full prune enabled Dec 5 05:18:34 localhost podman[325178]: 2025-12-05 10:18:34.207298667 +0000 UTC m=+0.083903250 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:18:34 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "auth_id": "tempest-cephx-id-1284922822", "format": "json"}]: dispatch Dec 5 05:18:34 localhost 
ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1284922822, format:json, prefix:fs subvolume deauthorize, sub_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, vol_name:cephfs) < "" Dec 5 05:18:34 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:18:34 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1284922822", "format": "json"} : dispatch Dec 5 05:18:34 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1284922822", "caps": ["mds", "allow rw path=/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe/15d5ebce-8438-4ec7-a76c-c06816e0fb9d", "osd", "allow rw pool=manila_data namespace=fsvolumens_9c8dec9b-5cd8-4827-b98c-9462c244cafe", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:34 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1284922822", "caps": ["mds", "allow rw path=/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe/15d5ebce-8438-4ec7-a76c-c06816e0fb9d", "osd", "allow rw pool=manila_data namespace=fsvolumens_9c8dec9b-5cd8-4827-b98c-9462c244cafe", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e261 e261: 6 total, 6 up, 6 in Dec 5 05:18:34 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e261: 6 total, 6 up, 6 in Dec 5 05:18:34 localhost podman[325178]: 2025-12-05 10:18:34.245643111 +0000 UTC m=+0.122247674 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:18:34 localhost podman[325177]: 2025-12-05 10:18:34.258102732 +0000 UTC m=+0.141081520 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller) Dec 5 05:18:34 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:18:34 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:18:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1284922822", "format": "json"} v 0) Dec 5 05:18:34 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1284922822", "format": "json"} : dispatch Dec 5 05:18:34 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1284922822"} v 0) Dec 5 05:18:34 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1284922822"} : dispatch Dec 5 05:18:34 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1284922822"}]': finished Dec 5 05:18:34 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1284922822, format:json, prefix:fs subvolume deauthorize, sub_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, vol_name:cephfs) < "" Dec 5 05:18:34 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "auth_id": "tempest-cephx-id-1284922822", "format": "json"}]: dispatch Dec 5 05:18:34 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1284922822, format:json, prefix:fs subvolume evict, sub_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, vol_name:cephfs) < "" Dec 5 05:18:34 localhost ceph-mgr[286454]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1284922822, client_metadata.root=/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe/15d5ebce-8438-4ec7-a76c-c06816e0fb9d Dec 5 05:18:34 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:18:34 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1284922822, format:json, prefix:fs subvolume evict, sub_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, vol_name:cephfs) < "" Dec 5 05:18:34 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "format": "json"}]: dispatch Dec 5 05:18:34 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:18:34 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:18:34 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:18:34.472+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9c8dec9b-5cd8-4827-b98c-9462c244cafe' of type subvolume Dec 5 05:18:34 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9c8dec9b-5cd8-4827-b98c-9462c244cafe' of type subvolume Dec 5 05:18:34 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9c8dec9b-5cd8-4827-b98c-9462c244cafe", "force": true, "format": "json"}]: dispatch Dec 5 05:18:34 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, vol_name:cephfs) < "" Dec 5 05:18:34 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9c8dec9b-5cd8-4827-b98c-9462c244cafe'' moved to trashcan Dec 5 05:18:34 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:18:34 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9c8dec9b-5cd8-4827-b98c-9462c244cafe, vol_name:cephfs) < "" Dec 5 05:18:34 localhost nova_compute[280228]: 2025-12-05 10:18:34.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:18:34 localhost nova_compute[280228]: 2025-12-05 10:18:34.533 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:34 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:34.729 261902 INFO neutron.agent.linux.ip_lib [None req-1947eb4e-17e6-44cc-8d91-f6060ff6bc09 - - - - - -] Device tap5090b69a-5f cannot 
be used as it has no MAC address#033[00m Dec 5 05:18:34 localhost nova_compute[280228]: 2025-12-05 10:18:34.763 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:34 localhost kernel: device tap5090b69a-5f entered promiscuous mode Dec 5 05:18:34 localhost NetworkManager[5960]: [1764929914.7725] manager: (tap5090b69a-5f): new Generic device (/org/freedesktop/NetworkManager/Devices/63) Dec 5 05:18:34 localhost ovn_controller[153000]: 2025-12-05T10:18:34Z|00392|binding|INFO|Claiming lport 5090b69a-5f8a-477f-ba77-f5750b0b1cbf for this chassis. Dec 5 05:18:34 localhost nova_compute[280228]: 2025-12-05 10:18:34.772 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:34 localhost ovn_controller[153000]: 2025-12-05T10:18:34Z|00393|binding|INFO|5090b69a-5f8a-477f-ba77-f5750b0b1cbf: Claiming unknown Dec 5 05:18:34 localhost systemd-udevd[325234]: Network interface NamePolicy= disabled on kernel command line. Dec 5 05:18:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:34.781 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-67699f90-274c-4de2-851c-469be3e368f5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67699f90-274c-4de2-851c-469be3e368f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '479b844250bd442fbc27325400fc3e10', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379b05a3-fe72-4fd6-b7ea-5ecea0ce6bc3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5090b69a-5f8a-477f-ba77-f5750b0b1cbf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:18:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:34.782 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 5090b69a-5f8a-477f-ba77-f5750b0b1cbf in datapath 67699f90-274c-4de2-851c-469be3e368f5 bound to our chassis#033[00m Dec 5 05:18:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:34.784 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 67699f90-274c-4de2-851c-469be3e368f5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 5 05:18:34 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:34.785 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[72d69385-2f83-4adf-a2e0-f7439c35b6f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:18:34 localhost journal[228791]: ethtool ioctl error on tap5090b69a-5f: No such device Dec 5 05:18:34 localhost 
ovn_controller[153000]: 2025-12-05T10:18:34Z|00394|binding|INFO|Setting lport 5090b69a-5f8a-477f-ba77-f5750b0b1cbf ovn-installed in OVS Dec 5 05:18:34 localhost ovn_controller[153000]: 2025-12-05T10:18:34Z|00395|binding|INFO|Setting lport 5090b69a-5f8a-477f-ba77-f5750b0b1cbf up in Southbound Dec 5 05:18:34 localhost journal[228791]: ethtool ioctl error on tap5090b69a-5f: No such device Dec 5 05:18:34 localhost nova_compute[280228]: 2025-12-05 10:18:34.810 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:34 localhost journal[228791]: ethtool ioctl error on tap5090b69a-5f: No such device Dec 5 05:18:34 localhost journal[228791]: ethtool ioctl error on tap5090b69a-5f: No such device Dec 5 05:18:34 localhost journal[228791]: ethtool ioctl error on tap5090b69a-5f: No such device Dec 5 05:18:34 localhost journal[228791]: ethtool ioctl error on tap5090b69a-5f: No such device Dec 5 05:18:34 localhost journal[228791]: ethtool ioctl error on tap5090b69a-5f: No such device Dec 5 05:18:34 localhost journal[228791]: ethtool ioctl error on tap5090b69a-5f: No such device Dec 5 05:18:34 localhost nova_compute[280228]: 2025-12-05 10:18:34.853 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:34 localhost nova_compute[280228]: 2025-12-05 10:18:34.888 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:35 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1284922822", "format": "json"} : dispatch Dec 5 05:18:35 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1284922822"} : dispatch Dec 5 05:18:35 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1284922822"}]': finished Dec 5 05:18:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 5.3 KiB/s wr, 80 op/s Dec 5 05:18:35 localhost podman[325305]: Dec 5 05:18:35 localhost podman[325305]: 2025-12-05 10:18:35.810598247 +0000 UTC m=+0.089232583 container create 225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67699f90-274c-4de2-851c-469be3e368f5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:18:35 localhost systemd[1]: Started libpod-conmon-225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755.scope. Dec 5 05:18:35 localhost podman[325305]: 2025-12-05 10:18:35.767018303 +0000 UTC m=+0.045652589 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:18:35 localhost systemd[1]: Started libcrun container. 
Dec 5 05:18:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd16d7cae5524f71b2c766e5554b2af0b094de75581255dbca12e3f72a6d25e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:18:35 localhost podman[325305]: 2025-12-05 10:18:35.881509898 +0000 UTC m=+0.160144194 container init 225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67699f90-274c-4de2-851c-469be3e368f5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:18:35 localhost podman[325305]: 2025-12-05 10:18:35.890579476 +0000 UTC m=+0.169213762 container start 225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67699f90-274c-4de2-851c-469be3e368f5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:18:35 localhost dnsmasq[325323]: started, version 2.85 cachesize 150 Dec 5 05:18:35 localhost dnsmasq[325323]: DNS service limited to local subnets Dec 5 05:18:35 localhost dnsmasq[325323]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:18:35 localhost dnsmasq[325323]: warning: no upstream servers configured Dec 5 05:18:35 localhost dnsmasq-dhcp[325323]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 5 05:18:35 localhost dnsmasq[325323]: read /var/lib/neutron/dhcp/67699f90-274c-4de2-851c-469be3e368f5/addn_hosts - 0 addresses Dec 5 05:18:35 localhost dnsmasq-dhcp[325323]: read /var/lib/neutron/dhcp/67699f90-274c-4de2-851c-469be3e368f5/host Dec 5 05:18:35 localhost dnsmasq-dhcp[325323]: read /var/lib/neutron/dhcp/67699f90-274c-4de2-851c-469be3e368f5/opts Dec 5 05:18:36 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:36.109 261902 INFO neutron.agent.dhcp.agent [None req-324cd3fe-f88f-4a24-a0ca-f7ffdfae6dff - - - - - -] DHCP configuration for ports {'76866f44-5700-407f-ac2c-ab7e31824883'} is completed#033[00m Dec 5 05:18:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:18:36 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:18:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, 
vol_name:cephfs) < "" Dec 5 05:18:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:18:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:18:36 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:18:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:18:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:18:36 localhost nova_compute[280228]: 2025-12-05 10:18:36.540 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:37 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:18:37 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:37 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:37 localhost nova_compute[280228]: 2025-12-05 10:18:37.426 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 136 KiB/s wr, 86 op/s Dec 5 05:18:37 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:37.983 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:18:38 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:38.202 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:18:37Z, description=, device_id=25534201-3c1b-4c21-8e5f-b7970d5743fd, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5d735f74-f36d-4bbf-ba53-d01a00bcb85d, ip_allocation=immediate, mac_address=fa:16:3e:f8:e9:ca, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:18:33Z, description=, dns_domain=, id=67699f90-274c-4de2-851c-469be3e368f5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIMysqlTest-1944940311-network, port_security_enabled=True, project_id=479b844250bd442fbc27325400fc3e10, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=56729, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3570, status=ACTIVE, subnets=['58ecbbea-394e-4179-9213-4317dcc3aa78'], tags=[], tenant_id=479b844250bd442fbc27325400fc3e10, updated_at=2025-12-05T10:18:33Z, vlan_transparent=None, network_id=67699f90-274c-4de2-851c-469be3e368f5, port_security_enabled=False, project_id=479b844250bd442fbc27325400fc3e10, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3582, status=DOWN, tags=[], tenant_id=479b844250bd442fbc27325400fc3e10, updated_at=2025-12-05T10:18:37Z on network 67699f90-274c-4de2-851c-469be3e368f5#033[00m Dec 5 05:18:38 localhost dnsmasq[325323]: read /var/lib/neutron/dhcp/67699f90-274c-4de2-851c-469be3e368f5/addn_hosts - 1 addresses Dec 5 05:18:38 localhost dnsmasq-dhcp[325323]: read /var/lib/neutron/dhcp/67699f90-274c-4de2-851c-469be3e368f5/host Dec 5 05:18:38 localhost dnsmasq-dhcp[325323]: read /var/lib/neutron/dhcp/67699f90-274c-4de2-851c-469be3e368f5/opts Dec 5 05:18:38 localhost podman[325339]: 2025-12-05 10:18:38.519501888 +0000 UTC m=+0.062116643 container kill 225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67699f90-274c-4de2-851c-469be3e368f5, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:18:38 localhost systemd[1]: tmp-crun.zSYmjl.mount: Deactivated successfully. Dec 5 05:18:38 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:38.789 261902 INFO neutron.agent.dhcp.agent [None req-030d74bd-77c6-4961-b0e9-bbfef8dc1b86 - - - - - -] DHCP configuration for ports {'5d735f74-f36d-4bbf-ba53-d01a00bcb85d'} is completed#033[00m Dec 5 05:18:38 localhost ovn_controller[153000]: 2025-12-05T10:18:38Z|00396|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:18:38 localhost nova_compute[280228]: 2025-12-05 10:18:38.988 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 111 KiB/s wr, 54 op/s Dec 5 05:18:39 localhost nova_compute[280228]: 2025-12-05 10:18:39.535 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:39 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:39.605 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:18:37Z, description=, device_id=25534201-3c1b-4c21-8e5f-b7970d5743fd, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5d735f74-f36d-4bbf-ba53-d01a00bcb85d, ip_allocation=immediate, mac_address=fa:16:3e:f8:e9:ca, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:18:33Z, description=, dns_domain=, id=67699f90-274c-4de2-851c-469be3e368f5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIMysqlTest-1944940311-network, port_security_enabled=True, project_id=479b844250bd442fbc27325400fc3e10, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=56729, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3570, status=ACTIVE, subnets=['58ecbbea-394e-4179-9213-4317dcc3aa78'], tags=[], tenant_id=479b844250bd442fbc27325400fc3e10, updated_at=2025-12-05T10:18:33Z, vlan_transparent=None, network_id=67699f90-274c-4de2-851c-469be3e368f5, port_security_enabled=False, project_id=479b844250bd442fbc27325400fc3e10, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3582, status=DOWN, tags=[], tenant_id=479b844250bd442fbc27325400fc3e10, updated_at=2025-12-05T10:18:37Z on network 67699f90-274c-4de2-851c-469be3e368f5#033[00m Dec 5 05:18:39 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 5 05:18:39 localhost ceph-mgr[286454]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:18:39 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:18:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 5 05:18:39 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 5 05:18:39 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 5 05:18:39 localhost podman[325375]: 2025-12-05 10:18:39.852363367 +0000 UTC m=+0.073824411 container kill 225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67699f90-274c-4de2-851c-469be3e368f5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 05:18:39 localhost dnsmasq[325323]: read /var/lib/neutron/dhcp/67699f90-274c-4de2-851c-469be3e368f5/addn_hosts - 1 addresses Dec 5 05:18:39 localhost dnsmasq-dhcp[325323]: read /var/lib/neutron/dhcp/67699f90-274c-4de2-851c-469be3e368f5/host Dec 5 05:18:39 localhost dnsmasq-dhcp[325323]: read /var/lib/neutron/dhcp/67699f90-274c-4de2-851c-469be3e368f5/opts Dec 5 05:18:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:39 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 5 05:18:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:39 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:18:39 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:18:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, 
sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:40 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:40.134 261902 INFO neutron.agent.dhcp.agent [None req-6e75ae8f-72c6-46e5-9ab4-15d5ff947ddf - - - - - -] DHCP configuration for ports {'5d735f74-f36d-4bbf-ba53-d01a00bcb85d'} is completed#033[00m Dec 5 05:18:40 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:18:40 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 5 05:18:40 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 5 05:18:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:18:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e261 do_prune osdmap full prune enabled Dec 5 05:18:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e262 e262: 6 total, 6 up, 6 in Dec 5 05:18:41 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e262: 6 total, 6 up, 6 in Dec 5 05:18:41 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "628154e8-b5ef-4050-80fd-dcb33fc276cd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:18:41 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:628154e8-b5ef-4050-80fd-dcb33fc276cd, vol_name:cephfs) < "" Dec 5 05:18:41 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/628154e8-b5ef-4050-80fd-dcb33fc276cd/.meta.tmp' Dec 5 05:18:41 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/628154e8-b5ef-4050-80fd-dcb33fc276cd/.meta.tmp' to config b'/volumes/_nogroup/628154e8-b5ef-4050-80fd-dcb33fc276cd/.meta' Dec 5 05:18:41 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:628154e8-b5ef-4050-80fd-dcb33fc276cd, vol_name:cephfs) < "" Dec 5 05:18:41 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "628154e8-b5ef-4050-80fd-dcb33fc276cd", "format": "json"}]: dispatch Dec 5 05:18:41 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:628154e8-b5ef-4050-80fd-dcb33fc276cd, vol_name:cephfs) < "" Dec 5 05:18:41 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:628154e8-b5ef-4050-80fd-dcb33fc276cd, vol_name:cephfs) < "" Dec 5 05:18:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command 
mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:18:41 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:18:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 103 KiB/s wr, 53 op/s Dec 5 05:18:41 localhost nova_compute[280228]: 2025-12-05 10:18:41.581 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:43 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch Dec 5 05:18:43 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:18:43 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:18:43 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:18:43 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:18:43 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:18:43 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:43 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:43 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, 
tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:18:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 120 KiB/s wr, 48 op/s Dec 5 05:18:43 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:18:43 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:18:43 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:18:43 localhost dnsmasq[325323]: read /var/lib/neutron/dhcp/67699f90-274c-4de2-851c-469be3e368f5/addn_hosts - 0 addresses Dec 5 05:18:43 localhost dnsmasq-dhcp[325323]: read /var/lib/neutron/dhcp/67699f90-274c-4de2-851c-469be3e368f5/host Dec 5 05:18:43 localhost podman[325414]: 2025-12-05 10:18:43.796683294 +0000 UTC m=+0.046099672 container kill 225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67699f90-274c-4de2-851c-469be3e368f5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:18:43 localhost dnsmasq-dhcp[325323]: read /var/lib/neutron/dhcp/67699f90-274c-4de2-851c-469be3e368f5/opts Dec 5 05:18:44 localhost ovn_controller[153000]: 2025-12-05T10:18:44Z|00397|binding|INFO|Releasing lport 5090b69a-5f8a-477f-ba77-f5750b0b1cbf from this chassis (sb_readonly=0) Dec 5 05:18:44 localhost kernel: device tap5090b69a-5f left promiscuous mode Dec 5 05:18:44 localhost nova_compute[280228]: 2025-12-05 10:18:44.003 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:44 localhost ovn_controller[153000]: 2025-12-05T10:18:44Z|00398|binding|INFO|Setting lport 5090b69a-5f8a-477f-ba77-f5750b0b1cbf down in Southbound Dec 5 05:18:44 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:44.012 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': 
'10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-67699f90-274c-4de2-851c-469be3e368f5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67699f90-274c-4de2-851c-469be3e368f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '479b844250bd442fbc27325400fc3e10', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379b05a3-fe72-4fd6-b7ea-5ecea0ce6bc3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5090b69a-5f8a-477f-ba77-f5750b0b1cbf) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:18:44 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:44.014 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 5090b69a-5f8a-477f-ba77-f5750b0b1cbf in datapath 67699f90-274c-4de2-851c-469be3e368f5 unbound from our chassis#033[00m Dec 5 05:18:44 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:44.017 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67699f90-274c-4de2-851c-469be3e368f5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:18:44 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:44.019 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[2422d3c4-3e4f-462a-935b-74553fa5dbbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:18:44 localhost nova_compute[280228]: 2025-12-05 10:18:44.025 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:44 localhost nova_compute[280228]: 2025-12-05 10:18:44.537 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fea0968b-0e9f-458d-879e-5e4c7889c409", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:18:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fea0968b-0e9f-458d-879e-5e4c7889c409, vol_name:cephfs) < "" Dec 5 05:18:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fea0968b-0e9f-458d-879e-5e4c7889c409/.meta.tmp' Dec 5 05:18:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fea0968b-0e9f-458d-879e-5e4c7889c409/.meta.tmp' to config b'/volumes/_nogroup/fea0968b-0e9f-458d-879e-5e4c7889c409/.meta' Dec 5 05:18:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fea0968b-0e9f-458d-879e-5e4c7889c409, vol_name:cephfs) 
< "" Dec 5 05:18:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fea0968b-0e9f-458d-879e-5e4c7889c409", "format": "json"}]: dispatch Dec 5 05:18:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fea0968b-0e9f-458d-879e-5e4c7889c409, vol_name:cephfs) < "" Dec 5 05:18:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fea0968b-0e9f-458d-879e-5e4c7889c409, vol_name:cephfs) < "" Dec 5 05:18:44 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:18:44 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:18:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:18:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:18:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:18:45 Dec 5 05:18:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:18:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:18:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['images', 'vms', 'manila_metadata', 'volumes', '.mgr', 'backups', 'manila_data'] Dec 5 05:18:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:18:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:18:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:18:45 localhost systemd[1]: tmp-crun.yWBxIs.mount: Deactivated successfully. Dec 5 05:18:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:18:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:18:45 localhost systemd[1]: tmp-crun.60LRNt.mount: Deactivated successfully. Dec 5 05:18:45 localhost podman[325438]: 2025-12-05 10:18:45.25388607 +0000 UTC m=+0.134515760 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container) Dec 5 05:18:45 localhost podman[325438]: 2025-12-05 10:18:45.266972801 +0000 UTC m=+0.147602541 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_id=edpm, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7) Dec 5 05:18:45 localhost podman[325437]: 2025-12-05 10:18:45.223576692 +0000 UTC m=+0.109464953 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 5 05:18:45 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
Dec 5 05:18:45 localhost podman[325437]: 2025-12-05 10:18:45.302768077 +0000 UTC m=+0.188656358 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd) Dec 5 05:18:45 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. 
Dec 5 05:18:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 111 KiB/s wr, 44 op/s
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00018084490740740742 quantized to 32 (current 32)
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:18:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0011745831297692165 of space, bias 4.0, pg target 0.9349681712962963 quantized to 16 (current 16)
Dec 5 05:18:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 5 05:18:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:18:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 5 05:18:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:18:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:18:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:18:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:18:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:18:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:18:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
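The "pg target" column above follows the standard pg_autoscaler arithmetic: share of raw space, times the pool's bias, times the cluster-wide PG budget. A sketch that reproduces two of the rows exactly; mon_target_pg_per_osd=100 and 3x replication are assumptions (the 6 OSDs appear later in this log as "6 total, 6 up, 6 in"), and the remaining rows differ slightly because the autoscaler works from each pool's effective capacity:

    # Assumed cluster parameters, not part of the log.
    TARGET_PG_PER_OSD = 100   # mon_target_pg_per_osd default
    OSDS, REPLICAS = 6, 3

    def pg_target(usage_ratio: float, bias: float) -> float:
        # pg target = share of space * bias * (PG budget / replication)
        return usage_ratio * bias * TARGET_PG_PER_OSD * OSDS / REPLICAS

    print(pg_target(0.0033250017448352874, 1.0))  # Pool 'vms':  ~0.6650003489670575
    print(pg_target(3.080724804578448e-05, 1.0))  # Pool '.mgr': ~0.006161449609156895
    # "quantized to 32 (current 32)": the target is rounded to a power of two,
    # and pg_num is only changed when it is off by a large factor, so tiny
    # targets leave the current value in place.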
Dec 5 05:18:45 localhost ovn_controller[153000]: 2025-12-05T10:18:45Z|00399|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0)
Dec 5 05:18:45 localhost nova_compute[280228]: 2025-12-05 10:18:45.953 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:18:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:18:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:18:46 localhost dnsmasq[325323]: exiting on receipt of SIGTERM
Dec 5 05:18:46 localhost podman[325493]: 2025-12-05 10:18:46.512620221 +0000 UTC m=+0.069789698 container kill 225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67699f90-274c-4de2-851c-469be3e368f5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 5 05:18:46 localhost systemd[1]: tmp-crun.GYra7D.mount: Deactivated successfully.
Dec 5 05:18:46 localhost systemd[1]: libpod-225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755.scope: Deactivated successfully.
Dec 5 05:18:46 localhost nova_compute[280228]: 2025-12-05 10:18:46.583 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:46 localhost podman[325505]: 2025-12-05 10:18:46.589155274 +0000 UTC m=+0.061832484 container died 225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67699f90-274c-4de2-851c-469be3e368f5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 5 05:18:46 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 5 05:18:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:18:46 localhost podman[325505]: 2025-12-05 10:18:46.627043613 +0000 UTC m=+0.099720773 container cleanup 225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67699f90-274c-4de2-851c-469be3e368f5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 5 05:18:46 localhost systemd[1]: libpod-conmon-225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755.scope: Deactivated successfully.
Dec 5 05:18:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 5 05:18:46 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 5 05:18:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 5 05:18:46 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 5 05:18:46 localhost podman[325507]: 2025-12-05 10:18:46.673045422 +0000 UTC m=+0.138162971 container remove 225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67699f90-274c-4de2-851c-469be3e368f5, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 5 05:18:46 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 5 05:18:46 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:46.708 261902 INFO neutron.agent.dhcp.agent [None req-07299bab-c2d3-4715-a407-381ea6c3081e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:18:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:18:46 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 5 05:18:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:18:46 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea
Dec 5 05:18:46 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 5 05:18:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:18:47 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:47.079 261902 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 5 05:18:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 62 KiB/s wr, 8 op/s
Dec 5 05:18:47 localhost systemd[1]: var-lib-containers-storage-overlay-7dd16d7cae5524f71b2c766e5554b2af0b094de75581255dbca12e3f72a6d25e-merged.mount: Deactivated successfully.
Dec 5 05:18:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-225cef130ecc262172f4e76ae5467975a923f421d82e0ad0d4dd1001c3270755-userdata-shm.mount: Deactivated successfully.
Dec 5 05:18:47 localhost systemd[1]: run-netns-qdhcp\x2d67699f90\x2d274c\x2d4de2\x2d851c\x2d469be3e368f5.mount: Deactivated successfully.
Dec 5 05:18:47 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 5 05:18:47 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 5 05:18:47 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 5 05:18:48 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fea0968b-0e9f-458d-879e-5e4c7889c409", "format": "json"}]: dispatch
Dec 5 05:18:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fea0968b-0e9f-458d-879e-5e4c7889c409, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:18:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fea0968b-0e9f-458d-879e-5e4c7889c409, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:18:48 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:18:48.020+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fea0968b-0e9f-458d-879e-5e4c7889c409' of type subvolume
Dec 5 05:18:48 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fea0968b-0e9f-458d-879e-5e4c7889c409' of type subvolume
Dec 5 05:18:48 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fea0968b-0e9f-458d-879e-5e4c7889c409", "force": true, "format": "json"}]: dispatch
Dec 5 05:18:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fea0968b-0e9f-458d-879e-5e4c7889c409, vol_name:cephfs) < ""
Dec 5 05:18:48 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fea0968b-0e9f-458d-879e-5e4c7889c409'' moved to trashcan
Dec 5 05:18:48 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:18:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fea0968b-0e9f-458d-879e-5e4c7889c409, vol_name:cephfs) < ""
Dec 5 05:18:49 localhost nova_compute[280228]: 2025-12-05 10:18:49.436 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 62 KiB/s wr, 8 op/s
Dec 5 05:18:49 localhost nova_compute[280228]: 2025-12-05 10:18:49.540 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:49 localhost podman[239519]: time="2025-12-05T10:18:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:18:49 localhost podman[239519]: @ - - [05/Dec/2025:10:18:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1"
Dec 5 05:18:49 localhost podman[239519]: @ - - [05/Dec/2025:10:18:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19271 "" "Go-http-client/1.1"
Dec 5 05:18:49 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch
Dec 5 05:18:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < ""
Dec 5 05:18:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 5 05:18:49 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 5 05:18:50 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice with tenant 362b693fa42f4124be6d6249e2b9052d
Dec 5 05:18:50 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0)
Dec 5 05:18:50 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:18:50 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:18:50 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < ""
Dec 5 05:18:50 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 5 05:18:50 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:18:50 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:18:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:18:51 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "628154e8-b5ef-4050-80fd-dcb33fc276cd", "format": "json"}]: dispatch
Dec 5 05:18:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:628154e8-b5ef-4050-80fd-dcb33fc276cd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:18:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:628154e8-b5ef-4050-80fd-dcb33fc276cd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:18:51 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:18:51.342+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '628154e8-b5ef-4050-80fd-dcb33fc276cd' of type subvolume
Dec 5 05:18:51 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '628154e8-b5ef-4050-80fd-dcb33fc276cd' of type subvolume
Dec 5 05:18:51 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "628154e8-b5ef-4050-80fd-dcb33fc276cd", "force": true, "format": "json"}]: dispatch
Dec 5 05:18:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:628154e8-b5ef-4050-80fd-dcb33fc276cd, vol_name:cephfs) < ""
Dec 5 05:18:51 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/628154e8-b5ef-4050-80fd-dcb33fc276cd'' moved to trashcan
Dec 5 05:18:51 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:18:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:628154e8-b5ef-4050-80fd-dcb33fc276cd, vol_name:cephfs) < ""
Dec 5 05:18:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 594 B/s rd, 61 KiB/s wr, 9 op/s
Dec 5 05:18:51 localhost nova_compute[280228]: 2025-12-05 10:18:51.629 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:52 localhost nova_compute[280228]: 2025-12-05 10:18:52.890 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:53 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:53.234 261902 INFO neutron.agent.linux.ip_lib [None req-f51a4984-3d48-476a-99da-73780c544ccc - - - - - -] Device tap2608db1b-38 cannot be used as it has no MAC address#033[00m
Dec 5 05:18:53 localhost nova_compute[280228]: 2025-12-05 10:18:53.259 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:53 localhost kernel: device tap2608db1b-38 entered promiscuous mode
Dec 5 05:18:53 localhost NetworkManager[5960]: [1764929933.2684] manager: (tap2608db1b-38): new Generic device (/org/freedesktop/NetworkManager/Devices/64)
Dec 5 05:18:53 localhost ovn_controller[153000]: 2025-12-05T10:18:53Z|00400|binding|INFO|Claiming lport 2608db1b-3834-4ea7-bde5-b0e813318db8 for this chassis.
Dec 5 05:18:53 localhost nova_compute[280228]: 2025-12-05 10:18:53.271 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:53 localhost ovn_controller[153000]: 2025-12-05T10:18:53Z|00401|binding|INFO|2608db1b-3834-4ea7-bde5-b0e813318db8: Claiming unknown
Dec 5 05:18:53 localhost systemd-udevd[325547]: Network interface NamePolicy= disabled on kernel command line.
Dec 5 05:18:53 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 5 05:18:53 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:18:53 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:53.283 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-71edc890-8371-4a0e-b04d-2a1fb14ba216', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71edc890-8371-4a0e-b04d-2a1fb14ba216', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '081feba9dde642f4af5966ca072a782c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a129bfcc-383e-41da-b81d-0f9023ba6e2b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2608db1b-3834-4ea7-bde5-b0e813318db8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 5 05:18:53 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:53.286 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 2608db1b-3834-4ea7-bde5-b0e813318db8 in datapath 71edc890-8371-4a0e-b04d-2a1fb14ba216 bound to our chassis#033[00m
Dec 5 05:18:53 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:53.288 158820 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 71edc890-8371-4a0e-b04d-2a1fb14ba216 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 5 05:18:53 localhost ovn_metadata_agent[158815]: 2025-12-05 10:18:53.293 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[d59faa4f-6201-4f1e-bac4-e74b9dae865f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 5 05:18:53 localhost journal[228791]: ethtool ioctl error on tap2608db1b-38: No such device
Dec 5 05:18:53 localhost journal[228791]: ethtool ioctl error on tap2608db1b-38: No such device
Dec 5 05:18:53 localhost ovn_controller[153000]: 2025-12-05T10:18:53Z|00402|binding|INFO|Setting lport 2608db1b-3834-4ea7-bde5-b0e813318db8 ovn-installed in OVS
Dec 5 05:18:53 localhost ovn_controller[153000]: 2025-12-05T10:18:53Z|00403|binding|INFO|Setting lport 2608db1b-3834-4ea7-bde5-b0e813318db8 up in Southbound
Dec 5 05:18:53 localhost nova_compute[280228]: 2025-12-05 10:18:53.310 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:53 localhost journal[228791]: ethtool ioctl error on tap2608db1b-38: No such device
Dec 5 05:18:53 localhost journal[228791]: ethtool ioctl error on tap2608db1b-38: No such device
Dec 5 05:18:53 localhost journal[228791]: ethtool ioctl error on tap2608db1b-38: No such device
Dec 5 05:18:53 localhost journal[228791]: ethtool ioctl error on tap2608db1b-38: No such device
Dec 5 05:18:53 localhost journal[228791]: ethtool ioctl error on tap2608db1b-38: No such device
Dec 5 05:18:53 localhost journal[228791]: ethtool ioctl error on tap2608db1b-38: No such device
Dec 5 05:18:53 localhost nova_compute[280228]: 2025-12-05 10:18:53.353 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 5 05:18:53 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 5 05:18:53 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 5 05:18:53 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 5 05:18:53 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 5 05:18:53 localhost nova_compute[280228]: 2025-12-05 10:18:53.387 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:53 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:18:53 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch
Dec 5 05:18:53 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:18:53 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea
Dec 5 05:18:53 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 5 05:18:53 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < ""
Dec 5 05:18:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 101 KiB/s wr, 8 op/s
Dec 5 05:18:53 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 5 05:18:53 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 5 05:18:53 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 5 05:18:54 localhost podman[325619]:
Dec 5 05:18:54 localhost podman[325619]: 2025-12-05 10:18:54.1878731 +0000 UTC m=+0.093159563 container create 241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71edc890-8371-4a0e-b04d-2a1fb14ba216, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 5 05:18:54 localhost podman[325619]: 2025-12-05 10:18:54.143577185 +0000 UTC m=+0.048863658 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 5 05:18:54 localhost systemd[1]: Started libpod-conmon-241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6.scope.
Dec 5 05:18:54 localhost systemd[1]: Started libcrun container.
Dec 5 05:18:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b42f2408152e6821be10d6b4ff24984891feceb9c063362af40dde1c43ae891/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 5 05:18:54 localhost podman[325619]: 2025-12-05 10:18:54.274808582 +0000 UTC m=+0.180095065 container init 241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71edc890-8371-4a0e-b04d-2a1fb14ba216, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 5 05:18:54 localhost podman[325619]: 2025-12-05 10:18:54.28583344 +0000 UTC m=+0.191119903 container start 241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71edc890-8371-4a0e-b04d-2a1fb14ba216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 5 05:18:54 localhost dnsmasq[325638]: started, version 2.85 cachesize 150
Dec 5 05:18:54 localhost dnsmasq[325638]: DNS service limited to local subnets
Dec 5 05:18:54 localhost dnsmasq[325638]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 5 05:18:54 localhost dnsmasq[325638]: warning: no upstream servers configured
Dec 5 05:18:54 localhost dnsmasq-dhcp[325638]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 5 05:18:54 localhost dnsmasq[325638]: read /var/lib/neutron/dhcp/71edc890-8371-4a0e-b04d-2a1fb14ba216/addn_hosts - 0 addresses
Dec 5 05:18:54 localhost dnsmasq-dhcp[325638]: read /var/lib/neutron/dhcp/71edc890-8371-4a0e-b04d-2a1fb14ba216/host
Dec 5 05:18:54 localhost dnsmasq-dhcp[325638]: read /var/lib/neutron/dhcp/71edc890-8371-4a0e-b04d-2a1fb14ba216/opts
Dec 5 05:18:54 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:54.460 261902 INFO neutron.agent.dhcp.agent [None req-60fd3492-d4d8-4138-9212-ad2f896b9696 - - - - - -] DHCP configuration for ports {'d6068847-2862-40cc-b26c-ede9b41b62a1'} is completed#033[00m
Dec 5 05:18:54 localhost nova_compute[280228]: 2025-12-05 10:18:54.544 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:54 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "32c34ad9-6951-4024-bed1-03795f491ee7", "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:18:54 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:32c34ad9-6951-4024-bed1-03795f491ee7, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 5 05:18:54 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:32c34ad9-6951-4024-bed1-03795f491ee7, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 5 05:18:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 77 KiB/s wr, 7 op/s
Dec 5 05:18:55 localhost nova_compute[280228]: 2025-12-05 10:18:55.662 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e262 do_prune osdmap full prune enabled
Dec 5 05:18:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e263 e263: 6 total, 6 up, 6 in
Dec 5 05:18:55 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e263: 6 total, 6 up, 6 in
Dec 5 05:18:56 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:56.156 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:18:55Z, description=, device_id=15508ce7-8984-4e34-ae2f-d504ebe80104, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cc3523ba-e956-46f9-9c3f-9250379abfa6, ip_allocation=immediate, mac_address=fa:16:3e:e6:04:dd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:18:51Z, description=, dns_domain=, id=71edc890-8371-4a0e-b04d-2a1fb14ba216, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingNegativeTest-1000320401-network, port_security_enabled=True, project_id=081feba9dde642f4af5966ca072a782c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30955, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3626, status=ACTIVE, subnets=['0048f7e7-17c1-4372-bbff-52c09bf1bb86'], tags=[], tenant_id=081feba9dde642f4af5966ca072a782c, updated_at=2025-12-05T10:18:52Z, vlan_transparent=None, network_id=71edc890-8371-4a0e-b04d-2a1fb14ba216, port_security_enabled=False, project_id=081feba9dde642f4af5966ca072a782c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3633, status=DOWN, tags=[], tenant_id=081feba9dde642f4af5966ca072a782c, updated_at=2025-12-05T10:18:56Z on network 71edc890-8371-4a0e-b04d-2a1fb14ba216#033[00m
Dec 5 05:18:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:18:56 localhost dnsmasq[325638]: read /var/lib/neutron/dhcp/71edc890-8371-4a0e-b04d-2a1fb14ba216/addn_hosts - 1 addresses
Dec 5 05:18:56 localhost dnsmasq-dhcp[325638]: read /var/lib/neutron/dhcp/71edc890-8371-4a0e-b04d-2a1fb14ba216/host
Dec 5 05:18:56 localhost dnsmasq-dhcp[325638]: read /var/lib/neutron/dhcp/71edc890-8371-4a0e-b04d-2a1fb14ba216/opts
Dec 5 05:18:56 localhost podman[325656]: 2025-12-05 10:18:56.357335755 +0000 UTC m=+0.059244234 container kill 241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71edc890-8371-4a0e-b04d-2a1fb14ba216, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 5 05:18:56 localhost systemd[1]: tmp-crun.VdqrfE.mount: Deactivated successfully.
Dec 5 05:18:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:18:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:18:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
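The dnsmasq lines above come from a per-network container the DHCP agent starts; its host data lives under /var/lib/neutron/dhcp/<network_id>/, and "read ... addn_hosts - N addresses" counts the entries in that hosts-format file. A small sketch that mirrors the count, assuming the usual one-entry-per-line layout:

    from pathlib import Path

    def count_addn_hosts(network_id: str) -> int:
        # addn_hosts holds one "ip hostname..." entry per allocated port.
        path = Path("/var/lib/neutron/dhcp") / network_id / "addn_hosts"
        return sum(1 for line in path.read_text().splitlines() if line.strip())

    # Network ID taken from the log; goes from 0 to 1 address once the
    # router_interface port cc3523ba-... is allocated.
    print(count_addn_hosts("71edc890-8371-4a0e-b04d-2a1fb14ba216"))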
Dec 5 05:18:56 localhost podman[325669]: 2025-12-05 10:18:56.475135242 +0000 UTC m=+0.086425067 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:18:56 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch
Dec 5 05:18:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < ""
Dec 5 05:18:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 5 05:18:56 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 5 05:18:56 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice with tenant 362b693fa42f4124be6d6249e2b9052d
Dec 5 05:18:56 localhost podman[325669]: 2025-12-05 10:18:56.516479358 +0000 UTC m=+0.127769183 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 5 05:18:56 localhost systemd[1]: tmp-crun.6s9n92.mount: Deactivated successfully.
Dec 5 05:18:56 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:18:56 localhost podman[325670]: 2025-12-05 10:18:56.530296761 +0000 UTC m=+0.138286335 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 5 05:18:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0)
Dec 5 05:18:56 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:18:56 localhost podman[325670]: 2025-12-05 10:18:56.564522899 +0000 UTC m=+0.172512503 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 5 05:18:56 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:18:56 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
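Each health_status / exec_died / "Deactivated successfully" triple above is one probe cycle: systemd runs a transient unit wrapping "podman healthcheck run <container-id>", and exit status 0 is what podman reports as health_status=healthy. A minimal sketch of the same probe; the container name is taken from the log:

    import subprocess

    def probe(container: str) -> str:
        # 'podman healthcheck run' executes the container's configured
        # healthcheck command and exits 0 on success.
        rc = subprocess.run(["podman", "healthcheck", "run", container]).returncode
        return "healthy" if rc == 0 else "unhealthy"

    print(probe("ovn_metadata_agent"))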
Dec 5 05:18:56 localhost podman[325671]: 2025-12-05 10:18:56.584931844 +0000 UTC m=+0.186725259 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 5 05:18:56 localhost podman[325671]: 2025-12-05 10:18:56.593821636 +0000 UTC m=+0.195614981 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute)
Dec 5 05:18:56 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:18:56 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:56.616 261902 INFO neutron.agent.dhcp.agent [None req-54c7dfc0-e78a-472a-88bb-5c73ffb3c2b4 - - - - - -] DHCP configuration for ports {'cc3523ba-e956-46f9-9c3f-9250379abfa6'} is completed#033[00m
Dec 5 05:18:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < ""
Dec 5 05:18:56 localhost nova_compute[280228]: 2025-12-05 10:18:56.631 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:18:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 5 05:18:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch
Dec 5 05:18:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished
Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.816412) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67 Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929936816460, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2527, "num_deletes": 262, "total_data_size": 2530405, "memory_usage": 2582848, "flush_reason": "Manual Compaction"} Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929936830431, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 2469392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36429, "largest_seqno": 38955, "table_properties": {"data_size": 2458632, "index_size": 6689, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26356, "raw_average_key_size": 22, "raw_value_size": 2435588, "raw_average_value_size": 2053, "num_data_blocks": 283, "num_entries": 1186, "num_filter_entries": 1186, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929823, "oldest_key_time": 1764929823, "file_creation_time": 1764929936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}} Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 14069 microseconds, and 6577 cpu microseconds. Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
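
The flush entry for job 39 above carries enough raw numbers to cross-check itself: 2469392 bytes written to table #68 in 14069 microseconds, and 26356 bytes of raw key data over 1186 entries. A short worked check, using only values verbatim from the EVENT_LOG_v1 and flush_job lines above:

    # Sanity-checking the job 39 flush stats logged above.
    file_size = 2_469_392      # bytes written to table #68
    flush_us = 14_069          # "Flush lasted 14069 microseconds"
    raw_key_size = 26_356
    num_entries = 1_186

    throughput_mib = file_size / (flush_us / 1e6) / 2**20
    avg_key = raw_key_size // num_entries

    print(f"flush throughput ~{throughput_mib:.0f} MiB/s")  # ~167 MiB/s
    print(f"avg key size {avg_key} bytes")  # 22, matches raw_average_key_size
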
Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.830480) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 2469392 bytes OK Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.830505) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.832373) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.832393) EVENT_LOG_v1 {"time_micros": 1764929936832386, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.832414) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2519054, prev total WAL file size 2519054, number of live WAL files 2. Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.833189) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end) Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(2411KB)], [66(16MB)] Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929936833308, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 20272883, "oldest_snapshot_seqno": -1} Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 14015 keys, 19000451 bytes, temperature: kUnknown Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929936939193, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 19000451, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18918283, "index_size": 46035, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35077, "raw_key_size": 374229, "raw_average_key_size": 26, "raw_value_size": 18678092, "raw_average_value_size": 1332, "num_data_blocks": 1732, "num_entries": 14015, "num_filter_entries": 14015, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764929936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}} Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.939548) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 19000451 bytes Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.941681) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.3 rd, 179.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 17.0 +0.0 blob) out(18.1 +0.0 blob), read-write-amplify(15.9) write-amplify(7.7) OK, records in: 14566, records dropped: 551 output_compression: NoCompression Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.941709) EVENT_LOG_v1 {"time_micros": 1764929936941696, "job": 40, "event": "compaction_finished", "compaction_time_micros": 105987, "compaction_time_cpu_micros": 55925, "output_level": 6, "num_output_files": 1, "total_output_size": 19000451, "num_input_records": 14566, "num_output_records": 14015, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929936942322, "job": 40, "event": "table_file_deletion", "file_number": 68} Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764929936944786, "job": 40, "event": "table_file_deletion", "file_number": 66} Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.833081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.944896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.944902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.944906) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.944909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:18:56 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:18:56.944912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:18:57 localhost openstack_network_exporter[241668]: ERROR 10:18:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:18:57 localhost openstack_network_exporter[241668]: ERROR 10:18:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:18:57 localhost openstack_network_exporter[241668]: ERROR 10:18:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:18:57 localhost openstack_network_exporter[241668]: ERROR 10:18:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:18:57 localhost openstack_network_exporter[241668]: Dec 5 05:18:57 localhost openstack_network_exporter[241668]: ERROR 10:18:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:18:57 localhost openstack_network_exporter[241668]: Dec 5 05:18:57 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:57.398 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:18:55Z, description=, device_id=15508ce7-8984-4e34-ae2f-d504ebe80104, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cc3523ba-e956-46f9-9c3f-9250379abfa6, ip_allocation=immediate, mac_address=fa:16:3e:e6:04:dd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:18:51Z, description=, dns_domain=, id=71edc890-8371-4a0e-b04d-2a1fb14ba216, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingNegativeTest-1000320401-network, port_security_enabled=True, project_id=081feba9dde642f4af5966ca072a782c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30955, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3626, status=ACTIVE, subnets=['0048f7e7-17c1-4372-bbff-52c09bf1bb86'], tags=[], tenant_id=081feba9dde642f4af5966ca072a782c, updated_at=2025-12-05T10:18:52Z, vlan_transparent=None, network_id=71edc890-8371-4a0e-b04d-2a1fb14ba216, port_security_enabled=False, project_id=081feba9dde642f4af5966ca072a782c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3633, status=DOWN, tags=[], tenant_id=081feba9dde642f4af5966ca072a782c, updated_at=2025-12-05T10:18:56Z on network 71edc890-8371-4a0e-b04d-2a1fb14ba216#033[00m Dec 5 05:18:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 6.6 KiB/s rd, 94 KiB/s wr, 17 op/s Dec 5 05:18:57 localhost dnsmasq[325638]: read /var/lib/neutron/dhcp/71edc890-8371-4a0e-b04d-2a1fb14ba216/addn_hosts - 1 addresses Dec 5 05:18:57 localhost dnsmasq-dhcp[325638]: read 
/var/lib/neutron/dhcp/71edc890-8371-4a0e-b04d-2a1fb14ba216/host Dec 5 05:18:57 localhost podman[325750]: 2025-12-05 10:18:57.624231145 +0000 UTC m=+0.063651650 container kill 241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71edc890-8371-4a0e-b04d-2a1fb14ba216, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 5 05:18:57 localhost dnsmasq-dhcp[325638]: read /var/lib/neutron/dhcp/71edc890-8371-4a0e-b04d-2a1fb14ba216/opts Dec 5 05:18:57 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "32c34ad9-6951-4024-bed1-03795f491ee7", "force": true, "format": "json"}]: dispatch Dec 5 05:18:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:32c34ad9-6951-4024-bed1-03795f491ee7, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Dec 5 05:18:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:32c34ad9-6951-4024-bed1-03795f491ee7, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Dec 5 05:18:57 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:18:57.871 261902 INFO neutron.agent.dhcp.agent [None req-f8fcc085-6503-40f7-a29f-a43e4d65c078 - - - - - -] DHCP configuration for ports {'cc3523ba-e956-46f9-9c3f-9250379abfa6'} is completed#033[00m Dec 5 05:18:57 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "d70cb354-430b-4c26-91ce-f33fbada84ef", "mode": "0755", "format": "json"}]: dispatch Dec 5 05:18:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:d70cb354-430b-4c26-91ce-f33fbada84ef, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Dec 5 05:18:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:d70cb354-430b-4c26-91ce-f33fbada84ef, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Dec 5 05:18:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 6.6 KiB/s rd, 94 KiB/s wr, 17 op/s Dec 5 05:18:59 localhost nova_compute[280228]: 2025-12-05 10:18:59.548 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:18:59 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:18:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, 
vol_name:cephfs) < "" Dec 5 05:18:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 5 05:18:59 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:18:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Dec 5 05:18:59 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:18:59 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:18:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:59 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:18:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:59 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:18:59 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:18:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:18:59 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:18:59 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:18:59 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:19:00 localhost podman[325787]: 2025-12-05 10:19:00.822689135 +0000 UTC m=+0.066301851 container kill 241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71edc890-8371-4a0e-b04d-2a1fb14ba216, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
tcib_managed=true, org.label-schema.license=GPLv2) Dec 5 05:19:00 localhost dnsmasq[325638]: read /var/lib/neutron/dhcp/71edc890-8371-4a0e-b04d-2a1fb14ba216/addn_hosts - 0 addresses Dec 5 05:19:00 localhost dnsmasq-dhcp[325638]: read /var/lib/neutron/dhcp/71edc890-8371-4a0e-b04d-2a1fb14ba216/host Dec 5 05:19:00 localhost dnsmasq-dhcp[325638]: read /var/lib/neutron/dhcp/71edc890-8371-4a0e-b04d-2a1fb14ba216/opts Dec 5 05:19:01 localhost kernel: device tap2608db1b-38 left promiscuous mode Dec 5 05:19:01 localhost ovn_controller[153000]: 2025-12-05T10:19:01Z|00404|binding|INFO|Releasing lport 2608db1b-3834-4ea7-bde5-b0e813318db8 from this chassis (sb_readonly=0) Dec 5 05:19:01 localhost ovn_controller[153000]: 2025-12-05T10:19:01Z|00405|binding|INFO|Setting lport 2608db1b-3834-4ea7-bde5-b0e813318db8 down in Southbound Dec 5 05:19:01 localhost nova_compute[280228]: 2025-12-05 10:19:01.063 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:19:01.074 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-71edc890-8371-4a0e-b04d-2a1fb14ba216', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71edc890-8371-4a0e-b04d-2a1fb14ba216', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '081feba9dde642f4af5966ca072a782c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a129bfcc-383e-41da-b81d-0f9023ba6e2b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2608db1b-3834-4ea7-bde5-b0e813318db8) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:19:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:19:01.076 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 2608db1b-3834-4ea7-bde5-b0e813318db8 in datapath 71edc890-8371-4a0e-b04d-2a1fb14ba216 unbound from our chassis#033[00m Dec 5 05:19:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:19:01.079 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71edc890-8371-4a0e-b04d-2a1fb14ba216, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:19:01 localhost ovn_metadata_agent[158815]: 2025-12-05 10:19:01.080 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[83865896-45fa-47de-9a56-4b20e284232b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:19:01 localhost nova_compute[280228]: 2025-12-05 10:19:01.091 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:01 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "d70cb354-430b-4c26-91ce-f33fbada84ef", "force": true, "format": "json"}]: dispatch Dec 5 05:19:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:d70cb354-430b-4c26-91ce-f33fbada84ef, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Dec 5 05:19:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:d70cb354-430b-4c26-91ce-f33fbada84ef, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Dec 5 05:19:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:19:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 422 MiB data, 1.7 GiB used, 40 GiB / 42 GiB avail; 15 KiB/s rd, 21 MiB/s wr, 35 op/s Dec 5 05:19:01 localhost nova_compute[280228]: 2025-12-05 10:19:01.634 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:02 localhost ovn_controller[153000]: 2025-12-05T10:19:02Z|00406|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:19:02 localhost nova_compute[280228]: 2025-12-05 10:19:02.176 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:02 localhost podman[325828]: 2025-12-05 10:19:02.921903658 +0000 UTC m=+0.075100141 container kill 241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71edc890-8371-4a0e-b04d-2a1fb14ba216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:19:02 localhost dnsmasq[325638]: exiting on receipt of SIGTERM Dec 5 05:19:02 localhost systemd[1]: tmp-crun.MVEExw.mount: Deactivated successfully. Dec 5 05:19:02 localhost systemd[1]: libpod-241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6.scope: Deactivated successfully. 
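
Back in the RocksDB burst above, the job 40 compaction summary prints derived ratios (write-amplify 7.7, read-write-amplify 15.9, 191.3 rd / 179.3 wr MB/sec) that can be reproduced from the raw byte counts in its EVENT_LOG_v1 entries. A worked check, assuming nothing beyond the logged values:

    # Reproducing the amplification figures RocksDB printed for job 40 above.
    l0_in = 2_469_392      # file #68, the freshly flushed L0 table
    total_in = 20_272_883  # "input_data_size" (L0 file #68 + L6 file #66)
    out = 19_000_451       # table #69 written to L6
    micros = 105_987       # "compaction_time_micros"

    write_amp = out / l0_in            # 7.7 in the log
    rw_amp = (total_in + out) / l0_in  # 15.9 in the log
    rd_mb_s = total_in / micros        # 191.3 MB/sec rd (decimal MB)
    wr_mb_s = out / micros             # 179.3 MB/sec wr

    print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")
    print(f"{rd_mb_s:.1f} rd MB/s, {wr_mb_s:.1f} wr MB/s")
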
Dec 5 05:19:02 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:19:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:19:02 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:02 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice_bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:19:02 localhost podman[325843]: 2025-12-05 10:19:02.994167501 +0000 UTC m=+0.055194861 container died 241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71edc890-8371-4a0e-b04d-2a1fb14ba216, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:19:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6-userdata-shm.mount: Deactivated successfully. Dec 5 05:19:03 localhost systemd[1]: var-lib-containers-storage-overlay-6b42f2408152e6821be10d6b4ff24984891feceb9c063362af40dde1c43ae891-merged.mount: Deactivated successfully. Dec 5 05:19:03 localhost podman[325843]: 2025-12-05 10:19:03.067025162 +0000 UTC m=+0.128052462 container remove 241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71edc890-8371-4a0e-b04d-2a1fb14ba216, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:19:03 localhost systemd[1]: libpod-conmon-241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6.scope: Deactivated successfully. 
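
The dnsmasq container above runs through the full podman event lifecycle in this window: repeated kill events as the DHCP agent signals it, then exiting on SIGTERM, died, and remove. A small parser for these event lines; the regex is fitted to the format seen here and is an assumption, not a documented podman schema:

    import re

    # Fitted to the podman event lines in this log: timestamp, monotonic
    # offset, event name, 64-hex container id, then (key=value, ...) labels
    # beginning with image= and name=.
    EVENT_RE = re.compile(
        r"podman\[\d+\]: [\d-]+ [\d:.]+ \+\d+ UTC m=\+[\d.]+ container "
        r"(?P<event>\w+) (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
    )

    def parse_podman_event(line):
        m = EVENT_RE.search(line)
        return m.groupdict() if m else None

    # The "died" event from the entries above:
    line = (
        "Dec 5 05:19:02 localhost podman[325843]: 2025-12-05 10:19:02.994167501 "
        "+0000 UTC m=+0.055194861 container died "
        "241f016d1ecdb3cd5d49a744361c27ada6fd4b80dcc0e4765d11365894e525e6 "
        "(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:"
        "current-podified, name=neutron-dnsmasq-qdhcp-71edc890-8371-4a0e-b04d-"
        "2a1fb14ba216, tcib_managed=true)"
    )
    print(parse_podman_event(line))
    # -> event='died', cid='241f016d...', image='quay.io/...', name='neutron-dnsmasq-qdhcp-...'
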
Dec 5 05:19:03 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:19:03.114 261902 INFO neutron.agent.dhcp.agent [None req-40832a3b-9276-4656-9467-e5631af2ff41 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:19:03 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:19:03.115 261902 INFO neutron.agent.dhcp.agent [None req-40832a3b-9276-4656-9467-e5631af2ff41 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:19:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:19:03 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:03 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:03 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:03 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:03 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:03 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 446 MiB data, 1.7 GiB used, 40 GiB / 42 GiB avail; 58 KiB/s rd, 24 MiB/s wr, 99 op/s Dec 5 05:19:03 localhost systemd[1]: 
run-netns-qdhcp\x2d71edc890\x2d8371\x2d4a0e\x2db04d\x2d2a1fb14ba216.mount: Deactivated successfully. Dec 5 05:19:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:19:03.924 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:19:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:19:03.925 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:19:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:19:03.925 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:19:04 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fdc84630-5588-464e-88a7-bc4004dd5a8f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:19:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fdc84630-5588-464e-88a7-bc4004dd5a8f, vol_name:cephfs) < "" Dec 5 05:19:04 localhost nova_compute[280228]: 2025-12-05 10:19:04.552 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:04 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fdc84630-5588-464e-88a7-bc4004dd5a8f/.meta.tmp' Dec 5 05:19:04 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fdc84630-5588-464e-88a7-bc4004dd5a8f/.meta.tmp' to config b'/volumes/_nogroup/fdc84630-5588-464e-88a7-bc4004dd5a8f/.meta' Dec 5 05:19:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fdc84630-5588-464e-88a7-bc4004dd5a8f, vol_name:cephfs) < "" Dec 5 05:19:04 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fdc84630-5588-464e-88a7-bc4004dd5a8f", "format": "json"}]: dispatch Dec 5 05:19:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fdc84630-5588-464e-88a7-bc4004dd5a8f, vol_name:cephfs) < "" Dec 5 05:19:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fdc84630-5588-464e-88a7-bc4004dd5a8f, vol_name:cephfs) < "" Dec 5 05:19:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:19:04 localhost ceph-mon[292820]: log_channel(audit) log 
[DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:19:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:19:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:19:05 localhost podman[325871]: 2025-12-05 10:19:05.183499573 +0000 UTC m=+0.065615969 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:19:05 localhost podman[325872]: 2025-12-05 10:19:05.255019164 +0000 UTC m=+0.129107824 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:19:05 localhost podman[325871]: 2025-12-05 10:19:05.287304172 +0000 UTC m=+0.169420598 container exec_died 
6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:19:05 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:19:05 localhost podman[325872]: 2025-12-05 10:19:05.341321866 +0000 UTC m=+0.215410506 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:19:05 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
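
The recurring ceph-mgr pgmap entries (v691 through v694 above, v695 just below) give a usage trend for this test run: data grows from 210 MiB at v691 to 446 MiB at v694, roughly 236 MiB over about six seconds. A sketch that extracts those figures; the regex and unit table are fitted to these lines, not a stable Ceph format:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs:.*?; "
        r"(?P<data>[\d.]+) (?P<dunit>[KMG]iB) data, "
        r"(?P<used>[\d.]+) (?P<uunit>[KMG]iB) used"
    )
    UNIT = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30}

    def parse_pgmap(line):
        m = PGMAP_RE.search(line)
        if not m:
            return None
        return {
            "version": int(m["ver"]),
            "pgs": int(m["pgs"]),
            "data_bytes": float(m["data"]) * UNIT[m["dunit"]],
            "used_bytes": float(m["used"]) * UNIT[m["uunit"]],
        }

    # parse_pgmap applied to the v691 and v694 lines above yields 210 MiB and
    # 446 MiB of data respectively; the difference is the test's write load.
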
Dec 5 05:19:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 446 MiB data, 1.7 GiB used, 40 GiB / 42 GiB avail; 58 KiB/s rd, 24 MiB/s wr, 99 op/s Dec 5 05:19:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:19:06 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:19:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:06 localhost nova_compute[280228]: 2025-12-05 10:19:06.636 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:06 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:06 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:06 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "format": "json"}]: dispatch Dec 5 05:19:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:19:06 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:19:06 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:19:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:07 localhost ceph-mon[292820]: 
mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:19:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Dec 5 05:19:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:19:07 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:19:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:07 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:19:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:07 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:19:07 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:19:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 778 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 87 KiB/s rd, 49 MiB/s wr, 155 op/s Dec 5 05:19:07 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:07 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:19:07 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:19:07 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fdc84630-5588-464e-88a7-bc4004dd5a8f", "format": "json"}]: dispatch Dec 5 05:19:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fdc84630-5588-464e-88a7-bc4004dd5a8f, format:json, 
prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:19:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fdc84630-5588-464e-88a7-bc4004dd5a8f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:19:07 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:19:07.831+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fdc84630-5588-464e-88a7-bc4004dd5a8f' of type subvolume Dec 5 05:19:07 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fdc84630-5588-464e-88a7-bc4004dd5a8f' of type subvolume Dec 5 05:19:07 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fdc84630-5588-464e-88a7-bc4004dd5a8f", "force": true, "format": "json"}]: dispatch Dec 5 05:19:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fdc84630-5588-464e-88a7-bc4004dd5a8f, vol_name:cephfs) < "" Dec 5 05:19:07 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fdc84630-5588-464e-88a7-bc4004dd5a8f'' moved to trashcan Dec 5 05:19:07 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:19:07 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fdc84630-5588-464e-88a7-bc4004dd5a8f, vol_name:cephfs) < "" Dec 5 05:19:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 778 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 80 KiB/s rd, 47 MiB/s wr, 141 op/s Dec 5 05:19:09 localhost nova_compute[280228]: 2025-12-05 10:19:09.561 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:09 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "7813f9e3-09ab-45cd-bf28-79a3478841e1", "format": "json"}]: dispatch Dec 5 05:19:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7813f9e3-09ab-45cd-bf28-79a3478841e1, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7813f9e3-09ab-45cd-bf28-79a3478841e1, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:10 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch Dec 5 05:19:10 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:19:10 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:10 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice_bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:19:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:19:10 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:10 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:10 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:10 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:10 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:10 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data 
namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:19:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe, vol_name:cephfs) < "" Dec 5 05:19:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:19:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 1.0 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 80 KiB/s rd, 72 MiB/s wr, 149 op/s Dec 5 05:19:11 localhost nova_compute[280228]: 2025-12-05 10:19:11.637 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe/.meta.tmp' Dec 5 05:19:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe/.meta.tmp' to config b'/volumes/_nogroup/ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe/.meta' Dec 5 05:19:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe, vol_name:cephfs) < "" Dec 5 05:19:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe", "format": "json"}]: dispatch Dec 5 05:19:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe, vol_name:cephfs) < "" Dec 5 05:19:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe, vol_name:cephfs) < "" Dec 5 05:19:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:19:11 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:19:13 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:19:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, 
sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:19:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Dec 5 05:19:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:19:13 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:19:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:13 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:19:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:13 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:19:13 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:19:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:13 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "1938e467-a994-44d6-8f8e-b2434d6c8af6", "format": "json"}]: dispatch Dec 5 05:19:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1938e467-a994-44d6-8f8e-b2434d6c8af6, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1938e467-a994-44d6-8f8e-b2434d6c8af6, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 110 KiB/s rd, 60 MiB/s wr, 196 op/s Dec 
5 05:19:13 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:13 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:19:13 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:19:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe", "format": "json"}]: dispatch Dec 5 05:19:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:19:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:19:14 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:19:14.384+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe' of type subvolume Dec 5 05:19:14 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe' of type subvolume Dec 5 05:19:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe", "force": true, "format": "json"}]: dispatch Dec 5 05:19:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe, vol_name:cephfs) < "" Dec 5 05:19:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe'' moved to trashcan Dec 5 05:19:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:19:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ad9739c8-2ddb-4686-9ea1-404ca0d0d9fe, vol_name:cephfs) < "" Dec 5 05:19:14 localhost nova_compute[280228]: 2025-12-05 10:19:14.566 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:19:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:19:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
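[annotation] The audit trail above shows the pattern a driver such as OpenStack Manila follows against the mgr volumes module: probe "fs clone status" (a plain subvolume answers with error 95, EOPNOTSUPP, "operation 'clone-status' is not allowed ... of type subvolume", exactly as logged), then fall back to "fs subvolume rm" with force, which only moves the subvolume path to the trashcan and queues an async purge job for the volume. A minimal sketch of that flow, assuming the ceph CLI is on PATH and the caller's keyring permits these mgr commands (function names and error handling here are illustrative, not Manila's actual code):

import json, subprocess

def ceph(*args):
    # Invoke the ceph CLI the same way the audit log shows (client.openstack, JSON output).
    proc = subprocess.run(
        ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
         "--format", "json", *args],
        capture_output=True, text=True)
    return proc.returncode, proc.stdout, proc.stderr

def remove_subvolume(vol, sub):
    # "fs clone status" fails with EOPNOTSUPP (95) for a plain subvolume, as in the log above;
    # that error simply means the subvolume was never a clone and is safe to remove.
    rc, out, err = ceph("fs", "clone", "status", vol, sub)
    if rc == 0:
        state = json.loads(out)["status"]["state"]
        if state in ("pending", "in-progress"):
            raise RuntimeError(f"clone {sub} still {state}, not removing")
    # Forced removal only moves the path to the trashcan; a purge thread deletes it later.
    rc, out, err = ceph("fs", "subvolume", "rm", vol, sub, "--force")
    if rc != 0:
        raise RuntimeError(f"subvolume rm failed: {err.strip()}")

Note how each client-side audit "dispatch" line is paired with a mgr-side "Starting/Finishing _cmd_fs_*" pair, and how the authorize path fans out into mon-side "auth get" / "auth get-or-create" commands carrying path- and namespace-scoped mds/osd caps.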
Dec 5 05:19:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:19:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 74 KiB/s rd, 58 MiB/s wr, 140 op/s Dec 5 05:19:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:19:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:19:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:19:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:19:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:19:16 localhost podman[325921]: 2025-12-05 10:19:16.204155571 +0000 UTC m=+0.088599698 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 5 05:19:16 localhost podman[325921]: 2025-12-05 10:19:16.221691229 +0000 UTC m=+0.106135346 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:19:16 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:19:16 localhost podman[325922]: 2025-12-05 10:19:16.308536363 +0000 UTC m=+0.188852504 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 5 05:19:16 localhost podman[325922]: 2025-12-05 10:19:16.325272616 +0000 UTC m=+0.205588777 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9) Dec 5 05:19:16 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:19:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:19:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:19:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:19:16 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:19:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:19:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:16 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:16 localhost nova_compute[280228]: 2025-12-05 10:19:16.639 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "1938e467-a994-44d6-8f8e-b2434d6c8af6_b24687ce-b711-4104-a308-ba8dc02856cd", "force": true, "format": "json"}]: dispatch Dec 5 05:19:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1938e467-a994-44d6-8f8e-b2434d6c8af6_b24687ce-b711-4104-a308-ba8dc02856cd, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:16 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:16 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1938e467-a994-44d6-8f8e-b2434d6c8af6_b24687ce-b711-4104-a308-ba8dc02856cd, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "1938e467-a994-44d6-8f8e-b2434d6c8af6", "force": true, "format": "json"}]: dispatch Dec 5 05:19:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1938e467-a994-44d6-8f8e-b2434d6c8af6, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:16 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. 
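[annotation] The metadata_manager pairs above ("wrote 155 bytes to config b'....meta.tmp'" followed by "Renamed ... to config b'....meta'") are the standard crash-safe update idiom: write the complete new config to a temporary file, then rename it over the old one, so readers only ever observe a whole .meta. A minimal local-filesystem sketch of the same idiom (the volumes module performs this through libcephfs rather than the os module, and the fsync policy shown is an assumption):

import os

def atomic_write(path, data: bytes):
    tmp = path + ".tmp"                # e.g. .meta.tmp next to .meta, as in the log
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())           # ensure the bytes are durable before the rename
    os.rename(tmp, path)               # atomic on POSIX: readers see old or new, never a mix

Because rename within one filesystem is atomic, a crash between the two logged steps leaves at worst a stale .meta plus an orphaned .meta.tmp, never a truncated config.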
Dec 5 05:19:17 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:17 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1938e467-a994-44d6-8f8e-b2434d6c8af6, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e263 do_prune osdmap full prune enabled Dec 5 05:19:17 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:19:17 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:17 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e264 e264: 6 total, 6 up, 6 in Dec 5 05:19:17 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e264: 6 total, 6 up, 6 in Dec 5 05:19:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 89 KiB/s rd, 46 MiB/s wr, 180 op/s Dec 5 05:19:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:19:18 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:19:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:19:18 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:19:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:19:18 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:19:18 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 828f18e1-60dd-49d8-8a9c-c02f786771f9 
(Updating node-proxy deployment (+3 -> 3)) Dec 5 05:19:18 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 828f18e1-60dd-49d8-8a9c-c02f786771f9 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:19:18 localhost ceph-mgr[286454]: [progress INFO root] Completed event 828f18e1-60dd-49d8-8a9c-c02f786771f9 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:19:18 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:19:18 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:19:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e264 do_prune osdmap full prune enabled Dec 5 05:19:19 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:19:19 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:19:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e265 e265: 6 total, 6 up, 6 in Dec 5 05:19:19 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e265: 6 total, 6 up, 6 in Dec 5 05:19:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 111 KiB/s rd, 21 MiB/s wr, 213 op/s Dec 5 05:19:19 localhost nova_compute[280228]: 2025-12-05 10:19:19.597 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:19 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 5 05:19:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:19 localhost podman[239519]: time="2025-12-05T10:19:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:19:19 localhost podman[239519]: @ - - [05/Dec/2025:10:19:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:19:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:19:19 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:19:19 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 5 05:19:19 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 5 05:19:19 
localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 5 05:19:19 localhost podman[239519]: @ - - [05/Dec/2025:10:19:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19272 "" "Go-http-client/1.1" Dec 5 05:19:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:19 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 5 05:19:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:19:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:19:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:20 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "f992b110-5225-429b-a9ae-546723768646", "format": "json"}]: dispatch Dec 5 05:19:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f992b110-5225-429b-a9ae-546723768646, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f992b110-5225-429b-a9ae-546723768646, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e265 do_prune osdmap full prune enabled Dec 5 05:19:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e266 e266: 6 total, 6 up, 6 in Dec 5 05:19:20 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e266: 6 total, 6 up, 6 in Dec 5 05:19:20 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:19:20 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 5 05:19:20 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": 
"client.alice bob"}]': finished Dec 5 05:19:20 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:19:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:19:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:19:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:19:21 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:19:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 212 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 15 MiB/s wr, 169 op/s Dec 5 05:19:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e266 do_prune osdmap full prune enabled Dec 5 05:19:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e267 e267: 6 total, 6 up, 6 in Dec 5 05:19:21 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e267: 6 total, 6 up, 6 in Dec 5 05:19:21 localhost nova_compute[280228]: 2025-12-05 10:19:21.641 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:23 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch Dec 5 05:19:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:19:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:19:23 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:19:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:19:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data 
namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 212 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 60 KiB/s wr, 39 op/s Dec 5 05:19:23 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:19:23 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:23 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:24 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "f992b110-5225-429b-a9ae-546723768646_06c08141-b346-463f-8e34-5d1400cfd9b2", "force": true, "format": "json"}]: dispatch Dec 5 05:19:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f992b110-5225-429b-a9ae-546723768646_06c08141-b346-463f-8e34-5d1400cfd9b2, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:24 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:24 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:f992b110-5225-429b-a9ae-546723768646_06c08141-b346-463f-8e34-5d1400cfd9b2, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:24 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "f992b110-5225-429b-a9ae-546723768646", "force": true, "format": "json"}]: dispatch Dec 5 05:19:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f992b110-5225-429b-a9ae-546723768646, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:24 localhost nova_compute[280228]: 2025-12-05 10:19:24.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:24 localhost nova_compute[280228]: 2025-12-05 10:19:24.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 5 05:19:24 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:24 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f992b110-5225-429b-a9ae-546723768646, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:24 localhost nova_compute[280228]: 2025-12-05 10:19:24.647 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e267 do_prune osdmap full prune enabled Dec 5 05:19:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e268 e268: 6 total, 6 up, 6 in Dec 5 05:19:25 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e268: 6 total, 6 up, 6 in Dec 5 05:19:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 212 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 60 KiB/s wr, 39 op/s Dec 5 05:19:25 localhost nova_compute[280228]: 2025-12-05 10:19:25.522 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:25 localhost nova_compute[280228]: 2025-12-05 10:19:25.595 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:19:25 localhost nova_compute[280228]: 2025-12-05 10:19:25.596 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:19:25 localhost nova_compute[280228]: 2025-12-05 10:19:25.596 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:19:25 localhost nova_compute[280228]: 2025-12-05 10:19:25.597 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:19:25 localhost nova_compute[280228]: 2025-12-05 10:19:25.597 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:19:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:19:26 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2483581321' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:19:26 localhost nova_compute[280228]: 2025-12-05 10:19:26.092 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:19:26 localhost nova_compute[280228]: 2025-12-05 10:19:26.181 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:19:26 localhost nova_compute[280228]: 2025-12-05 10:19:26.181 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:19:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:19:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e268 do_prune osdmap full prune enabled Dec 5 05:19:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e269 e269: 6 total, 6 up, 6 in Dec 5 05:19:26 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e269: 6 total, 6 up, 6 in Dec 5 05:19:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:19:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:042f226a-7d65-4458-ae20-1eeac30581e1, vol_name:cephfs) < "" Dec 5 05:19:26 localhost nova_compute[280228]: 2025-12-05 10:19:26.454 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:19:26 localhost nova_compute[280228]: 2025-12-05 10:19:26.456 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11054MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:19:26 localhost nova_compute[280228]: 2025-12-05 10:19:26.457 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:19:26 localhost nova_compute[280228]: 2025-12-05 10:19:26.458 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:19:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/042f226a-7d65-4458-ae20-1eeac30581e1/.meta.tmp' Dec 5 05:19:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/042f226a-7d65-4458-ae20-1eeac30581e1/.meta.tmp' to config 
b'/volumes/_nogroup/042f226a-7d65-4458-ae20-1eeac30581e1/.meta' Dec 5 05:19:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:042f226a-7d65-4458-ae20-1eeac30581e1, vol_name:cephfs) < "" Dec 5 05:19:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "format": "json"}]: dispatch Dec 5 05:19:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:042f226a-7d65-4458-ae20-1eeac30581e1, vol_name:cephfs) < "" Dec 5 05:19:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:042f226a-7d65-4458-ae20-1eeac30581e1, vol_name:cephfs) < "" Dec 5 05:19:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:19:26 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:19:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 5 05:19:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:19:26 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:19:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 5 05:19:26 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 5 05:19:26 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 5 05:19:26 localhost nova_compute[280228]: 2025-12-05 10:19:26.644 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:26 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:19:26 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 5 05:19:26 
localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 5 05:19:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 5 05:19:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:19:26 localhost nova_compute[280228]: 2025-12-05 10:19:26.693 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:19:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:19:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:26 localhost nova_compute[280228]: 2025-12-05 10:19:26.693 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:19:26 localhost nova_compute[280228]: 2025-12-05 10:19:26.694 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:19:26 localhost nova_compute[280228]: 2025-12-05 10:19:26.801 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:19:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:19:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. 
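[Annotation] The resource-tracker records above show where nova-compute's disk figures come from on a Ceph-backed node: inside the `compute_resources` lock it shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (logged by oslo_concurrency.processutils) and reads the cluster totals from the JSON. A minimal sketch of that probe, assuming the same CLI and keyring and the `stats` key layout of recent Ceph releases:

    import json
    import subprocess

    def ceph_cluster_gib(conf="/etc/ceph/ceph.conf", client="openstack"):
        # The exact command nova_compute logs through oslo_concurrency.processutils.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf]
        )
        stats = json.loads(out)["stats"]  # cluster-wide totals (assumed key layout)
        gib = 1024 ** 3
        return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

    total, avail = ceph_cluster_gib()
    print(f"phys_disk={total:.0f}GB free_disk={avail}GB")

The half-second this external command takes (logged below as "returned: 0 in 0.525s") is the bulk of the ~0.9 s the tracker ends up holding the compute_resources lock on this pass.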
Dec 5 05:19:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:19:27 localhost openstack_network_exporter[241668]: ERROR 10:19:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:19:27 localhost openstack_network_exporter[241668]: ERROR 10:19:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:19:27 localhost openstack_network_exporter[241668]: ERROR 10:19:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:19:27 localhost openstack_network_exporter[241668]: ERROR 10:19:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:19:27 localhost openstack_network_exporter[241668]: Dec 5 05:19:27 localhost openstack_network_exporter[241668]: ERROR 10:19:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:19:27 localhost openstack_network_exporter[241668]: Dec 5 05:19:27 localhost podman[326095]: 2025-12-05 10:19:27.267441342 +0000 UTC m=+0.130355210 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:19:27 localhost podman[326094]: 2025-12-05 10:19:27.248867292 +0000 UTC m=+0.122733246 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:19:27 localhost podman[326093]: 2025-12-05 10:19:27.305824048 +0000 UTC m=+0.182277291 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 05:19:27 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:19:27 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/4057956785' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:19:27 localhost podman[326095]: 2025-12-05 10:19:27.325051468 +0000 UTC m=+0.187965336 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 5 05:19:27 localhost nova_compute[280228]: 2025-12-05 10:19:27.326 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:19:27 localhost podman[326094]: 2025-12-05 10:19:27.33260959 +0000 UTC m=+0.206475584 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Dec 5 05:19:27 localhost nova_compute[280228]: 2025-12-05 10:19:27.333 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:19:27 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:19:27 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:19:27 localhost podman[326093]: 2025-12-05 10:19:27.343968248 +0000 UTC m=+0.220421461 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:19:27 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
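[Annotation] Each `Started /usr/bin/podman healthcheck run <id>` / `exec_died` / `Deactivated successfully` triplet above is one pass of a container healthcheck driven by a transient systemd unit named after the container ID; the check itself is the `healthcheck.test` command embedded in the logged config_data (e.g. `/openstack/healthcheck compute`). The same check can be run by hand; a small sketch using the container names from the log and podman's documented exit codes (0 healthy, nonzero unhealthy):

    import subprocess

    CONTAINERS = ["ceilometer_agent_compute", "ovn_metadata_agent", "podman_exporter"]

    for name in CONTAINERS:
        # Runs the container's configured healthcheck exactly as the
        # transient systemd unit does.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(f"{name}: {'healthy' if rc == 0 else 'unhealthy'}")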
Dec 5 05:19:27 localhost nova_compute[280228]: 2025-12-05 10:19:27.360 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:19:27 localhost nova_compute[280228]: 2025-12-05 10:19:27.362 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:19:27 localhost nova_compute[280228]: 2025-12-05 10:19:27.362 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:19:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 212 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 92 KiB/s wr, 77 op/s Dec 5 05:19:27 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "d750bd56-72a3-4675-bc8c-db3ac4a6da80", "format": "json"}]: dispatch Dec 5 05:19:27 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d750bd56-72a3-4675-bc8c-db3ac4a6da80, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:27 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d750bd56-72a3-4675-bc8c-db3ac4a6da80, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:28 localhost nova_compute[280228]: 2025-12-05 10:19:28.347 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:28 localhost nova_compute[280228]: 2025-12-05 10:19:28.502 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e269 do_prune osdmap full prune enabled Dec 5 05:19:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e270 e270: 6 total, 6 up, 6 in Dec 5 05:19:28 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e270: 6 total, 6 up, 6 in Dec 5 
05:19:28 localhost nova_compute[280228]: 2025-12-05 10:19:28.822 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 212 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 76 KiB/s wr, 50 op/s Dec 5 05:19:29 localhost nova_compute[280228]: 2025-12-05 10:19:29.653 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:29 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "snap_name": "306fbfec-6fb1-478f-9901-cfcf09cbfd8b", "format": "json"}]: dispatch Dec 5 05:19:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:306fbfec-6fb1-478f-9901-cfcf09cbfd8b, sub_name:042f226a-7d65-4458-ae20-1eeac30581e1, vol_name:cephfs) < "" Dec 5 05:19:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:306fbfec-6fb1-478f-9901-cfcf09cbfd8b, sub_name:042f226a-7d65-4458-ae20-1eeac30581e1, vol_name:cephfs) < "" Dec 5 05:19:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e270 do_prune osdmap full prune enabled Dec 5 05:19:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e271 e271: 6 total, 6 up, 6 in Dec 5 05:19:29 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e271: 6 total, 6 up, 6 in Dec 5 05:19:29 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:19:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 5 05:19:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:19:29 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:19:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:19:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:29 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:29 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:30 localhost nova_compute[280228]: 2025-12-05 10:19:30.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:30 localhost nova_compute[280228]: 2025-12-05 10:19:30.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:19:30 localhost nova_compute[280228]: 2025-12-05 10:19:30.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:19:30 localhost nova_compute[280228]: 2025-12-05 10:19:30.603 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:19:30 localhost nova_compute[280228]: 2025-12-05 10:19:30.603 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:19:30 localhost nova_compute[280228]: 2025-12-05 10:19:30.603 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:19:30 localhost nova_compute[280228]: 2025-12-05 10:19:30.604 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:19:30 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:19:30 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:30 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:31 localhost nova_compute[280228]: 2025-12-05 10:19:31.017 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:19:31 localhost nova_compute[280228]: 2025-12-05 10:19:31.032 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:19:31 localhost nova_compute[280228]: 2025-12-05 10:19:31.032 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:19:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", 
"snap_name": "d750bd56-72a3-4675-bc8c-db3ac4a6da80_2a061474-d04d-4e89-8efe-a6fde2e36128", "force": true, "format": "json"}]: dispatch Dec 5 05:19:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d750bd56-72a3-4675-bc8c-db3ac4a6da80_2a061474-d04d-4e89-8efe-a6fde2e36128, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:19:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e271 do_prune osdmap full prune enabled Dec 5 05:19:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e272 e272: 6 total, 6 up, 6 in Dec 5 05:19:31 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e272: 6 total, 6 up, 6 in Dec 5 05:19:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d750bd56-72a3-4675-bc8c-db3ac4a6da80_2a061474-d04d-4e89-8efe-a6fde2e36128, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "d750bd56-72a3-4675-bc8c-db3ac4a6da80", "force": true, "format": "json"}]: dispatch Dec 5 05:19:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d750bd56-72a3-4675-bc8c-db3ac4a6da80, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d750bd56-72a3-4675-bc8c-db3ac4a6da80, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:31 localhost nova_compute[280228]: 2025-12-05 10:19:31.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:31 localhost nova_compute[280228]: 2025-12-05 10:19:31.507 280232 DEBUG nova.compute.manager [None 
req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 5 05:19:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 226 KiB/s wr, 122 op/s Dec 5 05:19:31 localhost nova_compute[280228]: 2025-12-05 10:19:31.526 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 5 05:19:31 localhost nova_compute[280228]: 2025-12-05 10:19:31.646 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e272 do_prune osdmap full prune enabled Dec 5 05:19:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e273 e273: 6 total, 6 up, 6 in Dec 5 05:19:32 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e273: 6 total, 6 up, 6 in Dec 5 05:19:32 localhost nova_compute[280228]: 2025-12-05 10:19:32.521 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:32 localhost nova_compute[280228]: 2025-12-05 10:19:32.522 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:33 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 5 05:19:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:19:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Dec 5 05:19:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:19:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, 
sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:33 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:33 localhost ovn_controller[153000]: 2025-12-05T10:19:33Z|00407|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory Dec 5 05:19:33 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:19:33 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:19:33 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:19:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e273 do_prune osdmap full prune enabled Dec 5 05:19:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e274 e274: 6 total, 6 up, 6 in Dec 5 05:19:33 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e274: 6 total, 6 up, 6 in Dec 5 05:19:33 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "snap_name": "306fbfec-6fb1-478f-9901-cfcf09cbfd8b_fe3defdc-ce29-4b96-bb35-17832d73b057", "force": true, "format": "json"}]: dispatch Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:306fbfec-6fb1-478f-9901-cfcf09cbfd8b_fe3defdc-ce29-4b96-bb35-17832d73b057, sub_name:042f226a-7d65-4458-ae20-1eeac30581e1, vol_name:cephfs) < "" Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/042f226a-7d65-4458-ae20-1eeac30581e1/.meta.tmp' Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/042f226a-7d65-4458-ae20-1eeac30581e1/.meta.tmp' to config b'/volumes/_nogroup/042f226a-7d65-4458-ae20-1eeac30581e1/.meta' Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume 
snapshot rm, snap_name:306fbfec-6fb1-478f-9901-cfcf09cbfd8b_fe3defdc-ce29-4b96-bb35-17832d73b057, sub_name:042f226a-7d65-4458-ae20-1eeac30581e1, vol_name:cephfs) < "" Dec 5 05:19:33 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "snap_name": "306fbfec-6fb1-478f-9901-cfcf09cbfd8b", "force": true, "format": "json"}]: dispatch Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:306fbfec-6fb1-478f-9901-cfcf09cbfd8b, sub_name:042f226a-7d65-4458-ae20-1eeac30581e1, vol_name:cephfs) < "" Dec 5 05:19:33 localhost nova_compute[280228]: 2025-12-05 10:19:33.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:33 localhost nova_compute[280228]: 2025-12-05 10:19:33.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:33 localhost nova_compute[280228]: 2025-12-05 10:19:33.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:19:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v720: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 104 KiB/s rd, 188 KiB/s wr, 152 op/s Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/042f226a-7d65-4458-ae20-1eeac30581e1/.meta.tmp' Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/042f226a-7d65-4458-ae20-1eeac30581e1/.meta.tmp' to config b'/volumes/_nogroup/042f226a-7d65-4458-ae20-1eeac30581e1/.meta' Dec 5 05:19:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:306fbfec-6fb1-478f-9901-cfcf09cbfd8b, sub_name:042f226a-7d65-4458-ae20-1eeac30581e1, vol_name:cephfs) < "" Dec 5 05:19:34 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "84a64188-bba6-4439-8dec-7b4368042935", "format": "json"}]: dispatch Dec 5 05:19:34 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:84a64188-bba6-4439-8dec-7b4368042935, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:34 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:84a64188-bba6-4439-8dec-7b4368042935, 
sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:34 localhost nova_compute[280228]: 2025-12-05 10:19:34.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:34 localhost nova_compute[280228]: 2025-12-05 10:19:34.684 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 72 KiB/s rd, 129 KiB/s wr, 105 op/s Dec 5 05:19:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:19:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:19:36 localhost podman[326151]: 2025-12-05 10:19:36.205341273 +0000 UTC m=+0.088226967 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 5 05:19:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:19:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e274 do_prune osdmap full prune enabled Dec 5 05:19:36 localhost podman[326151]: 2025-12-05 10:19:36.247685731 +0000 UTC m=+0.130571405 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:19:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e275 e275: 6 total, 6 up, 6 in Dec 5 05:19:36 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e275: 6 total, 6 up, 6 in Dec 5 05:19:36 localhost systemd[1]: tmp-crun.weqoOZ.mount: Deactivated successfully. Dec 5 05:19:36 localhost podman[326152]: 2025-12-05 10:19:36.270037957 +0000 UTC m=+0.152105536 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:19:36 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
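[Annotation] The monitor lines above show the osdmap advancing one epoch at a time (e270 through e276 in under ten seconds) while the cluster itself stays at `6 total, 6 up, 6 in`: the epochs are being consumed by the CephFS snapshot create/rm traffic, not by OSD state changes, and `do_prune` keeps trimming the accumulated full maps. A quick scripted cross-check of those steady counts; the field names below match `ceph osd stat --format=json` on recent releases and are an assumption to verify against your version:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "osd", "stat", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    s = json.loads(out)
    # Expected to mirror the mon log line: "e276: 6 total, 6 up, 6 in"
    print(f"e{s['epoch']}: {s['num_osds']} total, "
          f"{s['num_up_osds']} up, {s['num_in_osds']} in")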
Dec 5 05:19:36 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch Dec 5 05:19:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 5 05:19:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:19:36 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:19:36 localhost podman[326152]: 2025-12-05 10:19:36.311806128 +0000 UTC m=+0.193873657 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:19:36 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
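[Annotation] The `fs subvolume authorize` call dispatched above (access_level `r` for auth_id `alice`) completes in the records that follow with the mgr issuing an `auth get-or-create` whose mds/osd caps are scoped to the subvolume's directory path and RADOS namespace. The CLI equivalent of that mon_command, reconstructed from the logged JSON (path, pool, and namespace copied from the log; this must run under a sufficiently privileged keyring, not `client.openstack`):

    import subprocess

    path = ("/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/"
            "db7eff2c-8967-4da1-aed8-f4999cdf5aea")
    ns = "fsvolumens_66fc337c-1267-4a35-81f5-115366d33363"

    # CLI form of the logged mon_command: one cap string per service.
    subprocess.check_call([
        "ceph", "auth", "get-or-create", "client.alice",
        "mds", f"allow r path={path}",
        "osd", f"allow r pool=manila_data namespace={ns}",
        "mon", "allow r",
    ])

`fs subvolume deauthorize` reverses this with the `auth get` + `auth rm` pair, and `fs subvolume evict` then drops any client sessions still holding the credential; that three-step teardown is visible verbatim in the 05:19:39 records further below.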
Dec 5 05:19:36 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:19:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:19:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:36 localhost nova_compute[280228]: 2025-12-05 10:19:36.521 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:36 localhost nova_compute[280228]: 2025-12-05 10:19:36.647 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:36 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "format": "json"}]: dispatch Dec 5 05:19:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:042f226a-7d65-4458-ae20-1eeac30581e1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:19:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:042f226a-7d65-4458-ae20-1eeac30581e1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:19:36 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:19:36.734+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '042f226a-7d65-4458-ae20-1eeac30581e1' of type subvolume Dec 5 05:19:36 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on 
subvolume '042f226a-7d65-4458-ae20-1eeac30581e1' of type subvolume Dec 5 05:19:36 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "042f226a-7d65-4458-ae20-1eeac30581e1", "force": true, "format": "json"}]: dispatch Dec 5 05:19:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:042f226a-7d65-4458-ae20-1eeac30581e1, vol_name:cephfs) < "" Dec 5 05:19:36 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/042f226a-7d65-4458-ae20-1eeac30581e1'' moved to trashcan Dec 5 05:19:36 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:19:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:042f226a-7d65-4458-ae20-1eeac30581e1, vol_name:cephfs) < "" Dec 5 05:19:37 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e275 do_prune osdmap full prune enabled Dec 5 05:19:37 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e276 e276: 6 total, 6 up, 6 in Dec 5 05:19:37 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e276: 6 total, 6 up, 6 in Dec 5 05:19:37 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:37 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 1 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 170 active+clean; 213 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 76 KiB/s rd, 100 KiB/s wr, 110 op/s Dec 5 05:19:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "84a64188-bba6-4439-8dec-7b4368042935_04f749f5-630f-40c9-aae2-87b186aa8f80", "force": true, "format": "json"}]: dispatch Dec 5 05:19:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:84a64188-bba6-4439-8dec-7b4368042935_04f749f5-630f-40c9-aae2-87b186aa8f80, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:38 localhost ceph-mgr[286454]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:84a64188-bba6-4439-8dec-7b4368042935_04f749f5-630f-40c9-aae2-87b186aa8f80, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "84a64188-bba6-4439-8dec-7b4368042935", "force": true, "format": "json"}]: dispatch Dec 5 05:19:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:84a64188-bba6-4439-8dec-7b4368042935, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:84a64188-bba6-4439-8dec-7b4368042935, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:19:39.280 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:19:39 localhost ovn_metadata_agent[158815]: 2025-12-05 10:19:39.281 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:19:39 localhost nova_compute[280228]: 2025-12-05 10:19:39.327 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 1 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 170 active+clean; 213 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 84 KiB/s wr, 93 op/s Dec 5 05:19:39 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": 
"66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:19:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Dec 5 05:19:39 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:19:39 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Dec 5 05:19:39 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:19:39 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:19:39 localhost nova_compute[280228]: 2025-12-05 10:19:39.687 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:39 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice", "format": "json"}]: dispatch Dec 5 05:19:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:39 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:19:39 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:19:39 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:40 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 5 05:19:40 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 5 05:19:40 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 5 05:19:40 localhost ovn_controller[153000]: 2025-12-05T10:19:40Z|00408|binding|INFO|Releasing 
Dec 5 05:19:41 localhost nova_compute[280228]: 2025-12-05 10:19:41.001 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:19:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e276 do_prune osdmap full prune enabled Dec 5 05:19:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e277 e277: 6 total, 6 up, 6 in Dec 5 05:19:41 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e277: 6 total, 6 up, 6 in Dec 5 05:19:41 localhost ovn_metadata_agent[158815]: 2025-12-05 10:19:41.284 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:19:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 214 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 202 KiB/s wr, 139 op/s Dec 5 05:19:41 localhost nova_compute[280228]: 2025-12-05 10:19:41.694 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:42 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e277 do_prune osdmap full prune enabled Dec 5 05:19:42 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e278 e278: 6 total, 6 up, 6 in Dec 5 05:19:42 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e278: 6 total, 6 up, 6 in Dec 5 05:19:42 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "47f251f3-cf59-49da-9646-8d4921af4c7f", "format": "json"}]: dispatch Dec 5 05:19:42 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:47f251f3-cf59-49da-9646-8d4921af4c7f, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:42 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:47f251f3-cf59-49da-9646-8d4921af4c7f, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:42 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:19:42 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:42
localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:19:42 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:42 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice_bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:19:43 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:19:43 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:43 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:43 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:43 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:43 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:43 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v729: 177 pgs: 177 active+clean; 214 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 
41 KiB/s rd, 111 KiB/s wr, 62 op/s Dec 5 05:19:44 localhost nova_compute[280228]: 2025-12-05 10:19:44.690 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:45 localhost nova_compute[280228]: 2025-12-05 10:19:45.127 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:19:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:19:45 Dec 5 05:19:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:19:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:19:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['backups', '.mgr', 'volumes', 'images', 'manila_metadata', 'vms', 'manila_data'] Dec 5 05:19:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:19:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:19:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:19:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:19:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:19:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 214 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 87 KiB/s wr, 48 op/s Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 
using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002712673611111111 quantized to 32 (current 32) Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:19:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0015678980841708542 of space, bias 4.0, pg target 1.2480468749999998 quantized to 16 (current 16) Dec 5 05:19:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:19:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:19:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:19:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:19:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:19:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:19:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:19:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:19:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:19:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:19:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:19:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:19:46 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:19:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:19:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e278 do_prune osdmap full prune enabled Dec 5 05:19:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e279 e279: 6 total, 6 up, 6 in Dec 5 05:19:46 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e279: 6 total, 6 up, 6 in Dec 5 05:19:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:19:46 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Dec 5 05:19:46 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:19:46 localhost 
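ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished

The pg_autoscaler pass above can be reproduced almost exactly by hand: each pool's raw PG target is its capacity ratio times its bias times the cluster's PG budget. With the six OSDs reported in the osdmap, and assuming the default mon_target_pg_per_osd of 100 and replica size 3 (assumptions, not values read from this cluster), the budget is 6 * 100 / 3 = 200:

# Rough reconstruction of the autoscaler targets logged above.
def pg_target(capacity_ratio, bias, osds=6, target_pg_per_osd=100, size=3):
    return capacity_ratio * bias * (osds * target_pg_per_osd / size)

print(pg_target(0.0033250017448352874, 1.0))  # 0.66500035 -> the 'vms' line
print(pg_target(0.0015678980841708542, 4.0))  # ~1.254, close to the 1.248 logged for 'manila_metadata'

The raw target is then quantized to a power of two, and pg_num is normally left untouched unless it is off by roughly a factor of three, which is why every pool reads "quantized to N (current N)" and no resizing is proposed.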
Dec 5 05:19:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:46 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:19:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:46 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:19:46 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:19:46 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:46 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:46 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:19:46 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:19:46 localhost nova_compute[280228]: 2025-12-05 10:19:46.726 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:19:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
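The systemd units just started are podman's transient healthcheck runners: systemd invokes /usr/bin/podman healthcheck run <container-id> on a timer, podman executes the test command from the container's healthcheck config (the 'healthcheck' key inside the config_data blobs below), and the outcome is what surfaces as health_status=healthy in the following events. A small sketch of reading that state back with the podman CLI; note that older podman releases expose the field as .State.Healthcheck rather than .State.Health:

import json
import subprocess

def container_health(name_or_id: str) -> str:
    # Reads the same health state the transient units above keep updated.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", name_or_id],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out).get("Status", "unknown")

print(container_health("multipathd"))  # "healthy" would match the event below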
Dec 5 05:19:47 localhost podman[326201]: 2025-12-05 10:19:47.190819458 +0000 UTC m=+0.078201630 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:19:47 localhost podman[326201]: 2025-12-05 10:19:47.203662231 +0000 UTC m=+0.091044363 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd) Dec 5 05:19:47 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "47f251f3-cf59-49da-9646-8d4921af4c7f_8cad2d4c-611e-441b-8900-64acdbf02566", "force": true, "format": "json"}]: dispatch Dec 5 05:19:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:47f251f3-cf59-49da-9646-8d4921af4c7f_8cad2d4c-611e-441b-8900-64acdbf02566, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:47 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:19:47 localhost podman[326202]: 2025-12-05 10:19:47.252424577 +0000 UTC m=+0.135776396 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, vendor=Red Hat, Inc., release=1755695350) Dec 5 05:19:47 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:47 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:47f251f3-cf59-49da-9646-8d4921af4c7f_8cad2d4c-611e-441b-8900-64acdbf02566, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:47 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "47f251f3-cf59-49da-9646-8d4921af4c7f", "force": true, "format": "json"}]: dispatch Dec 5 05:19:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:47f251f3-cf59-49da-9646-8d4921af4c7f, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:47 localhost podman[326202]: 2025-12-05 10:19:47.290714421 +0000 UTC m=+0.174066220 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350) Dec 5 05:19:47 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:19:47 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:47 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:47 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:47f251f3-cf59-49da-9646-8d4921af4c7f, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 209 KiB/s wr, 68 op/s Dec 5 05:19:48 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3018c34f-51ef-49c2-9a51-3d014bf9195b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:19:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3018c34f-51ef-49c2-9a51-3d014bf9195b, vol_name:cephfs) < "" Dec 5 05:19:48 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3018c34f-51ef-49c2-9a51-3d014bf9195b/.meta.tmp' Dec 5 05:19:48 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3018c34f-51ef-49c2-9a51-3d014bf9195b/.meta.tmp' to config b'/volumes/_nogroup/3018c34f-51ef-49c2-9a51-3d014bf9195b/.meta' Dec 5 05:19:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] 
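Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3018c34f-51ef-49c2-9a51-3d014bf9195b, vol_name:cephfs) < ""

Every .meta update in these bursts follows the same two-step pattern: the volumes module writes the new contents to a sibling .meta.tmp, then renames it over .meta, so a crash mid-write never leaves a torn config behind. The real code does this through libcephfs inside ceph-mgr; a local-filesystem sketch of the same pattern:

import os

def write_config_atomically(path: str, data: bytes) -> None:
    # Write-then-rename, as in the metadata_manager entries above:
    # readers see either the old file or the complete new one.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)  # rename is atomic within one filesystem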
Dec 5 05:19:48 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3018c34f-51ef-49c2-9a51-3d014bf9195b", "format": "json"}]: dispatch Dec 5 05:19:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3018c34f-51ef-49c2-9a51-3d014bf9195b, vol_name:cephfs) < "" Dec 5 05:19:48 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3018c34f-51ef-49c2-9a51-3d014bf9195b, vol_name:cephfs) < "" Dec 5 05:19:48 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:19:48 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:19:49 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch Dec 5 05:19:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:19:49 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:49 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice_bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:19:49 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:19:49 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:49 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025'
entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 78 KiB/s wr, 22 op/s Dec 5 05:19:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:49 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:49 localhost nova_compute[280228]: 2025-12-05 10:19:49.693 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:49 localhost podman[239519]: time="2025-12-05T10:19:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:19:49 localhost podman[239519]: @ - - [05/Dec/2025:10:19:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:19:49 localhost podman[239519]: @ - - [05/Dec/2025:10:19:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19279 "" "Go-http-client/1.1" Dec 5 05:19:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:19:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e279 do_prune osdmap full prune enabled Dec 5 05:19:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e280 e280: 6 total, 6 up, 6 in Dec 5 05:19:51 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e280: 6 total, 6 up, 6 in Dec 5 05:19:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 155 KiB/s wr, 12 op/s Dec 5 05:19:51 localhost nova_compute[280228]: 2025-12-05 10:19:51.727 280232 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "7813f9e3-09ab-45cd-bf28-79a3478841e1_4cfd1cec-b884-48b2-9066-449bf106ef78", "force": true, "format": "json"}]: dispatch Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7813f9e3-09ab-45cd-bf28-79a3478841e1_4cfd1cec-b884-48b2-9066-449bf106ef78, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e280 do_prune osdmap full prune enabled Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7813f9e3-09ab-45cd-bf28-79a3478841e1_4cfd1cec-b884-48b2-9066-449bf106ef78, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "snap_name": "7813f9e3-09ab-45cd-bf28-79a3478841e1", "force": true, "format": "json"}]: dispatch Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7813f9e3-09ab-45cd-bf28-79a3478841e1, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e281 e281: 6 total, 6 up, 6 in Dec 5 05:19:52 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e281: 6 total, 6 up, 6 in Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta.tmp' to config b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f/.meta' Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7813f9e3-09ab-45cd-bf28-79a3478841e1, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3018c34f-51ef-49c2-9a51-3d014bf9195b", "format": 
"json"}]: dispatch Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3018c34f-51ef-49c2-9a51-3d014bf9195b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3018c34f-51ef-49c2-9a51-3d014bf9195b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:19:52 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:19:52.611+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3018c34f-51ef-49c2-9a51-3d014bf9195b' of type subvolume Dec 5 05:19:52 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3018c34f-51ef-49c2-9a51-3d014bf9195b' of type subvolume Dec 5 05:19:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3018c34f-51ef-49c2-9a51-3d014bf9195b", "force": true, "format": "json"}]: dispatch Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3018c34f-51ef-49c2-9a51-3d014bf9195b, vol_name:cephfs) < "" Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3018c34f-51ef-49c2-9a51-3d014bf9195b'' moved to trashcan Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3018c34f-51ef-49c2-9a51-3d014bf9195b, vol_name:cephfs) < "" Dec 5 05:19:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Dec 5 05:19:52 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:52 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Dec 5 05:19:52 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:19:52 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:19:52 localhost 
Dec 5 05:19:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice_bob", "format": "json"}]: dispatch Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:19:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:53 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 5 05:19:53 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 5 05:19:53 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 5 05:19:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v737: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 562 B/s rd, 170 KiB/s wr, 13 op/s Dec 5 05:19:54 localhost nova_compute[280228]: 2025-12-05 10:19:54.697 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:55 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "format": "json"}]: dispatch Dec 5 05:19:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:19:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 215 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 77 KiB/s wr, 6 op/s Dec 5 05:19:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:19:55 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:19:55.531+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'af2bc13b-de88-4ec9-adbc-16dabcdb441f' of type subvolume Dec 5 05:19:55
localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'af2bc13b-de88-4ec9-adbc-16dabcdb441f' of type subvolume Dec 5 05:19:55 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af2bc13b-de88-4ec9-adbc-16dabcdb441f", "force": true, "format": "json"}]: dispatch Dec 5 05:19:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:55 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/af2bc13b-de88-4ec9-adbc-16dabcdb441f'' moved to trashcan Dec 5 05:19:55 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:19:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af2bc13b-de88-4ec9-adbc-16dabcdb441f, vol_name:cephfs) < "" Dec 5 05:19:55 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:19:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:19:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:19:55 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:19:55 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:19:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:55 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth 
get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:19:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:19:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:19:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:19:56 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:19:56 localhost nova_compute[280228]: 2025-12-05 10:19:56.778 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:57 localhost openstack_network_exporter[241668]: ERROR 10:19:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:19:57 localhost openstack_network_exporter[241668]: ERROR 10:19:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:19:57 localhost openstack_network_exporter[241668]: ERROR 10:19:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:19:57 localhost openstack_network_exporter[241668]: ERROR 10:19:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:19:57 localhost openstack_network_exporter[241668]: Dec 5 05:19:57 localhost openstack_network_exporter[241668]: ERROR 10:19:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:19:57 localhost openstack_network_exporter[241668]: Dec 5 05:19:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e281 do_prune osdmap full prune enabled Dec 5 05:19:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e282 e282: 6 total, 6 up, 6 in Dec 5 05:19:57 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e282: 6 total, 6 up, 6 in Dec 5 05:19:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 216 MiB data, 1.3 
GiB used, 41 GiB / 42 GiB avail; 982 B/s rd, 236 KiB/s wr, 17 op/s Dec 5 05:19:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:19:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:19:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:19:58 localhost podman[326240]: 2025-12-05 10:19:58.198148191 +0000 UTC m=+0.083569674 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 05:19:58 localhost podman[326242]: 2025-12-05 10:19:58.261960129 +0000 UTC m=+0.138762018 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125) Dec 5 05:19:58 localhost podman[326242]: 2025-12-05 10:19:58.270771929 +0000 UTC m=+0.147573798 container exec_died 
6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:19:58 localhost podman[326240]: 2025-12-05 10:19:58.28255058 +0000 UTC m=+0.167972073 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:19:58 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:19:58 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
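The ceph-mgr "volumes" records above trace Manila's full access-control cycle against a subvolume: "fs subvolume authorize" creates client.<auth_id> with mds/osd/mon caps scoped to the subvolume path and its RADOS namespace, "fs subvolume deauthorize" removes the entity again (the "auth rm" mon commands), and "fs subvolume evict" drops any live client sessions for that auth_id. A minimal sketch of the same cycle driven from the stock ceph CLI, which exposes these mgr volumes commands; the names are taken from the records above, and the thin wrapper function is a hypothetical illustration, not Manila's actual code path:

    import subprocess

    def ceph(*args):
        # Hypothetical helper: shells out to the ceph CLI and returns stdout.
        return subprocess.run(["ceph", *args],
                              check=True, capture_output=True, text=True).stdout

    vol = "cephfs"
    sub = "66fc337c-1267-4a35-81f5-115366d33363"
    auth = "alice_bob"

    # Mirrors _cmd_fs_subvolume_authorize: mints client.alice_bob with caps
    # restricted to the subvolume path, and returns its keyring.
    keyring = ceph("fs", "subvolume", "authorize", vol, sub, auth,
                   "--access_level=rw")
    # Mirrors _cmd_fs_subvolume_deauthorize: removes the auth entity (auth rm).
    ceph("fs", "subvolume", "deauthorize", vol, sub, auth)
    # Mirrors _cmd_fs_subvolume_evict: kicks any mounted clients for that auth_id.
    ceph("fs", "subvolume", "evict", vol, sub, auth)

Note also the earlier "mgr.server reply reply (95) Operation not supported" record: "fs clone status" is rejected because the target is a plain subvolume, not a clone, which is why the caller immediately falls back to "fs subvolume rm --force".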
Dec 5 05:19:58 localhost podman[326241]: 2025-12-05 10:19:58.377230824 +0000 UTC m=+0.256409846 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 5 05:19:58 localhost podman[326241]: 2025-12-05 10:19:58.41492808 +0000 UTC m=+0.294107062 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent) Dec 5 05:19:58 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:19:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 108 KiB/s wr, 8 op/s Dec 5 05:19:59 localhost nova_compute[280228]: 2025-12-05 10:19:59.700 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:19:59 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 5 05:19:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:19:59 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:19:59 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 5 05:19:59 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 5 05:19:59 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 5 05:19:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:59 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 5 05:19:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:19:59 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:19:59 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:19:59 localhost ceph-mgr[286454]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:20:00 localhost ceph-mon[292820]: log_channel(cluster) log [INF] : overall HEALTH_OK Dec 5 05:20:00 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:20:00 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 5 05:20:00 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 5 05:20:00 localhost ceph-mon[292820]: overall HEALTH_OK Dec 5 05:20:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:20:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e282 do_prune osdmap full prune enabled Dec 5 05:20:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e283 e283: 6 total, 6 up, 6 in Dec 5 05:20:01 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e283: 6 total, 6 up, 6 in Dec 5 05:20:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 151 KiB/s wr, 11 op/s Dec 5 05:20:01 localhost nova_compute[280228]: 2025-12-05 10:20:01.825 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:03 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "r", "format": "json"}]: dispatch Dec 5 05:20:03 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:20:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:20:03 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:20:03 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID alice bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:20:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:20:03 localhost ceph-mon[292820]: 
log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:20:03 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:20:03 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:20:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v744: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 151 KiB/s wr, 12 op/s Dec 5 05:20:03 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:20:03 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:20:03 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow r pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:20:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:20:03.925 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:20:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:20:03.926 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:20:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:20:03.926 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:20:04 
localhost nova_compute[280228]: 2025-12-05 10:20:04.703 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 216 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 372 B/s rd, 43 KiB/s wr, 4 op/s Dec 5 05:20:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:20:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e283 do_prune osdmap full prune enabled Dec 5 05:20:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 e284: 6 total, 6 up, 6 in Dec 5 05:20:06 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e284: 6 total, 6 up, 6 in Dec 5 05:20:06 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 5 05:20:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:20:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Dec 5 05:20:06 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:20:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Dec 5 05:20:06 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 5 05:20:06 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 5 05:20:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:20:06 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "alice bob", "format": "json"}]: dispatch Dec 5 05:20:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:20:06 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:20:06 localhost ceph-mgr[286454]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:20:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:20:06 localhost nova_compute[280228]: 2025-12-05 10:20:06.827 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:20:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:20:07 localhost podman[326306]: 2025-12-05 10:20:07.191636355 +0000 UTC m=+0.075362322 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 05:20:07 localhost podman[326306]: 2025-12-05 10:20:07.262805209 +0000 UTC m=+0.146531186 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 5 05:20:07 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:20:07 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 5 05:20:07 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 5 05:20:07 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 5 05:20:07 localhost podman[326307]: 2025-12-05 10:20:07.263380226 +0000 UTC m=+0.146631059 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:20:07 localhost podman[326307]: 2025-12-05 10:20:07.345537316 +0000 UTC m=+0.228788139 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 
'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:20:07 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:20:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 94 KiB/s wr, 7 op/s Dec 5 05:20:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 495 B/s rd, 91 KiB/s wr, 7 op/s Dec 5 05:20:09 localhost nova_compute[280228]: 2025-12-05 10:20:09.706 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:09 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:20:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:20:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Dec 5 05:20:09 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 5 05:20:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: Creating meta for ID bob with tenant 362b693fa42f4124be6d6249e2b9052d Dec 5 05:20:09 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} v 0) Dec 5 05:20:09 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:20:09 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:20:10 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:20:10 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 5 05:20:10 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"} : dispatch Dec 5 05:20:10 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363", "mon", "allow r"], "format": "json"}]': finished Dec 5 05:20:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:20:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v749: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 93 KiB/s wr, 6 op/s Dec 5 05:20:11 localhost ovn_controller[153000]: 2025-12-05T10:20:11Z|00409|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory Dec 5 05:20:11 localhost nova_compute[280228]: 2025-12-05 10:20:11.829 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.956 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.957 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.961 12 DEBUG 
ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fda5f64b-fa0f-4293-8558-fc4daedc3bf0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:20:12.957312', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'fc43cab8-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.131630377, 'message_signature': 'cb5a6c0637273084c28c70465e1cb50fe6d619aa89e639a88012fb0714ce770c'}]}, 'timestamp': '2025-12-05 10:20:12.961802', '_unique_id': '6cb09743135a457292ba934f00257992'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:20:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 
5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.963 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.964 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.994 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.994 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c3d7bb81-0d31-42b9-ba39-c012a5bd5369', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:20:12.964932', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fc48d77e-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.139267031, 'message_signature': '1785532d2f70282ec31289951f05052e43798d98d8c8c2e2f221f36eee77b26a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:20:12.964932', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fc48edae-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.139267031, 'message_signature': '9843975c85d2462c4801ab2813c706fcabc55c7bbba6fe7bd3c0ebc7da4d2f36'}]}, 'timestamp': '2025-12-05 10:20:12.995426', '_unique_id': '078d8c2db0d847e3b4ca29c80a315c6e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.996 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.997 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.997 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:20:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cf08e04a-8b27-42e7-85b8-ccb65838ee83', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:20:12.997411', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'fc494dbc-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.131630377, 'message_signature': '15a7071517b25f3299ce23513d5c459a63d3ce3a4ef7c8ff6eb5394d1618582d'}]}, 'timestamp': '2025-12-05 10:20:12.997731', '_unique_id': '9b55402d409f4572949be7401a536f2d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.998 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.999 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.999 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:12.999 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7d95b76c-b7e0-4b68-a4c9-122db114b2a9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:20:12.999199', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fc499416-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.139267031, 'message_signature': '432714f4ca8a26c8a4e2627ac00ca23c1a139408c72634793cc794ff9260592d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:20:12.999199', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fc499ef2-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.139267031, 'message_signature': 'fd42b30d6bd98f2fc9505ddba3d07559bf90d24f7570f548ef193e19676aebd1'}]}, 'timestamp': '2025-12-05 10:20:12.999794', '_unique_id': '639b6a2047364577981e2dc2d450c563'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.000 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.001 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.001 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '694230c9-2a78-48f7-b8aa-5dda16518887', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:20:13.001263', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'fc49e40c-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.131630377, 'message_signature': 'c0a53ecebc2ff857d33b2a1f10e16ffb8b92aa8d65d9b694d0793605a96f7438'}]}, 'timestamp': '2025-12-05 10:20:13.001575', '_unique_id': '8969dcdce2b84401a6dd0719178319ec'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.002 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.017 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'bdf686c4-ea6e-4437-a100-4c16fc1461d5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:20:13.003045', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'fc4c61be-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.191773132, 'message_signature': '1b7af5c802b18df61fe0b4da54ff5372da4ebe9d3e9d781e3b3e772fcae0f7db'}]}, 'timestamp': '2025-12-05 10:20:13.017906', '_unique_id': 'eb605ef42c324bfc8ce61da3c9b80263'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:20:13.018 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.018 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.019 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.020 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.020 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6287d8bf-a0a3-4b12-962b-87768627d9c0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:20:13.020365', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'fc4cd6da-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.131630377, 'message_signature': 'fb74a43480e5592b3c76e46787ba96b13a897c4347e01260645422ce01b7274b'}]}, 'timestamp': '2025-12-05 10:20:13.021122', '_unique_id': '27eb5a0e7fde4862b356719dd70c034b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.022 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.024 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.024 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.025 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '197a24ee-6926-4bd5-b821-3f968627177a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:20:13.024535', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fc4d793c-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.139267031, 'message_signature': 'd8415bafd017e27d1a02d789137c2d78a5004a36b43edcaf98275ecadb8da2ab'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:20:13.024535', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fc4d94a8-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.139267031, 'message_signature': 'e75e9640580f901abeb56c0fbffdbb58644b59f960f68666517db34d677af64c'}]}, 'timestamp': '2025-12-05 10:20:13.025940', '_unique_id': '537f1bba413f48ef87c4d2454bc1e854'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.027 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.029 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.029 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.030 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a66ca7c5-42f8-4154-99a3-65540eb76c0a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:20:13.029443', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fc4e38cc-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.139267031, 'message_signature': '90459346b1bcf1035c5f5039e0d71c968f8fa57677196997f5dae325136a8bbe'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:20:13.029443', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fc4e5582-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.139267031, 'message_signature': '958337e7d578458d7d2f47206e33baf7e77721b67e00e317cc4d20e15b79795d'}]}, 'timestamp': '2025-12-05 10:20:13.030875', '_unique_id': 'cf341ee49252490390ff92696b191237'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py",
line 826, in __init__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.032 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.034 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.034 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'dd84e6fb-6ca5-40db-a03b-9ef4291e3213', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:20:13.034366', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'fc4ef9ec-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.131630377, 'message_signature': '0742fee2a3a7c0a1addf5706ad70ff7477b4f3039837cc8a39890908f8ce560f'}]}, 'timestamp': '2025-12-05 10:20:13.035141', '_unique_id': '6793dc49648848bd8ed721d7358e05d1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
694, in send_notification Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.036 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.038 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.038 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '577026dc-f372-41cd-90e1-3ec11685def6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:20:13.038543', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'fc4f9cb2-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.131630377, 'message_signature': '51e1a5d52c72843c48386fb3f8d1a1a6a2f146816aab3784498b12e77928a34e'}]}, 'timestamp': '2025-12-05 10:20:13.039327', '_unique_id': '1e114c406de64852af33e61b2b1275e6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.040 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.042 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.042 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.043 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.043 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
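[Editor's note] The two-part tracebacks above are Python exception chaining: kombu catches the low-level ConnectionRefusedError inside its _reraise_as_library_errors() context manager and re-raises it with `raise ConnectionError(str(exc)) from exc`, which the log then reports as kombu.exceptions.OperationalError (the log itself shows ConnectionError is an alias of that class) and prints "The above exception was the direct cause of the following exception:". A self-contained sketch of the pattern; the error class below is a stand-in with no dependency on kombu:

```python
# Sketch of the re-raise pattern visible in the tracebacks above.
# OperationalError here is a stand-in for kombu.exceptions.OperationalError.
import socket
from contextlib import contextmanager

class OperationalError(Exception):
    """Stand-in for kombu.exceptions.OperationalError."""

@contextmanager
def reraise_as_library_errors():
    try:
        yield
    except OSError as exc:  # ConnectionRefusedError subclasses OSError
        # `from exc` sets __cause__, producing the "direct cause" chaining.
        raise OperationalError(str(exc)) from exc

try:
    with reraise_as_library_errors():
        socket.create_connection(("127.0.0.1", 1), timeout=1)  # closed port
except OperationalError as err:
    print(err, "| caused by:", repr(err.__cause__))
```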
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '833138e1-8d85-48a5-b124-753f8f9bcd2f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:20:13.043011', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fc504b26-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.139267031, 'message_signature': 'd91392ce5f45595f4dc1383630b31369a3baaa05a33c690907511f19ef3700c4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:20:13.043011', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fc506336-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.139267031, 'message_signature': '64ed595173562535b28f57b680bb7c73caf4b4102e7af031398463fa04971f6e'}]}, 'timestamp': '2025-12-05 10:20:13.044342', '_unique_id': 'd34861c3ff714e17a512e46a935add68'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.045 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.047 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.047 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
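[Editor's note] Before giving up, kombu runs the connection factory through retry_over_time() (kombu/utils/functional.py, line 312 in these frames) from inside _ensure_connection(). A simplified sketch of that retry-with-backoff behavior; the signature and backoff policy here are illustrative, not kombu's exact API:

```python
# Illustrative retry-over-time loop (simplified from what the frames above
# suggest): call `fun` until it succeeds, sleeping a growing interval between
# attempts, and re-raise the last error once max_retries is exhausted.
import time

def retry_over_time(fun, catch, max_retries=3, interval_start=1.0,
                    interval_step=2.0, interval_max=30.0):
    interval = interval_start
    for attempt in range(max_retries + 1):
        try:
            return fun()
        except catch:
            if attempt == max_retries:
                raise  # out of retries: surface the error, as in this log
            time.sleep(interval)
            interval = min(interval + interval_step, interval_max)

# Usage sketch (broker host/port are placeholders):
# retry_over_time(lambda: socket.create_connection(("broker", 5672)),
#                 ConnectionRefusedError)
```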
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'aeaf3e2f-c5d0-4d8d-953c-06543f7e115a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:20:13.047565', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'fc50fb70-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.131630377, 'message_signature': '50c6ab193e2eb81135a64f690d9742d10c2dfb27dd22980cf95d2261ec7d86cf'}]}, 'timestamp': '2025-12-05 10:20:13.048238', '_unique_id': 'e984581f095c4e7aac46ccc5f1fad29f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.049 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.050 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.051 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
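[Editor's note] Each failed notification is logged with its full Payload=... dict, so the dropped samples remain recoverable from this log. The payload text is a repr() of plain Python literals (str/int/float/None/dict/list), which ast.literal_eval() parses safely. A hypothetical extractor, assuming the payload ends right before the trailing ": kombu.exceptions.OperationalError" on the same record:

```python
# Hypothetical helper for mining this log: pull the dropped samples back out
# of a "Could not send notification" record. Uses only the stdlib.
import ast

def samples_from_record(record: str) -> list:
    start = record.index("Payload=") + len("Payload=")
    end = record.rindex(": kombu.exceptions.OperationalError")
    notification = ast.literal_eval(record[start:end])
    return notification["payload"]["samples"]

# for sample in samples_from_record(line):
#     print(sample["counter_name"], sample["counter_volume"],
#           sample["counter_unit"], sample["resource_id"])
```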
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '26c8da3a-ccd4-4d50-b998-6cc4305d21aa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:20:13.051088', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'fc5181e4-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.131630377, 'message_signature': 'a7b2ca21c57aaf9a51014079cf0e3415a8953af3c56ab523764a3330b4af6b72'}]}, 'timestamp': '2025-12-05 10:20:13.051571', '_unique_id': '20f4fcfa176a4f598504b4f1a47197eb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.052 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.054 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.054 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 20250000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
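[Editor's note] The cpu counter polled above is cumulative guest CPU time in nanoseconds (20,250,000,000 ns here), so utilization is derived downstream by differencing consecutive samples against their monotonic_time. A worked example; the earlier data point is hypothetical, only the second comes from this record:

```python
# Worked example of turning two cumulative 'cpu' samples into a utilization
# percentage for a 1-vCPU guest. The t0/cpu0 pair is hypothetical (a previous
# poll); t1/cpu1 are the monotonic_time and counter_volume from this record.
t0, cpu0 = 13118.19, 19250000000          # hypothetical previous sample (ns)
t1, cpu1 = 13128.191773132, 20250000000   # from this log record (ns)

elapsed_ns = (t1 - t0) * 1e9
cpu_util_pct = (cpu1 - cpu0) / elapsed_ns * 100
print(f"{cpu_util_pct:.1f}% of one vCPU")  # ~10.0%
```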
Payload={'message_id': '48540660-87e5-4b96-822b-179df91bbedc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 20250000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:20:13.054520', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'fc520dd0-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.191773132, 'message_signature': '384aae1692882af770e7a0582e346b11d4823fae96571fb537693a06e78626d8'}]}, 'timestamp': '2025-12-05 10:20:13.055140', '_unique_id': '43c220ce185a440dbb413e625929c4b3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 
2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.056 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.057 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.057 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.058 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3b2c976a-88e2-4a02-91c7-97bfdee20e51', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:20:13.057432', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'fc527888-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.131630377, 'message_signature': 'da7d66a1553f6276ec6a1fe917bc96d625edc7cb99a64c7dd71f66ad67c715da'}]}, 'timestamp': '2025-12-05 10:20:13.057879', '_unique_id': 'fb75223a630845a1bacbe005ef727611'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
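The chained traceback above bottoms out in amqp's transport layer: self.sock.connect(sa) fails with errno 111 because nothing is accepting TCP connections at the broker address, and kombu re-raises the chain as kombu.exceptions.OperationalError. A minimal sketch of that lowest-level failure, assuming a hypothetical broker endpoint (the log does not show the actual RabbitMQ host or port; 5672 is only the AMQP default):

    import socket

    BROKER = ("192.0.2.10", 5672)  # assumed address; not taken from this log

    try:
        # Same operation as amqp/transport.py _connect(): a plain TCP connect.
        socket.create_connection(BROKER, timeout=2)
    except ConnectionRefusedError as exc:
        # errno 111: the host is reachable but no process listens on the port,
        # consistent with a stopped or unreachable rabbitmq-server.
        print(exc.errno, exc.strerror)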
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.059 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.059 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.060 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.061 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '76a7745b-c9a6-46f3-9c55-3d58cb1b8d37', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:20:13.059922', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fc52d986-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.139267031, 'message_signature': '29b6f90e3696982fb118e1c0682d92be95da0e2afd5c1a28376fcbe81ef87642'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:20:13.059922', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fc52ea7a-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.139267031, 'message_signature': '75a0d3bc4e259a54d27737b216c0ab94ebbd6011f3592fe340bc8201eeb402b2'}]}, 'timestamp': '2025-12-05 10:20:13.060775', '_unique_id': 'e45adc4979aa4129a46976a4feb959e5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
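Every polling cycle in this stretch repeats the same pattern: the sample is computed (DEBUG), then the notify step fails while building a pooled connection, because impl_rabbit.py calls kombu's Connection.ensure_connection() and kombu converts the socket error into OperationalError. A sketch of that reproduction path using kombu directly, under an assumed transport URL (the real credentials and host are not shown in this log):

    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Assumed URL for illustration only.
    conn = Connection("amqp://guest:guest@192.0.2.10:5672//", connect_timeout=2)

    try:
        # Same entry point as the ensure_connection() frames in the traceback;
        # max_retries=1 keeps the example from retrying indefinitely.
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print(type(exc).__name__, exc)  # OperationalError: [Errno 111] ...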
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.062 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.062 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.064 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0039c69c-a588-46ce-8128-be2befa81d17', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:20:13.062838', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'fc534b96-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.131630377, 'message_signature': 'c2c96a8da5116e44a785aee6a9d120a875c1e751de1f79295e61d9c9e1e73d31'}]}, 'timestamp': '2025-12-05 10:20:13.063315', '_unique_id': 'adbb3b197af94adb9163649b13245d02'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.065 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.075 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.075 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.076 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'aeedf010-63a2-457c-845d-60ae3e225e59', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:20:13.065403', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fc552920-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.239725562, 'message_signature': '9e960e058e252e45f48bf8e19627a19005339e5bbaf78f8c8efdbb2f00d1f4b1'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:20:13.065403', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fc553596-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.239725562, 'message_signature': 'e0a34c95b96221f5643f992f9a4175f1e6f5cd052427f7ea09d96d9719211214'}]}, 'timestamp': '2025-12-05 10:20:13.075739', '_unique_id': '7b4df33535b74582890d9bc296135d18'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.077 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.077 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.077 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c44e0951-7250-4776-870c-48152c918cec', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:20:13.077395', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fc5581ae-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.239725562, 'message_signature': 'fc74122d1bd1847c499d5f8f794e14c10be355a5edbfd933c29afcb87d206544'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:20:13.077395', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fc558c76-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.239725562, 'message_signature': 'ab41c66637b560f5d4c7812ca894879e8b8f5c46a111ec93b13b1af3fc11f669'}]}, 'timestamp': '2025-12-05 10:20:13.077959', '_unique_id': '368fc3c519a649dfbf90577df015757a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py",
line 826, in __init__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.078 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.079 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.079 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.079 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.079 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': 'bba66381-545f-42e1-bb05-49465e170756', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:20:13.079535', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fc55d4d8-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.239725562, 'message_signature': '132a7a5e13ad7b195f494868ad68b5fde8f0fe1f6095abe06ed60e7f68a9fb6f'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:20:13.079535', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fc55df5a-d1c3-11f0-8ba6-fa163e982365', 'monotonic_time': 13128.239725562, 'message_signature': '17b883abbbba89d018abaa5cb279d818d550270c3c6ce2d9cde958537dcdfdc1'}]}, 'timestamp': '2025-12-05 10:20:13.080080', '_unique_id': 'aea7638df6c249da8276df12c0906340'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:20:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR 
oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:20:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:20:13.080 12 ERROR oslo_messaging.notify.messaging Dec 5 05:20:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v750: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 93 KiB/s wr, 6 op/s Dec 5 05:20:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:20:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6fece2d2-49a0-4615-93c1-5c0350069189, vol_name:cephfs) < "" Dec 5 05:20:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189/.meta.tmp' Dec 5 05:20:14 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189/.meta.tmp' to config b'/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189/.meta' Dec 5 05:20:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6fece2d2-49a0-4615-93c1-5c0350069189, vol_name:cephfs) < "" Dec 5 05:20:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "format": "json"}]: dispatch Dec 5 05:20:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fece2d2-49a0-4615-93c1-5c0350069189, vol_name:cephfs) < "" Dec 5 05:20:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fece2d2-49a0-4615-93c1-5c0350069189, vol_name:cephfs) < "" Dec 5 05:20:14 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:20:14 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:20:14 localhost nova_compute[280228]: 2025-12-05 10:20:14.708 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:20:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:20:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:20:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:20:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 93 KiB/s wr, 6 op/s Dec 5 05:20:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
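Every oslo.messaging traceback in the block above bottoms out in the same root cause: amqp's socket connect to the RabbitMQ broker fails with ConnectionRefusedError (errno 111), which kombu re-raises as kombu.exceptions.OperationalError via _reraise_as_library_errors(). The ceilometer compute agent keeps polling regardless (the disk.device.usage and disk.device.allocation samples are still built and logged with their full payloads), but each telemetry.polling notification is dropped because nothing is accepting AMQP connections; the agent catches the error, logs the payload, and moves on to the next pollster, which is why the same traceback repeats every cycle. A minimal sketch of that failure path, runnable against any host with no AMQP listener — the broker URL below is a placeholder, not the transport_url of this deployment:

    # Minimal sketch (not the ceilometer/oslo.messaging code itself):
    # reproduce the failure mode above, where kombu wraps the socket-level
    # ConnectionRefusedError (errno 111) in OperationalError.
    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Placeholder URL pointing at a host/port with no RabbitMQ listening.
    conn = Connection("amqp://guest:guest@127.0.0.1:5672//",
                      connect_timeout=2)
    try:
        # ensure_connection() -> _ensure_connection() -> retry_over_time(),
        # the same call chain visible in the traceback frames above.
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        # kombu re-raises the underlying ConnectionRefusedError, which is
        # exactly what oslo_messaging.notify.messaging logs above.
        print(f"broker unreachable: {exc}")
    finally:
        conn.release()

max_retries=1 keeps the sketch fast; in the deployment above the connection pool retries before surfacing the error, so each dropped notification produces one full traceback rather than a crash.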
Dec 5 05:20:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:20:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:20:16 localhost nova_compute[280228]: 2025-12-05 10:20:16.874 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:17 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "auth_id": "bob", "tenant_id": "362b693fa42f4124be6d6249e2b9052d", "access_level": "rw", "format": "json"}]: dispatch Dec 5 05:20:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:6fece2d2-49a0-4615-93c1-5c0350069189, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:20:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Dec 5 05:20:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 5 05:20:17 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 5 05:20:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea,allow rw path=/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189/53340ae9-2988-4b83-be04-2373cdf8bc7b", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363,allow rw pool=manila_data namespace=fsvolumens_6fece2d2-49a0-4615-93c1-5c0350069189"]} v 0) Dec 5 05:20:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea,allow rw path=/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189/53340ae9-2988-4b83-be04-2373cdf8bc7b", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363,allow rw pool=manila_data namespace=fsvolumens_6fece2d2-49a0-4615-93c1-5c0350069189"]} : dispatch Dec 5 05:20:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea,allow rw path=/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189/53340ae9-2988-4b83-be04-2373cdf8bc7b", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363,allow rw pool=manila_data 
namespace=fsvolumens_6fece2d2-49a0-4615-93c1-5c0350069189"]}]': finished Dec 5 05:20:17 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Dec 5 05:20:17 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 5 05:20:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v752: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 181 B/s rd, 93 KiB/s wr, 7 op/s Dec 5 05:20:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:6fece2d2-49a0-4615-93c1-5c0350069189, tenant_id:362b693fa42f4124be6d6249e2b9052d, vol_name:cephfs) < "" Dec 5 05:20:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:20:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:20:18 localhost systemd[1]: tmp-crun.1mUFPN.mount: Deactivated successfully. Dec 5 05:20:18 localhost podman[326354]: 2025-12-05 10:20:18.214780163 +0000 UTC m=+0.094588732 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.vendor=CentOS) Dec 5 05:20:18 localhost podman[326354]: 2025-12-05 10:20:18.256897835 +0000 UTC m=+0.136706444 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Dec 5 05:20:18 localhost podman[326355]: 2025-12-05 10:20:18.271496723 +0000 UTC m=+0.147785733 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, distribution-scope=public, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, name=ubi9-minimal, architecture=x86_64, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9) Dec 5 05:20:18 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:20:18 localhost podman[326355]: 2025-12-05 10:20:18.288617338 +0000 UTC m=+0.164906328 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Dec 5 05:20:18 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:20:18 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea,allow rw path=/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189/53340ae9-2988-4b83-be04-2373cdf8bc7b", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363,allow rw pool=manila_data namespace=fsvolumens_6fece2d2-49a0-4615-93c1-5c0350069189"]} : dispatch Dec 5 05:20:18 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea,allow rw path=/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189/53340ae9-2988-4b83-be04-2373cdf8bc7b", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363,allow rw pool=manila_data namespace=fsvolumens_6fece2d2-49a0-4615-93c1-5c0350069189"]}]': finished Dec 5 05:20:18 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 5 05:20:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 54 KiB/s wr, 4 op/s Dec 5 05:20:19 localhost nova_compute[280228]: 2025-12-05 10:20:19.712 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:19 localhost podman[239519]: time="2025-12-05T10:20:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:20:19 localhost podman[239519]: @ - - [05/Dec/2025:10:20:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:20:19 localhost podman[239519]: @ - - [05/Dec/2025:10:20:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19277 "" "Go-http-client/1.1" Dec 5 05:20:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:20:20 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:20:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:20:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' 
entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:20:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:20:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:20:20 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 7820b472-2d9b-422d-93d9-ed702d208229 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:20:20 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 7820b472-2d9b-422d-93d9-ed702d208229 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:20:20 localhost ceph-mgr[286454]: [progress INFO root] Completed event 7820b472-2d9b-422d-93d9-ed702d208229 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:20:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:20:20 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:20:20 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:20:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:20:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:20:20 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:20:20 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:20:20 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "auth_id": "bob", "format": "json"}]: dispatch Dec 5 05:20:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fece2d2-49a0-4615-93c1-5c0350069189, vol_name:cephfs) < "" Dec 5 05:20:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Dec 5 05:20:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 5 05:20:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363"]} v 0) Dec 5 05:20:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow 
rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363"]} : dispatch Dec 5 05:20:20 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363"]}]': finished Dec 5 05:20:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fece2d2-49a0-4615-93c1-5c0350069189, vol_name:cephfs) < "" Dec 5 05:20:20 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "auth_id": "bob", "format": "json"}]: dispatch Dec 5 05:20:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:6fece2d2-49a0-4615-93c1-5c0350069189, vol_name:cephfs) < "" Dec 5 05:20:20 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189/53340ae9-2988-4b83-be04-2373cdf8bc7b Dec 5 05:20:20 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:20:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:6fece2d2-49a0-4615-93c1-5c0350069189, vol_name:cephfs) < "" Dec 5 05:20:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:20:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v754: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 101 KiB/s wr, 7 op/s Dec 5 05:20:21 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:20:21 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 5 05:20:21 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363"]} : dispatch Dec 5 05:20:21 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea", "osd", "allow rw pool=manila_data namespace=fsvolumens_66fc337c-1267-4a35-81f5-115366d33363"]}]': finished Dec 5 05:20:21 localhost nova_compute[280228]: 
2025-12-05 10:20:21.875 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 58 KiB/s wr, 4 op/s Dec 5 05:20:24 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "bob", "format": "json"}]: dispatch Dec 5 05:20:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:20:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) Dec 5 05:20:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 5 05:20:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) Dec 5 05:20:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Dec 5 05:20:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Dec 5 05:20:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:20:24 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "auth_id": "bob", "format": "json"}]: dispatch Dec 5 05:20:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:20:24 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363/db7eff2c-8967-4da1-aed8-f4999cdf5aea Dec 5 05:20:24 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Dec 5 05:20:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:20:24 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 5 05:20:24 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth 
rm", "entity": "client.bob"} : dispatch Dec 5 05:20:24 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Dec 5 05:20:24 localhost nova_compute[280228]: 2025-12-05 10:20:24.720 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 58 KiB/s wr, 4 op/s Dec 5 05:20:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:20:26 localhost nova_compute[280228]: 2025-12-05 10:20:26.529 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:20:26 localhost nova_compute[280228]: 2025-12-05 10:20:26.588 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:20:26 localhost nova_compute[280228]: 2025-12-05 10:20:26.588 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:20:26 localhost nova_compute[280228]: 2025-12-05 10:20:26.589 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:20:26 localhost nova_compute[280228]: 2025-12-05 10:20:26.589 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:20:26 localhost nova_compute[280228]: 2025-12-05 10:20:26.590 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:20:26 localhost nova_compute[280228]: 2025-12-05 10:20:26.924 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:27 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:20:27 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3489022920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.117 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:20:27 localhost openstack_network_exporter[241668]: ERROR 10:20:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:20:27 localhost openstack_network_exporter[241668]: ERROR 10:20:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:20:27 localhost openstack_network_exporter[241668]: ERROR 10:20:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:20:27 localhost openstack_network_exporter[241668]: ERROR 10:20:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:20:27 localhost openstack_network_exporter[241668]: Dec 5 05:20:27 localhost openstack_network_exporter[241668]: ERROR 10:20:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:20:27 localhost openstack_network_exporter[241668]: Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.185 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.186 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.386 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.388 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11052MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.388 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.389 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.470 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.470 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.471 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.502 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing inventories for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.527 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating ProviderTree inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.527 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.542 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing aggregate associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 5 05:20:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v757: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 87 KiB/s wr, 6 op/s Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.585 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing trait associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, traits: 
COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 5 05:20:27 localhost nova_compute[280228]: 2025-12-05 10:20:27.664 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:20:27 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "format": "json"}]: dispatch Dec 5 05:20:27 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6fece2d2-49a0-4615-93c1-5c0350069189, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:20:27 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6fece2d2-49a0-4615-93c1-5c0350069189, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:20:27 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:20:27.827+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6fece2d2-49a0-4615-93c1-5c0350069189' of type subvolume Dec 5 05:20:27 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6fece2d2-49a0-4615-93c1-5c0350069189' of type subvolume Dec 5 05:20:27 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6fece2d2-49a0-4615-93c1-5c0350069189", "force": true, "format": "json"}]: dispatch Dec 5 05:20:27 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fece2d2-49a0-4615-93c1-5c0350069189, 
vol_name:cephfs) < "" Dec 5 05:20:27 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6fece2d2-49a0-4615-93c1-5c0350069189'' moved to trashcan Dec 5 05:20:27 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:20:27 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fece2d2-49a0-4615-93c1-5c0350069189, vol_name:cephfs) < "" Dec 5 05:20:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:20:28 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2965429103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:20:28 localhost nova_compute[280228]: 2025-12-05 10:20:28.139 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:20:28 localhost nova_compute[280228]: 2025-12-05 10:20:28.147 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:20:28 localhost nova_compute[280228]: 2025-12-05 10:20:28.162 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:20:28 localhost nova_compute[280228]: 2025-12-05 10:20:28.164 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:20:28 localhost nova_compute[280228]: 2025-12-05 10:20:28.165 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:20:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:20:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:20:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
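The fs subvolume entries around this point trace the full teardown of a CephFS-backed manila share, driven by client.openstack through the ceph-mgr volumes module: fs subvolume deauthorize at 10:20:24 above (which makes the mgr run auth get and then auth rm for client.bob on the mon), fs subvolume evict to drop any live mounts, then an fs clone status probe that plain subvolumes answer with EOPNOTSUPP (95), and finally a forced fs subvolume rm, which moves the path to the trashcan for asynchronous purge (the probe and rm for this subvolume follow at 10:20:31 below). A minimal sketch that replays the same sequence by hand, assuming the client.openstack keyring and the /etc/ceph/ceph.conf seen in the log; all names are copied from the entries themselves:

    import subprocess

    def ceph(*args):
        # Same identity and conf file the nova/manila processes above use.
        cmd = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
               "--format", "json", *args]
        return subprocess.run(cmd, capture_output=True, text=True)

    VOL = "cephfs"
    SUB = "66fc337c-1267-4a35-81f5-115366d33363"

    ceph("fs", "subvolume", "deauthorize", VOL, SUB, "bob")  # mon: auth get + auth rm
    ceph("fs", "subvolume", "evict", VOL, SUB, "bob")        # kick live clients
    ceph("fs", "clone", "status", VOL, SUB)                  # EOPNOTSUPP on non-clones
    ceph("fs", "subvolume", "rm", VOL, SUB, "--force")       # path moved to trashcan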
Dec 5 05:20:29 localhost podman[326527]: 2025-12-05 10:20:29.227323796 +0000 UTC m=+0.093711775 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:20:29 localhost systemd[1]: tmp-crun.8R3GkW.mount: Deactivated successfully. 
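Each "Started /usr/bin/podman healthcheck run <id>" line from systemd is a transient unit firing the healthcheck configured in the container's config_data (the 'test' key, e.g. /openstack/healthcheck compute); the health_status=healthy container event and the later "<id>.service: Deactivated successfully" line close out one cycle. A sketch of running the same check on demand and reading back the recorded state, assuming a recent podman (on older releases the inspect field is .State.Healthcheck.Status rather than .State.Health.Status):

    import subprocess

    def check(container):
        # One-shot run of the container's configured healthcheck command.
        subprocess.run(["podman", "healthcheck", "run", container], check=False)
        # Read the status podman recorded, e.g. "healthy" as in the events above.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", container],
            capture_output=True, text=True)
        return out.stdout.strip()

    print(check("ceilometer_agent_compute"))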
Dec 5 05:20:29 localhost podman[326525]: 2025-12-05 10:20:29.279214448 +0000 UTC m=+0.152987194 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:20:29 localhost podman[326525]: 2025-12-05 10:20:29.315925694 +0000 UTC m=+0.189698440 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:20:29 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
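The exec_died event and the "<id>.service: Deactivated successfully" line right above are that transient unit tearing itself down once the check exits. One way to watch these cycles live, assuming a podman new enough to emit health_status events:

    import subprocess

    # Follow container health events as the transient units fire them; each
    # line corresponds to a health_status event like those logged above.
    # Blocks until interrupted (Ctrl-C).
    subprocess.run(["podman", "events", "--filter", "event=health_status"])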
Dec 5 05:20:29 localhost podman[326527]: 2025-12-05 10:20:29.330060077 +0000 UTC m=+0.196448106 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:20:29 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
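Stepping back to the resource-tracker cycle logged at 10:20:26 through 10:20:28: the inventory nova reported to placement (VCPU total 8 at allocation_ratio 16.0, MEMORY_MB total 15738 with 512 reserved, DISK_GB 41 with 1 reserved) is what admission control works from, since placement accepts an allocation while used + requested <= (total - reserved) * allocation_ratio. A quick check of the capacities implied by the logged dict:

    # Inventory copied from the 10:20:27 provider update above.
    inv = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
    }
    for rc, d in inv.items():
        cap = (d["total"] - d["reserved"]) * d["allocation_ratio"]
        print(rc, cap)  # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 40.0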
Dec 5 05:20:29 localhost podman[326526]: 2025-12-05 10:20:29.32199539 +0000 UTC m=+0.191290918 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:20:29 localhost podman[326526]: 2025-12-05 10:20:29.407868604 +0000 UTC m=+0.277164152 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:20:29 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:20:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v758: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 77 KiB/s wr, 4 op/s Dec 5 05:20:29 localhost nova_compute[280228]: 2025-12-05 10:20:29.723 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:30 localhost nova_compute[280228]: 2025-12-05 10:20:30.142 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:20:30 localhost nova_compute[280228]: 2025-12-05 10:20:30.142 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:20:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "66fc337c-1267-4a35-81f5-115366d33363", "format": "json"}]: dispatch Dec 5 05:20:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:66fc337c-1267-4a35-81f5-115366d33363, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:20:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:66fc337c-1267-4a35-81f5-115366d33363, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:20:31 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:20:31.025+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '66fc337c-1267-4a35-81f5-115366d33363' of type subvolume Dec 5 05:20:31 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '66fc337c-1267-4a35-81f5-115366d33363' of type subvolume Dec 5 05:20:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "66fc337c-1267-4a35-81f5-115366d33363", "force": true, "format": "json"}]: dispatch Dec 5 05:20:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:20:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/66fc337c-1267-4a35-81f5-115366d33363'' moved to trashcan Dec 5 05:20:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:20:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:66fc337c-1267-4a35-81f5-115366d33363, vol_name:cephfs) < "" Dec 5 05:20:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:20:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 109 KiB/s wr, 6 op/s Dec 5 05:20:31 localhost nova_compute[280228]: 2025-12-05 10:20:31.965 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:32 localhost nova_compute[280228]: 2025-12-05 10:20:32.106 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:20:32 localhost nova_compute[280228]: 2025-12-05 10:20:32.133 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Triggering sync for uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 5 05:20:32 localhost nova_compute[280228]: 2025-12-05 10:20:32.134 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:20:32 localhost nova_compute[280228]: 2025-12-05 10:20:32.134 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:20:32 localhost nova_compute[280228]: 2025-12-05 10:20:32.166 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "96a47a1c-57c7-4bb1-aecc-33db976db8c7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:20:32 localhost nova_compute[280228]: 2025-12-05 10:20:32.535 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:20:32 localhost nova_compute[280228]: 2025-12-05 10:20:32.535 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:20:32 localhost nova_compute[280228]: 2025-12-05 10:20:32.536 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m 
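The Acquiring / "acquired ... waited" / "released ... held" triplets around "compute_resources", and the per-instance-UUID lock inside _sync_power_states above, are oslo.concurrency's standard lock logging. In application code the pattern corresponds to a decorated method of roughly this shape (a sketch, not nova's actual source):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def _update_available_resource():
        # Body runs with the named semaphore held; oslo.concurrency emits the
        # Acquiring / acquired / released lines seen above, including the
        # waited/held timings.
        pass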
Dec 5 05:20:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v760: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 62 KiB/s wr, 3 op/s Dec 5 05:20:33 localhost nova_compute[280228]: 2025-12-05 10:20:33.876 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:20:33 localhost nova_compute[280228]: 2025-12-05 10:20:33.877 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:20:33 localhost nova_compute[280228]: 2025-12-05 10:20:33.877 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:20:33 localhost nova_compute[280228]: 2025-12-05 10:20:33.878 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:20:34 localhost nova_compute[280228]: 2025-12-05 10:20:34.769 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:35 localhost nova_compute[280228]: 2025-12-05 10:20:35.206 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:20:35 localhost nova_compute[280228]: 2025-12-05 10:20:35.402 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:20:35 localhost 
nova_compute[280228]: 2025-12-05 10:20:35.403 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:20:35 localhost nova_compute[280228]: 2025-12-05 10:20:35.403 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:20:35 localhost nova_compute[280228]: 2025-12-05 10:20:35.404 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:20:35 localhost nova_compute[280228]: 2025-12-05 10:20:35.404 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:20:35 localhost nova_compute[280228]: 2025-12-05 10:20:35.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:20:35 localhost nova_compute[280228]: 2025-12-05 10:20:35.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:20:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v761: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 62 KiB/s wr, 3 op/s Dec 5 05:20:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:20:37 localhost nova_compute[280228]: 2025-12-05 10:20:37.003 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 69 KiB/s wr, 4 op/s Dec 5 05:20:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:20:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
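The "Updating instance_info_cache with network_info" entry at 10:20:35 above carries nova's cached Neutron view of port c2f95d81-2317-46b9-8146-596eac8f9acb as JSON: fixed IP 192.168.0.214 on 192.168.0.0/24, floating IP 192.168.122.20, bound by the ovn driver with MTU 1292. Pulling the addresses out of such a cache blob, with the structure exactly as logged (the function name is assumed):

    import json

    def addresses(network_info_json):
        # Yields (fixed_ip, [floating_ips]) pairs from a nova network_info blob.
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield ip["address"], [f["address"]
                                          for f in ip.get("floating_ips", [])]

    # Fed the cache entry above, this yields ("192.168.0.214", ["192.168.122.20"]).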
Dec 5 05:20:38 localhost podman[326585]: 2025-12-05 10:20:38.206796583 +0000 UTC m=+0.088743853 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 05:20:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f12607d2-3489-40f0-b5a2-c89b74945c90", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:20:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f12607d2-3489-40f0-b5a2-c89b74945c90, vol_name:cephfs) < "" Dec 5 05:20:38 localhost podman[326585]: 2025-12-05 10:20:38.218634546 +0000 UTC m=+0.100581756 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, 
container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:20:38 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:20:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f12607d2-3489-40f0-b5a2-c89b74945c90/.meta.tmp' Dec 5 05:20:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f12607d2-3489-40f0-b5a2-c89b74945c90/.meta.tmp' to config b'/volumes/_nogroup/f12607d2-3489-40f0-b5a2-c89b74945c90/.meta' Dec 5 05:20:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f12607d2-3489-40f0-b5a2-c89b74945c90, vol_name:cephfs) < "" Dec 5 05:20:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f12607d2-3489-40f0-b5a2-c89b74945c90", "format": "json"}]: dispatch Dec 5 05:20:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f12607d2-3489-40f0-b5a2-c89b74945c90, vol_name:cephfs) < "" Dec 5 05:20:38 localhost podman[326584]: 2025-12-05 10:20:38.299371022 +0000 UTC m=+0.185314076 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 5 05:20:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f12607d2-3489-40f0-b5a2-c89b74945c90, vol_name:cephfs) < "" Dec 5 05:20:38 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:20:38 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:20:38 localhost podman[326584]: 2025-12-05 10:20:38.372408802 
+0000 UTC m=+0.258351906 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 05:20:38 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:20:38 localhost nova_compute[280228]: 2025-12-05 10:20:38.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:20:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v763: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 40 KiB/s wr, 2 op/s Dec 5 05:20:39 localhost nova_compute[280228]: 2025-12-05 10:20:39.800 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:20:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v764: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 53 KiB/s wr, 3 op/s Dec 5 05:20:42 localhost nova_compute[280228]: 2025-12-05 10:20:42.044 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:42 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f12607d2-3489-40f0-b5a2-c89b74945c90", "format": "json"}]: dispatch Dec 5 05:20:42 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f12607d2-3489-40f0-b5a2-c89b74945c90, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:20:42 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f12607d2-3489-40f0-b5a2-c89b74945c90, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:20:42 localhost 
ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:20:42.221+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f12607d2-3489-40f0-b5a2-c89b74945c90' of type subvolume Dec 5 05:20:42 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f12607d2-3489-40f0-b5a2-c89b74945c90' of type subvolume Dec 5 05:20:42 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f12607d2-3489-40f0-b5a2-c89b74945c90", "force": true, "format": "json"}]: dispatch Dec 5 05:20:42 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f12607d2-3489-40f0-b5a2-c89b74945c90, vol_name:cephfs) < "" Dec 5 05:20:42 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f12607d2-3489-40f0-b5a2-c89b74945c90'' moved to trashcan Dec 5 05:20:42 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:20:42 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f12607d2-3489-40f0-b5a2-c89b74945c90, vol_name:cephfs) < "" Dec 5 05:20:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 21 KiB/s wr, 2 op/s Dec 5 05:20:44 localhost nova_compute[280228]: 2025-12-05 10:20:44.803 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:45 localhost ovn_metadata_agent[158815]: 2025-12-05 10:20:45.102 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:20:45 localhost nova_compute[280228]: 2025-12-05 10:20:45.102 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:45 localhost ovn_metadata_agent[158815]: 2025-12-05 10:20:45.104 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:20:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:20:45 Dec 5 05:20:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:20:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:20:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['backups', 'images', 'manila_metadata', 'vms', '.mgr', 'manila_data', 'volumes'] Dec 5 05:20:45 localhost 
ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:20:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:20:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:20:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:20:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:20:45 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6bd85ed5-06af-4874-ad2a-45b58df56273", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:20:45 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6bd85ed5-06af-4874-ad2a-45b58df56273, vol_name:cephfs) < "" Dec 5 05:20:45 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6bd85ed5-06af-4874-ad2a-45b58df56273/.meta.tmp' Dec 5 05:20:45 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6bd85ed5-06af-4874-ad2a-45b58df56273/.meta.tmp' to config b'/volumes/_nogroup/6bd85ed5-06af-4874-ad2a-45b58df56273/.meta' Dec 5 05:20:45 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6bd85ed5-06af-4874-ad2a-45b58df56273, vol_name:cephfs) < "" Dec 5 05:20:45 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6bd85ed5-06af-4874-ad2a-45b58df56273", "format": "json"}]: dispatch Dec 5 05:20:45 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6bd85ed5-06af-4874-ad2a-45b58df56273, vol_name:cephfs) < "" Dec 5 05:20:45 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6bd85ed5-06af-4874-ad2a-45b58df56273, vol_name:cephfs) < "" Dec 5 05:20:45 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:20:45 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:20:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v766: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 21 KiB/s wr, 2 op/s Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:20:45 
localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:20:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0018996894193188164 of space, bias 4.0, pg target 1.512152777777778 quantized to 16 (current 16) Dec 5 05:20:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:20:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:20:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:20:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:20:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:20:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:20:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:20:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:20:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:20:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:20:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
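
Note: the pg_autoscaler lines above all follow one formula: a pool's raw PG target is its fraction of cluster capacity times its bias times a cluster-wide PG budget, and the result is snapped to a power of two (never below 1) before the module decides whether the current pg_num should move at all. From the logged ratios the budget here works out to roughly 200 raw PGs, which would be consistent with six OSDs at the default mon_target_pg_per_osd=100 and 3x replication, though that breakdown is an inference, not something the log states. A quick check in Python against the logged values:

    import math

    PG_BUDGET = 200.0  # inferred from the logged ratios; not printed by the module

    # (pool, "using ... of space", bias) copied from the pg_autoscaler lines above
    for pool, used, bias in [
        (".mgr", 3.080724804578448e-05, 1.0),
        ("vms",  0.0033250017448352874, 1.0),
    ]:
        raw = used * bias * PG_BUDGET
        pow2 = 2 ** max(0, round(math.log2(raw)))  # snap to a power of two, floor 1
        print(pool, raw, pow2)
    # .mgr -> raw 0.006161449609156895, matching the logged "pg target";
    # vms  -> raw 0.6650003489670575, also matching. The logged "quantized to 32
    # (current 32)" for vms shows the module keeping the existing pg_num when
    # the computed target does not justify a change.
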
Dec 5 05:20:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:20:46 localhost ovn_metadata_agent[158815]: 2025-12-05 10:20:46.106 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:20:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:20:47 localhost nova_compute[280228]: 2025-12-05 10:20:47.088 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v767: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 42 KiB/s wr, 3 op/s Dec 5 05:20:49 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6bd85ed5-06af-4874-ad2a-45b58df56273", "format": "json"}]: dispatch Dec 5 05:20:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6bd85ed5-06af-4874-ad2a-45b58df56273, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:20:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6bd85ed5-06af-4874-ad2a-45b58df56273, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:20:49 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6bd85ed5-06af-4874-ad2a-45b58df56273' of type subvolume Dec 5 05:20:49 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:20:49.013+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6bd85ed5-06af-4874-ad2a-45b58df56273' of type subvolume Dec 5 05:20:49 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6bd85ed5-06af-4874-ad2a-45b58df56273", "force": true, "format": "json"}]: dispatch Dec 5 05:20:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6bd85ed5-06af-4874-ad2a-45b58df56273, vol_name:cephfs) < "" Dec 5 05:20:49 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6bd85ed5-06af-4874-ad2a-45b58df56273'' moved to trashcan Dec 5 05:20:49 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:20:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6bd85ed5-06af-4874-ad2a-45b58df56273, vol_name:cephfs) < "" Dec 5 05:20:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. 
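
Note: the audit sequence above (and the matching one at 05:20:42 for the previous share) is the OpenStack Manila CephFS driver driving the mgr volumes module through a share's whole lifecycle: create a 1 GiB namespace-isolated subvolume, resolve its path, poll "fs clone status", then remove it. The (95) EOPNOTSUPP replies are expected noise: clone-status is only defined for subvolumes of type clone, and these are plain subvolumes, so the poll fails by design. The forced remove then moves the directory to the trash and queues an async purge job rather than deleting inline. The same sequence from the ceph CLI, sketched in Python (names copied from the log; error handling omitted):

    import subprocess

    def ceph(*args):
        # each call corresponds to one of the dispatched mgr commands above
        return subprocess.run(["ceph", *args], capture_output=True, text=True)

    sub = "6bd85ed5-06af-4874-ad2a-45b58df56273"  # sub_name from the audit log
    ceph("fs", "subvolume", "create", "cephfs", sub, "1073741824",
         "--namespace-isolated", "--mode", "0755")
    path = ceph("fs", "subvolume", "getpath", "cephfs", sub).stdout.strip()
    rc = ceph("fs", "clone", "status", "cephfs", sub).returncode
    # expect rc 95 (EOPNOTSUPP) for a non-clone subvolume, as logged above
    ceph("fs", "subvolume", "rm", "cephfs", sub, "--force")
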
Dec 5 05:20:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:20:49 localhost podman[326633]: 2025-12-05 10:20:49.210424001 +0000 UTC m=+0.086424971 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7) Dec 5 05:20:49 localhost podman[326633]: 2025-12-05 10:20:49.247738776 +0000 UTC m=+0.123739726 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, distribution-scope=public, vendor=Red Hat, Inc.) Dec 5 05:20:49 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
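
Note: the pattern above repeats for every monitored container on this host: systemd starts a transient <container-id>.service that runs "podman healthcheck run <id>", podman records a health_status event (healthy here) followed by exec_died for the check process, and the transient unit deactivates. The check command itself comes from the healthcheck.test entry in the container's config_data. Running one check by hand, sketched:

    import subprocess

    # container ID of openstack_network_exporter, copied from the log above
    cid = "d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0"

    # executes the container's configured healthcheck once; exit status 0
    # corresponds to the health_status=healthy event journald recorded
    rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
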
Dec 5 05:20:49 localhost podman[326632]: 2025-12-05 10:20:49.32742488 +0000 UTC m=+0.207544746 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:20:49 localhost podman[326632]: 2025-12-05 10:20:49.341687227 +0000 UTC m=+0.221807093 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 5 05:20:49 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:20:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v768: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 34 KiB/s wr, 2 op/s Dec 5 05:20:49 localhost nova_compute[280228]: 2025-12-05 10:20:49.806 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:49 localhost podman[239519]: time="2025-12-05T10:20:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:20:49 localhost podman[239519]: @ - - [05/Dec/2025:10:20:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:20:49 localhost podman[239519]: @ - - [05/Dec/2025:10:20:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19271 "" "Go-http-client/1.1" Dec 5 05:20:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:20:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v769: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 58 KiB/s wr, 3 op/s Dec 5 05:20:52 localhost nova_compute[280228]: 2025-12-05 10:20:52.118 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v770: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s Dec 5 05:20:54 localhost nova_compute[280228]: 2025-12-05 10:20:54.810 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v771: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s Dec 5 05:20:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:20:57 localhost nova_compute[280228]: 2025-12-05 10:20:57.159 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:20:57 localhost openstack_network_exporter[241668]: ERROR 10:20:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:20:57 localhost openstack_network_exporter[241668]: ERROR 10:20:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:20:57 localhost openstack_network_exporter[241668]: ERROR 10:20:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:20:57 localhost openstack_network_exporter[241668]: ERROR 10:20:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:20:57 localhost 
openstack_network_exporter[241668]: Dec 5 05:20:57 localhost openstack_network_exporter[241668]: ERROR 10:20:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:20:57 localhost openstack_network_exporter[241668]: Dec 5 05:20:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v772: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 51 KiB/s wr, 3 op/s Dec 5 05:20:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v773: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 30 KiB/s wr, 1 op/s Dec 5 05:20:59 localhost nova_compute[280228]: 2025-12-05 10:20:59.812 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:21:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:21:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:21:00 localhost systemd[1]: tmp-crun.VDF1ly.mount: Deactivated successfully. Dec 5 05:21:00 localhost podman[326672]: 2025-12-05 10:21:00.220358363 +0000 UTC m=+0.106318922 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:21:00 localhost systemd[1]: tmp-crun.ZScJa5.mount: Deactivated successfully. 
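
Note: the podman[239519] request lines above ("GET /v4.9.3/libpod/containers/json..." and ".../containers/stats...") are the libpod REST API being served over /run/podman/podman.sock; the client is most likely the podman_exporter container, whose config_data sets CONTAINER_HOST=unix:///run/podman/podman.sock. The same listing using only the standard library, as a sketch:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Minimal HTTP-over-unix-socket client for the libpod API."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self._sock_path = sock_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")   # socket path from the log
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")  # endpoint as logged
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
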
Dec 5 05:21:00 localhost podman[326673]: 2025-12-05 10:21:00.267067425 +0000 UTC m=+0.147540925 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 5 05:21:00 localhost podman[326673]: 2025-12-05 10:21:00.297104407 +0000 UTC m=+0.177577907 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:21:00 localhost podman[326674]: 2025-12-05 10:21:00.308083104 +0000 UTC m=+0.185234323 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 5 05:21:00 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
Dec 5 05:21:00 localhost podman[326674]: 2025-12-05 10:21:00.315943735 +0000 UTC m=+0.193094984 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute) Dec 5 05:21:00 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:21:00 localhost podman[326672]: 2025-12-05 10:21:00.335813364 +0000 UTC m=+0.221773963 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:21:00 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
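
Note: per its config_data above, podman_exporter publishes on host port 9882 ('ports': ['9882:9882'], 'net': 'host'), so the metrics it derives from those libpod calls should be scrapeable locally. A sketch, assuming the conventional /metrics path and a podman_container_* metric prefix (neither appears in the log itself):

    from urllib.request import urlopen

    # port 9882 comes from the exporter's config_data; the path and metric
    # prefix are assumptions based on Prometheus conventions
    with urlopen("http://localhost:9882/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("podman_container_"):
                print(line)
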
Dec 5 05:21:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:21:01 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "af7aba7d-7155-4dea-94b8-6a42535f8b87", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:21:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af7aba7d-7155-4dea-94b8-6a42535f8b87, vol_name:cephfs) < "" Dec 5 05:21:01 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af7aba7d-7155-4dea-94b8-6a42535f8b87/.meta.tmp' Dec 5 05:21:01 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af7aba7d-7155-4dea-94b8-6a42535f8b87/.meta.tmp' to config b'/volumes/_nogroup/af7aba7d-7155-4dea-94b8-6a42535f8b87/.meta' Dec 5 05:21:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af7aba7d-7155-4dea-94b8-6a42535f8b87, vol_name:cephfs) < "" Dec 5 05:21:01 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af7aba7d-7155-4dea-94b8-6a42535f8b87", "format": "json"}]: dispatch Dec 5 05:21:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af7aba7d-7155-4dea-94b8-6a42535f8b87, vol_name:cephfs) < "" Dec 5 05:21:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af7aba7d-7155-4dea-94b8-6a42535f8b87, vol_name:cephfs) < "" Dec 5 05:21:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:21:01 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:21:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v774: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 36 KiB/s wr, 2 op/s Dec 5 05:21:02 localhost nova_compute[280228]: 2025-12-05 10:21:02.208 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v775: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 12 KiB/s wr, 1 op/s Dec 5 05:21:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:21:03.927 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:21:03 localhost ovn_metadata_agent[158815]: 2025-12-05 
10:21:03.927 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:21:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:21:03.928 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:21:04 localhost nova_compute[280228]: 2025-12-05 10:21:04.818 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:04 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8bdb3abd-9cce-44c0-b84e-bfddf34d9553", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:21:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8bdb3abd-9cce-44c0-b84e-bfddf34d9553, vol_name:cephfs) < "" Dec 5 05:21:04 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8bdb3abd-9cce-44c0-b84e-bfddf34d9553/.meta.tmp' Dec 5 05:21:04 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8bdb3abd-9cce-44c0-b84e-bfddf34d9553/.meta.tmp' to config b'/volumes/_nogroup/8bdb3abd-9cce-44c0-b84e-bfddf34d9553/.meta' Dec 5 05:21:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8bdb3abd-9cce-44c0-b84e-bfddf34d9553, vol_name:cephfs) < "" Dec 5 05:21:04 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8bdb3abd-9cce-44c0-b84e-bfddf34d9553", "format": "json"}]: dispatch Dec 5 05:21:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8bdb3abd-9cce-44c0-b84e-bfddf34d9553, vol_name:cephfs) < "" Dec 5 05:21:04 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8bdb3abd-9cce-44c0-b84e-bfddf34d9553, vol_name:cephfs) < "" Dec 5 05:21:04 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:21:04 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:21:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v776: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 12 KiB/s wr, 1 op/s Dec 5 05:21:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 
348127232 kv_alloc: 318767104 Dec 5 05:21:07 localhost nova_compute[280228]: 2025-12-05 10:21:07.255 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v777: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 33 KiB/s wr, 2 op/s Dec 5 05:21:08 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f5a657ad-5944-498a-8a0f-804036d0f99b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:21:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f5a657ad-5944-498a-8a0f-804036d0f99b, vol_name:cephfs) < "" Dec 5 05:21:08 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f5a657ad-5944-498a-8a0f-804036d0f99b/.meta.tmp' Dec 5 05:21:08 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f5a657ad-5944-498a-8a0f-804036d0f99b/.meta.tmp' to config b'/volumes/_nogroup/f5a657ad-5944-498a-8a0f-804036d0f99b/.meta' Dec 5 05:21:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f5a657ad-5944-498a-8a0f-804036d0f99b, vol_name:cephfs) < "" Dec 5 05:21:08 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f5a657ad-5944-498a-8a0f-804036d0f99b", "format": "json"}]: dispatch Dec 5 05:21:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f5a657ad-5944-498a-8a0f-804036d0f99b, vol_name:cephfs) < "" Dec 5 05:21:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f5a657ad-5944-498a-8a0f-804036d0f99b, vol_name:cephfs) < "" Dec 5 05:21:08 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:21:08 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:21:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:21:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
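
Note: each subvolume operation above is bracketed by the same mon_command from client.15726: {"prefix": "mon dump", "format": "json"}. That is the client library refreshing the monitor map (epoch e15 here) before talking to the cluster, which is why the leader logs one dispatch per Manila request. The equivalent query, sketched:

    import json, subprocess

    # the same command the audit log shows client.openstack dispatching
    out = subprocess.run(["ceph", "mon", "dump", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    monmap = json.loads(out)
    print(monmap["epoch"], [m["name"] for m in monmap["mons"]])
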
Dec 5 05:21:09 localhost podman[326736]: 2025-12-05 10:21:09.204981598 +0000 UTC m=+0.083850813 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:21:09 localhost podman[326736]: 2025-12-05 10:21:09.214800949 +0000 UTC m=+0.093670214 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:21:09 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
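
Note: the node_exporter command line above disables most collectors and keeps systemd metrics only for units matching --collector.systemd.unit-include. After removing one level of JSON escaping, the filter is (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service. Checking which units pass, sketched (the unit names are illustrative, and fullmatch mirrors the exporter anchoring the pattern, which is an assumption):

    import re

    # pattern copied from the node_exporter flags above, unescaped once
    include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    units = ["ovs-vswitchd.service", "openvswitch.service", "virtqemud.service",
             "rsyslog.service", "sshd.service", "chronyd.service"]
    for unit in units:
        print(unit, "exported" if include.fullmatch(unit) else "skipped")
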
Dec 5 05:21:09 localhost podman[326735]: 2025-12-05 10:21:09.307946966 +0000 UTC m=+0.188623246 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 5 05:21:09 localhost podman[326735]: 2025-12-05 10:21:09.371909407 +0000 UTC m=+0.252585727 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125) Dec 5 05:21:09 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
Dec 5 05:21:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v778: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 28 KiB/s wr, 1 op/s Dec 5 05:21:09 localhost nova_compute[280228]: 2025-12-05 10:21:09.853 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:21:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:21:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, vol_name:cephfs) < "" Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0. Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.365331) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70 Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930071365394, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2440, "num_deletes": 263, "total_data_size": 2157479, "memory_usage": 2204904, "flush_reason": "Manual Compaction"} Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930071379142, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 2096414, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38956, "largest_seqno": 41395, "table_properties": {"data_size": 2086346, "index_size": 6193, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 24560, "raw_average_key_size": 21, "raw_value_size": 2064792, "raw_average_value_size": 1846, "num_data_blocks": 269, "num_entries": 1118, "num_filter_entries": 1118, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764929937, "oldest_key_time": 1764929937, "file_creation_time": 1764930071, 
"slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}} Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 13858 microseconds, and 6017 cpu microseconds. Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.379189) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 2096414 bytes OK Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.379214) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.381494) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.381520) EVENT_LOG_v1 {"time_micros": 1764930071381512, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.381542) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2146736, prev total WAL file size 2146736, number of live WAL files 2. Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.382460) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. 
'7061786F73003133333034' seq:0, type:0; will stop at (end) Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(2047KB)], [69(18MB)] Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930071382507, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 21096865, "oldest_snapshot_seqno": -1} Dec 5 05:21:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab/.meta.tmp' Dec 5 05:21:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab/.meta.tmp' to config b'/volumes/_nogroup/163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab/.meta' Dec 5 05:21:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, vol_name:cephfs) < "" Dec 5 05:21:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "format": "json"}]: dispatch Dec 5 05:21:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, vol_name:cephfs) < "" Dec 5 05:21:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, vol_name:cephfs) < "" Dec 5 05:21:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:21:11 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 14592 keys, 19714189 bytes, temperature: kUnknown Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930071502179, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 19714189, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19627028, "index_size": 49620, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36485, "raw_key_size": 387913, "raw_average_key_size": 26, "raw_value_size": 19375542, "raw_average_value_size": 1327, "num_data_blocks": 1874, "num_entries": 14592, "num_filter_entries": 14592, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, 
"comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764930071, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}} Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.502670) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 19714189 bytes Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.504632) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.1 rd, 164.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 18.1 +0.0 blob) out(18.8 +0.0 blob), read-write-amplify(19.5) write-amplify(9.4) OK, records in: 15133, records dropped: 541 output_compression: NoCompression Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.504663) EVENT_LOG_v1 {"time_micros": 1764930071504649, "job": 42, "event": "compaction_finished", "compaction_time_micros": 119825, "compaction_time_cpu_micros": 53886, "output_level": 6, "num_output_files": 1, "total_output_size": 19714189, "num_input_records": 15133, "num_output_records": 14592, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930071505183, "job": 42, "event": "table_file_deletion", "file_number": 71} Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930071508376, "job": 42, "event": "table_file_deletion", "file_number": 69} Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.382374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.508478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.508487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.508490) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.508493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:21:11 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:11.508495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:21:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v779: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 54 KiB/s wr, 3 op/s Dec 5 05:21:12 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "53bd909c-da2c-4390-a803-9471f6d51aba", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:21:12 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:53bd909c-da2c-4390-a803-9471f6d51aba, vol_name:cephfs) < "" Dec 5 05:21:12 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/53bd909c-da2c-4390-a803-9471f6d51aba/.meta.tmp' Dec 5 05:21:12 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/53bd909c-da2c-4390-a803-9471f6d51aba/.meta.tmp' to config b'/volumes/_nogroup/53bd909c-da2c-4390-a803-9471f6d51aba/.meta' Dec 5 05:21:12 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:53bd909c-da2c-4390-a803-9471f6d51aba, vol_name:cephfs) < "" Dec 5 05:21:12 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "53bd909c-da2c-4390-a803-9471f6d51aba", "format": "json"}]: dispatch Dec 5 05:21:12 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:53bd909c-da2c-4390-a803-9471f6d51aba, vol_name:cephfs) < "" Dec 5 05:21:12 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:53bd909c-da2c-4390-a803-9471f6d51aba, vol_name:cephfs) < "" Dec 5 05:21:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:21:12 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:21:12 localhost nova_compute[280228]: 2025-12-05 10:21:12.286 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v780: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 48 KiB/s wr, 2 op/s Dec 5 05:21:14 localhost nova_compute[280228]: 2025-12-05 10:21:14.879 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "snap_name": "7a976cff-8d23-40b2-8d65-cb0f4826218e", "format": "json"}]: dispatch Dec 5 05:21:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7a976cff-8d23-40b2-8d65-cb0f4826218e, sub_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, vol_name:cephfs) < "" Dec 5 05:21:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7a976cff-8d23-40b2-8d65-cb0f4826218e, sub_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, vol_name:cephfs) < "" Dec 5 05:21:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:21:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:21:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:21:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:21:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 48 KiB/s wr, 2 op/s Dec 5 05:21:15 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "53bd909c-da2c-4390-a803-9471f6d51aba", "format": "json"}]: dispatch Dec 5 05:21:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:53bd909c-da2c-4390-a803-9471f6d51aba, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:21:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:53bd909c-da2c-4390-a803-9471f6d51aba, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:21:15 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:21:15.943+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '53bd909c-da2c-4390-a803-9471f6d51aba' of type subvolume Dec 5 05:21:15 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '53bd909c-da2c-4390-a803-9471f6d51aba' of type subvolume Dec 5 05:21:15 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "53bd909c-da2c-4390-a803-9471f6d51aba", "force": true, "format": "json"}]: dispatch Dec 5 05:21:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:53bd909c-da2c-4390-a803-9471f6d51aba, vol_name:cephfs) < "" Dec 5 05:21:15 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/53bd909c-da2c-4390-a803-9471f6d51aba'' moved to trashcan Dec 5 05:21:15 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:21:15 localhost 
ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:53bd909c-da2c-4390-a803-9471f6d51aba, vol_name:cephfs) < "" Dec 5 05:21:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:21:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:21:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:21:17 localhost nova_compute[280228]: 2025-12-05 10:21:17.328 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v782: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 86 KiB/s wr, 4 op/s Dec 5 05:21:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "snap_name": "7a976cff-8d23-40b2-8d65-cb0f4826218e_16c0bfc8-b2b3-41ef-8ae1-a3a466e5482d", "force": true, "format": "json"}]: dispatch Dec 5 05:21:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a976cff-8d23-40b2-8d65-cb0f4826218e_16c0bfc8-b2b3-41ef-8ae1-a3a466e5482d, sub_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, vol_name:cephfs) < "" Dec 5 05:21:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab/.meta.tmp' Dec 5 05:21:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab/.meta.tmp' to config b'/volumes/_nogroup/163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab/.meta' Dec 5 05:21:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a976cff-8d23-40b2-8d65-cb0f4826218e_16c0bfc8-b2b3-41ef-8ae1-a3a466e5482d, sub_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, vol_name:cephfs) < "" Dec 5 05:21:19 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "snap_name": "7a976cff-8d23-40b2-8d65-cb0f4826218e", "force": true, "format": "json"}]: dispatch Dec 5 05:21:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a976cff-8d23-40b2-8d65-cb0f4826218e, sub_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, vol_name:cephfs) < "" Dec 5 05:21:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab/.meta.tmp' Dec 5 05:21:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab/.meta.tmp' to config b'/volumes/_nogroup/163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab/.meta' Dec 5 05:21:19 
localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a976cff-8d23-40b2-8d65-cb0f4826218e, sub_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, vol_name:cephfs) < "" Dec 5 05:21:19 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f5a657ad-5944-498a-8a0f-804036d0f99b", "format": "json"}]: dispatch Dec 5 05:21:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f5a657ad-5944-498a-8a0f-804036d0f99b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:21:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f5a657ad-5944-498a-8a0f-804036d0f99b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:21:19 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:21:19.223+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f5a657ad-5944-498a-8a0f-804036d0f99b' of type subvolume Dec 5 05:21:19 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f5a657ad-5944-498a-8a0f-804036d0f99b' of type subvolume Dec 5 05:21:19 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f5a657ad-5944-498a-8a0f-804036d0f99b", "force": true, "format": "json"}]: dispatch Dec 5 05:21:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f5a657ad-5944-498a-8a0f-804036d0f99b, vol_name:cephfs) < "" Dec 5 05:21:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f5a657ad-5944-498a-8a0f-804036d0f99b'' moved to trashcan Dec 5 05:21:19 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:21:19 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f5a657ad-5944-498a-8a0f-804036d0f99b, vol_name:cephfs) < "" Dec 5 05:21:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v783: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s wr, 2 op/s Dec 5 05:21:19 localhost podman[239519]: time="2025-12-05T10:21:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:21:19 localhost podman[239519]: @ - - [05/Dec/2025:10:21:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:21:19 localhost nova_compute[280228]: 2025-12-05 10:21:19.934 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:19 localhost podman[239519]: @ - - [05/Dec/2025:10:21:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19283 "" "Go-http-client/1.1" Dec 5 05:21:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:21:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:21:20 localhost systemd[1]: tmp-crun.Tm5ZTa.mount: Deactivated successfully. Dec 5 05:21:20 localhost podman[326785]: 2025-12-05 10:21:20.196660661 +0000 UTC m=+0.083546452 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:21:20 localhost podman[326785]: 2025-12-05 10:21:20.234158792 +0000 UTC m=+0.121044583 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 05:21:20 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:21:20 localhost podman[326786]: 2025-12-05 10:21:20.251482273 +0000 UTC m=+0.134875627 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container) Dec 5 05:21:20 localhost podman[326786]: 2025-12-05 10:21:20.293740719 +0000 UTC m=+0.177134053 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, distribution-scope=public, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container) Dec 5 05:21:20 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
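
[Note] The RocksDB EVENT_LOG_v1 records that ceph-mon emits above (compaction_started, table_file_creation, compaction_finished, table_file_deletion for JOB 42) are single-line JSON payloads prefixed by the literal token EVENT_LOG_v1, which makes a capture like this easy to mine for compaction statistics. A minimal sketch, assuming one event record per journal line as in the original output; the field names used (event, job, num_input_records, num_output_records, total_output_size, output_level) are taken directly from the records above:

    import json
    import re
    import sys

    # RocksDB event records in the journal above look like:
    #   ... rocksdb: EVENT_LOG_v1 {"time_micros": ..., "job": 42, "event": "compaction_finished", ...}
    # One JSON object per journal record is assumed, as in the original output.
    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def iter_rocksdb_events(lines):
        """Yield every EVENT_LOG_v1 payload as a dict; ignore other lines."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    if __name__ == "__main__":
        # Summarize finished compactions from a saved capture given on argv[1].
        for ev in iter_rocksdb_events(open(sys.argv[1], errors="replace")):
            if ev.get("event") == "compaction_finished":
                print(f"job {ev['job']}: {ev['num_input_records']} in, "
                      f"{ev['num_output_records']} out, "
                      f"{ev['total_output_size']} bytes to L{ev['output_level']}")

Run over this capture it reduces JOB 42 to 15133 records in, 14592 out, 19714189 bytes to L6 — the same numbers as the human-readable "Compacted 1@0 + 1@6 files" summary ceph-mon prints alongside the event.
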
Dec 5 05:21:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:21:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:21:21 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:21:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:21:21 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:21:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:21:21 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:21:21 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev e7092ff2-4c9a-4b92-930d-baa0c694c7f6 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:21:21 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev e7092ff2-4c9a-4b92-930d-baa0c694c7f6 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:21:21 localhost ceph-mgr[286454]: [progress INFO root] Completed event e7092ff2-4c9a-4b92-930d-baa0c694c7f6 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:21:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:21:21 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:21:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v784: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 93 KiB/s wr, 4 op/s Dec 5 05:21:22 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e284 do_prune osdmap full prune enabled Dec 5 05:21:22 localhost nova_compute[280228]: 2025-12-05 10:21:22.371 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:22 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:21:22 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:21:22 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e285 e285: 6 total, 6 up, 6 in Dec 5 05:21:22 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e285: 6 total, 6 up, 6 in Dec 5 05:21:22 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8bdb3abd-9cce-44c0-b84e-bfddf34d9553", "format": "json"}]: dispatch Dec 5 05:21:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:8bdb3abd-9cce-44c0-b84e-bfddf34d9553, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:21:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8bdb3abd-9cce-44c0-b84e-bfddf34d9553, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:21:22 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:21:22.826+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8bdb3abd-9cce-44c0-b84e-bfddf34d9553' of type subvolume Dec 5 05:21:22 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8bdb3abd-9cce-44c0-b84e-bfddf34d9553' of type subvolume Dec 5 05:21:22 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8bdb3abd-9cce-44c0-b84e-bfddf34d9553", "force": true, "format": "json"}]: dispatch Dec 5 05:21:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8bdb3abd-9cce-44c0-b84e-bfddf34d9553, vol_name:cephfs) < "" Dec 5 05:21:22 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8bdb3abd-9cce-44c0-b84e-bfddf34d9553'' moved to trashcan Dec 5 05:21:22 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:21:22 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8bdb3abd-9cce-44c0-b84e-bfddf34d9553, vol_name:cephfs) < "" Dec 5 05:21:23 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "format": "json"}]: dispatch Dec 5 05:21:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:21:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:21:23 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab' of type subvolume Dec 5 05:21:23 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:21:23.286+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab' of type subvolume Dec 5 05:21:23 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab", "force": true, "format": "json"}]: dispatch Dec 5 05:21:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, 
vol_name:cephfs) < "" Dec 5 05:21:23 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab'' moved to trashcan Dec 5 05:21:23 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:21:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:163caa8a-70a5-4e3b-b6d5-ca1ae139f3ab, vol_name:cephfs) < "" Dec 5 05:21:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v786: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 80 KiB/s wr, 4 op/s Dec 5 05:21:24 localhost nova_compute[280228]: 2025-12-05 10:21:24.970 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v787: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 80 KiB/s wr, 4 op/s Dec 5 05:21:25 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:21:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:21:25 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:21:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af7aba7d-7155-4dea-94b8-6a42535f8b87", "format": "json"}]: dispatch Dec 5 05:21:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:af7aba7d-7155-4dea-94b8-6a42535f8b87, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:21:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:af7aba7d-7155-4dea-94b8-6a42535f8b87, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:21:26 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:21:26.227+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'af7aba7d-7155-4dea-94b8-6a42535f8b87' of type subvolume Dec 5 05:21:26 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'af7aba7d-7155-4dea-94b8-6a42535f8b87' of type subvolume Dec 5 05:21:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af7aba7d-7155-4dea-94b8-6a42535f8b87", "force": true, "format": "json"}]: dispatch Dec 5 05:21:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af7aba7d-7155-4dea-94b8-6a42535f8b87, vol_name:cephfs) < "" Dec 5 05:21:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/af7aba7d-7155-4dea-94b8-6a42535f8b87'' moved to trashcan Dec 5 05:21:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Dec 5 05:21:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af7aba7d-7155-4dea-94b8-6a42535f8b87, vol_name:cephfs) < "" Dec 5 05:21:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:21:26 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:21:27 localhost openstack_network_exporter[241668]: ERROR 10:21:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:21:27 localhost openstack_network_exporter[241668]: ERROR 10:21:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:21:27 localhost openstack_network_exporter[241668]: ERROR 10:21:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:21:27 localhost openstack_network_exporter[241668]: ERROR 10:21:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:21:27 localhost openstack_network_exporter[241668]: Dec 5 05:21:27 localhost openstack_network_exporter[241668]: ERROR 10:21:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:21:27 localhost openstack_network_exporter[241668]: Dec 5 05:21:27 localhost nova_compute[280228]: 2025-12-05 10:21:27.396 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v788: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 80 KiB/s wr, 5 op/s Dec 5 05:21:28 localhost nova_compute[280228]: 2025-12-05 10:21:28.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:21:28 localhost nova_compute[280228]: 2025-12-05 10:21:28.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:21:28 localhost nova_compute[280228]: 2025-12-05 10:21:28.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:21:29 localhost nova_compute[280228]: 2025-12-05 10:21:29.101 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:21:29 localhost nova_compute[280228]: 2025-12-05 10:21:29.102 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 
0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:21:29 localhost nova_compute[280228]: 2025-12-05 10:21:29.103 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:21:29 localhost nova_compute[280228]: 2025-12-05 10:21:29.103 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:21:29 localhost nova_compute[280228]: 2025-12-05 10:21:29.104 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:21:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:21:29 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/4107184619' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:21:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v789: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 80 KiB/s wr, 5 op/s Dec 5 05:21:29 localhost nova_compute[280228]: 2025-12-05 10:21:29.576 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:21:30 localhost nova_compute[280228]: 2025-12-05 10:21:30.005 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:21:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:21:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:21:31 localhost systemd[1]: tmp-crun.39R4a0.mount: Deactivated successfully. 
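
[Note] The "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" command that nova-compute shells out to above (via oslo_concurrency.processutils, taking ~0.4-0.5s per call here) returns a JSON document whose cluster-wide totals feed the hypervisor disk accounting seen later in the resource tracker output. A minimal stdlib sketch of the same call; the stats.total_bytes / stats.total_avail_bytes keys are an assumption about the ceph df JSON schema on recent Ceph releases, hedged in the comment:

    import json
    import subprocess

    # The exact command nova-compute runs in the records above.
    CMD = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]

    def ceph_cluster_capacity_gib():
        """Return (total_gib, avail_gib) from the ceph df cluster-wide stats."""
        out = subprocess.run(CMD, check=True, capture_output=True, text=True).stdout
        stats = json.loads(out)["stats"]  # assumed keys; present on recent Ceph releases
        gib = 1024 ** 3
        return stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib

    if __name__ == "__main__":
        total, avail = ceph_cluster_capacity_gib()
        print(f"{avail:.1f} GiB free of {total:.1f} GiB")  # cf. "41 GiB / 42 GiB avail" in the pgmap lines

Against this cluster the output would track the "41 GiB / 42 GiB avail" figure the ceph-mgr pgmap lines report.
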
Dec 5 05:21:31 localhost podman[326931]: 2025-12-05 10:21:31.206774781 +0000 UTC m=+0.082801770 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 05:21:31 localhost podman[326931]: 2025-12-05 10:21:31.24749252 +0000 UTC m=+0.123519499 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:21:31 localhost podman[326932]: 2025-12-05 10:21:31.298209385 +0000 UTC m=+0.172287904 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Dec 5 05:21:31 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 05:21:31 localhost podman[326933]: 2025-12-05 10:21:31.322507091 +0000 UTC m=+0.193829816 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:21:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:21:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e285 do_prune osdmap full prune enabled Dec 5 05:21:31 localhost podman[326932]: 2025-12-05 10:21:31.332790796 +0000 UTC m=+0.206869265 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 5 05:21:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e286 e286: 6 total, 6 up, 6 in Dec 5 05:21:31 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e286: 6 total, 6 up, 6 in Dec 5 05:21:31 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:21:31 localhost podman[326933]: 2025-12-05 10:21:31.359752483 +0000 UTC m=+0.231075188 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm) Dec 5 05:21:31 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
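
[Note] Each "podman healthcheck run <id>" transient unit above ends in a "container health_status" event whose key=value annotations carry the container name and outcome, so a capture like this can be reduced to a per-container health summary with two regexes. A minimal sketch, assuming the events keep the "(image=..., name=<container>, ..., health_status=<status>" shape seen above; the choice to read the name= that immediately follows image= is an assumption made to skip later name= labels (e.g. name=ubi9-minimal on the UBI image metadata):

    import re
    import sys
    from collections import Counter

    # Podman event lines in this capture have the shape:
    #   ... container health_status <id> (image=..., name=<container>, health_status=healthy, ...)
    # The name= immediately after image= is assumed to be the container name;
    # later name= labels (e.g. name=ubi9-minimal on the UBI image) are ignored.
    NAME_RE = re.compile(r"\(image=[^,]*, name=([\w.-]+)")
    STATUS_RE = re.compile(r"\bhealth_status=(\w+)")

    def summarize_healthchecks(lines):
        """Count health_status outcomes per container name."""
        counts = Counter()
        for line in lines:
            if "container health_status" not in line:
                continue
            name = NAME_RE.search(line)
            status = STATUS_RE.search(line)
            if name and status:
                counts[(name.group(1), status.group(1))] += 1
        return counts

    if __name__ == "__main__":
        summary = summarize_healthchecks(open(sys.argv[1], errors="replace"))
        for (name, status), n in sorted(summary.items()):
            print(f"{name}: {status} x{n}")

Over the events above this yields healthy results for multipathd, openstack_network_exporter, podman_exporter, ovn_metadata_agent, and ceilometer_agent_compute, matching the health_status=healthy annotations before each unit's "Deactivated successfully".
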
Dec 5 05:21:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v791: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 892 B/s rd, 65 KiB/s wr, 5 op/s Dec 5 05:21:32 localhost nova_compute[280228]: 2025-12-05 10:21:32.428 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v792: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 60 KiB/s wr, 4 op/s Dec 5 05:21:33 localhost nova_compute[280228]: 2025-12-05 10:21:33.605 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:21:33 localhost nova_compute[280228]: 2025-12-05 10:21:33.606 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:21:33 localhost nova_compute[280228]: 2025-12-05 10:21:33.828 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:21:33 localhost nova_compute[280228]: 2025-12-05 10:21:33.829 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11030MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", 
"numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:21:33 localhost nova_compute[280228]: 2025-12-05 10:21:33.830 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:21:33 localhost nova_compute[280228]: 2025-12-05 10:21:33.830 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:21:35 localhost nova_compute[280228]: 2025-12-05 10:21:35.051 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v793: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 60 KiB/s wr, 4 op/s Dec 5 05:21:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:21:37 localhost nova_compute[280228]: 2025-12-05 10:21:37.472 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v794: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 21 KiB/s wr, 1 op/s Dec 5 05:21:37 localhost nova_compute[280228]: 2025-12-05 10:21:37.729 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:21:37 localhost nova_compute[280228]: 2025-12-05 10:21:37.730 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:21:37 localhost nova_compute[280228]: 2025-12-05 10:21:37.730 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:21:37 localhost nova_compute[280228]: 2025-12-05 10:21:37.799 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:21:38 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:21:38 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3997336507' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:21:38 localhost nova_compute[280228]: 2025-12-05 10:21:38.226 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:21:38 localhost nova_compute[280228]: 2025-12-05 10:21:38.232 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:21:38 localhost nova_compute[280228]: 2025-12-05 10:21:38.267 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:21:38 localhost nova_compute[280228]: 2025-12-05 10:21:38.269 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:21:38 localhost nova_compute[280228]: 2025-12-05 10:21:38.270 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" 
"released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:21:39 localhost nova_compute[280228]: 2025-12-05 10:21:39.266 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:21:39 localhost nova_compute[280228]: 2025-12-05 10:21:39.266 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:21:39 localhost nova_compute[280228]: 2025-12-05 10:21:39.440 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:21:39 localhost nova_compute[280228]: 2025-12-05 10:21:39.441 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:21:39 localhost nova_compute[280228]: 2025-12-05 10:21:39.441 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:21:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v795: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 21 KiB/s wr, 1 op/s Dec 5 05:21:39 localhost nova_compute[280228]: 2025-12-05 10:21:39.878 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:21:39 localhost nova_compute[280228]: 2025-12-05 10:21:39.878 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:21:39 localhost nova_compute[280228]: 2025-12-05 10:21:39.878 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:21:39 localhost nova_compute[280228]: 2025-12-05 10:21:39.879 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:21:40 localhost nova_compute[280228]: 2025-12-05 10:21:40.094 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:40 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:21:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:21:40 localhost podman[327009]: 2025-12-05 10:21:40.210175866 +0000 UTC m=+0.091222369 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 05:21:40 localhost systemd[1]: tmp-crun.0m8wWP.mount: Deactivated successfully. 
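The scheduler report entry a bit above (10:21:38) logs the provider inventory for 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 together with its allocation ratios. Under placement's usual capacity rule, capacity = (total - reserved) * allocation_ratio per resource class; a minimal sketch recomputing the schedulable capacity from the logged dict (values copied from the log; treating this as the effective placement rule is an assumption of this note, not something the log states):

    # Inventory as logged for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3.
    inventory = {
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 41,    'reserved': 1,   'allocation_ratio': 1.0},
    }
    for rc, inv in inventory.items():
        # Assumed placement rule: usable capacity scales with the allocation ratio.
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: schedulable capacity = {capacity:g}")
    # -> VCPU: 128, MEMORY_MB: 15226, DISK_GB: 40

So the 16.0 VCPU allocation ratio lets this 8-vCPU host (Final resource view above: total_vcpus=8, used_vcpus=1) overcommit up to 128 VCPUs before placement refuses allocations.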
Dec 5 05:21:40 localhost podman[327010]: 2025-12-05 10:21:40.271071573 +0000 UTC m=+0.147995290 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:21:40 localhost podman[327009]: 2025-12-05 10:21:40.277929764 +0000 UTC m=+0.158976247 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 5 05:21:40 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
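The node_exporter config_data in the entries above records the exact systemd collector filter passed on the command line. A quick sketch (stdlib only) of which unit names that --collector.systemd.unit-include regex keeps; the sample unit names are illustrative, not taken from this host, and the ^...$ anchoring is assumed from node_exporter's usual handling of include patterns:

    import re

    # Regex copied verbatim from the logged node_exporter command line.
    unit_include = re.compile(r"^(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service$")

    samples = [
        "edpm_nova_compute.service",  # hypothetical unit names, illustration only
        "ovsdb-server.service",
        "virtqemud.service",
        "rsyslog.service",
        "sshd.service",               # expected to be filtered out
    ]
    for unit in samples:
        print(unit, "->", bool(unit_include.match(unit)))

Only edpm_*, ovs*/openvswitch, virt* and rsyslog units pass the filter, which is why those are the services this exporter reports on.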
Dec 5 05:21:40 localhost podman[327010]: 2025-12-05 10:21:40.335229001 +0000 UTC m=+0.212152668 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:21:40 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:21:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0. 
Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.346800) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73 Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930101346851, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 630, "num_deletes": 258, "total_data_size": 418041, "memory_usage": 431128, "flush_reason": "Manual Compaction"} Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930101353415, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 410462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41396, "largest_seqno": 42025, "table_properties": {"data_size": 407205, "index_size": 1112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8076, "raw_average_key_size": 19, "raw_value_size": 400310, "raw_average_value_size": 959, "num_data_blocks": 49, "num_entries": 417, "num_filter_entries": 417, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930072, "oldest_key_time": 1764930072, "file_creation_time": 1764930101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}} Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 6667 microseconds, and 2078 cpu microseconds. Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
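The rocksdb lines above carry machine-readable payloads after the EVENT_LOG_v1 marker (flush_started, table_file_creation, and so on), one JSON object per line. A minimal extraction sketch, stdlib only, with the sample line abbreviated from the flush_started entry above:

    import json
    import re

    # EVENT_LOG_v1 payloads sit after the marker as a single JSON object.
    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    line = ('Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 '
            '{"time_micros": 1764930101346851, "job": 43, "event": "flush_started", '
            '"num_entries": 630, "num_deletes": 258, "total_data_size": 418041, '
            '"flush_reason": "Manual Compaction"}')

    m = EVENT_RE.search(line)
    if m:
        event = json.loads(m.group(1))
        print(event["event"], event["job"], event["total_data_size"])
    # -> flush_started 43 418041

Feeding every journal line through the same regex yields a queryable stream of flush and compaction events without touching the monitor's store.db.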
Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.353468) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 410462 bytes OK Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.353496) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.355657) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.355677) EVENT_LOG_v1 {"time_micros": 1764930101355671, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.355697) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 414573, prev total WAL file size 414897, number of live WAL files 2. Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.356218) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353137' seq:72057594037927935, type:22 .. '6C6F676D0034373730' seq:0, type:0; will stop at (end) Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(400KB)], [72(18MB)] Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930101356281, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 20124651, "oldest_snapshot_seqno": -1} Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 14472 keys, 19990805 bytes, temperature: kUnknown Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930101472077, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 19990805, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19903366, "index_size": 50154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36229, "raw_key_size": 386525, "raw_average_key_size": 26, "raw_value_size": 19652841, "raw_average_value_size": 1357, "num_data_blocks": 1894, "num_entries": 14472, "num_filter_entries": 14472, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764930101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}} Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.472478) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 19990805 bytes Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.474446) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.7 rd, 172.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 18.8 +0.0 blob) out(19.1 +0.0 blob), read-write-amplify(97.7) write-amplify(48.7) OK, records in: 15009, records dropped: 537 output_compression: NoCompression Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.474475) EVENT_LOG_v1 {"time_micros": 1764930101474463, "job": 44, "event": "compaction_finished", "compaction_time_micros": 115883, "compaction_time_cpu_micros": 51980, "output_level": 6, "num_output_files": 1, "total_output_size": 19990805, "num_input_records": 15009, "num_output_records": 14472, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930101474667, "job": 44, "event": "table_file_deletion", "file_number": 74} Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930101477200, "job": 44, "event": "table_file_deletion", "file_number": 72} Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.356134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.477345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.477354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.477357) [db/db_impl/db_impl_compaction_flush.cc:1903] 
[default] Manual compaction starting Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.477360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:21:41 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:41.477363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:21:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v796: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 199 B/s rd, 23 KiB/s wr, 1 op/s Dec 5 05:21:41 localhost nova_compute[280228]: 2025-12-05 10:21:41.992 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:21:42 localhost nova_compute[280228]: 2025-12-05 10:21:42.230 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:21:42 localhost nova_compute[280228]: 2025-12-05 10:21:42.231 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:21:42 localhost nova_compute[280228]: 2025-12-05 10:21:42.232 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:21:42 localhost nova_compute[280228]: 2025-12-05 10:21:42.233 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:21:42 localhost nova_compute[280228]: 2025-12-05 10:21:42.233 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] 
Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:21:42 localhost nova_compute[280228]: 2025-12-05 10:21:42.233 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:21:42 localhost nova_compute[280228]: 2025-12-05 10:21:42.234 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:21:42 localhost nova_compute[280228]: 2025-12-05 10:21:42.515 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v797: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.6 KiB/s wr, 0 op/s Dec 5 05:21:45 localhost nova_compute[280228]: 2025-12-05 10:21:45.139 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:21:45 Dec 5 05:21:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:21:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:21:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['manila_data', 'volumes', 'images', 'vms', '.mgr', 'manila_metadata', 'backups'] Dec 5 05:21:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:21:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:21:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:21:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 5 05:21:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:21:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v798: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.5 KiB/s wr, 0 op/s Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:21:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00208071608040201 of space, bias 4.0, pg target 1.65625 quantized to 16 (current 16) Dec 5 05:21:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:21:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:21:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:21:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:21:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:21:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:21:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:21:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:21:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:21:45 localhost ceph-mgr[286454]: [rbd_support INFO root] 
load_schedules: backups, start_after= Dec 5 05:21:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:21:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:21:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:21:47 localhost ceph-mgr[286454]: [devicehealth INFO root] Check health Dec 5 05:21:47 localhost nova_compute[280228]: 2025-12-05 10:21:47.563 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v799: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.5 KiB/s wr, 0 op/s Dec 5 05:21:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v800: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s wr, 0 op/s Dec 5 05:21:49 localhost podman[239519]: time="2025-12-05T10:21:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:21:49 localhost podman[239519]: @ - - [05/Dec/2025:10:21:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:21:49 localhost podman[239519]: @ - - [05/Dec/2025:10:21:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19266 "" "Go-http-client/1.1" Dec 5 05:21:50 localhost nova_compute[280228]: 2025-12-05 10:21:50.172 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:21:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:21:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
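In the pg_autoscaler entries above, each pool's pg target is its share of raw capacity times its bias times a per-pool PG budget. Dividing the logged targets by ratio * bias recovers that budget; a quick consistency check against the logged pairs (numbers copied from the log; reading the ~200 figure as, e.g., 6 OSDs * mon_target_pg_per_osd 100 / replica size 3 is an assumption, not something the log states):

    # (capacity_ratio, bias, logged pg target) from the pg_autoscaler lines above.
    pools = {
        ".mgr":   (3.080724804578448e-05, 1.0, 0.006161449609156895),
        "vms":    (0.0033250017448352874, 1.0, 0.6650003489670575),
        "images": (0.004299383200725851,  1.0, 0.8584435124115949),
    }
    for name, (ratio, bias, target) in pools.items():
        print(f"{name}: implied PG budget = {target / (ratio * bias):.2f}")
    # .mgr and vms come out at exactly 200.00; images at ~199.67, so the budget
    # is only approximately constant across pools.

Each fractional target is then quantized and, as the entries show, left at the pool's current pg_num, so no resizing results.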
Dec 5 05:21:51 localhost podman[327057]: 2025-12-05 10:21:51.206701723 +0000 UTC m=+0.087746752 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350) Dec 5 05:21:51 localhost podman[327056]: 2025-12-05 10:21:51.252362113 +0000 UTC m=+0.137392174 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125) Dec 5 05:21:51 localhost podman[327056]: 2025-12-05 10:21:51.290621257 +0000 UTC m=+0.175651298 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:21:51 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:21:51 localhost podman[327057]: 2025-12-05 10:21:51.304965977 +0000 UTC m=+0.186010956 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7) Dec 5 05:21:51 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:21:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0. Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.406083) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76 Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930111406132, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 345, "num_deletes": 250, "total_data_size": 107468, "memory_usage": 114576, "flush_reason": "Manual Compaction"} Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930111409498, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 105176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42026, "largest_seqno": 42370, "table_properties": {"data_size": 103059, "index_size": 292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5909, "raw_average_key_size": 20, "raw_value_size": 98827, "raw_average_value_size": 339, "num_data_blocks": 13, "num_entries": 291, "num_filter_entries": 291, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", 
"compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930101, "oldest_key_time": 1764930101, "file_creation_time": 1764930111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}} Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 3460 microseconds, and 1094 cpu microseconds. Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.409543) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 105176 bytes OK Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.409563) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.412001) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.412022) EVENT_LOG_v1 {"time_micros": 1764930111412016, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.412041) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 105139, prev total WAL file size 105463, number of live WAL files 2. Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.413669) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323537' seq:72057594037927935, type:22 .. 
'6D6772737461740034353038' seq:0, type:0; will stop at (end) Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(102KB)], [75(19MB)] Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930111413713, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 20095981, "oldest_snapshot_seqno": -1} Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 14250 keys, 17989707 bytes, temperature: kUnknown Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930111534110, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 17989707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17908536, "index_size": 44462, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35653, "raw_key_size": 382102, "raw_average_key_size": 26, "raw_value_size": 17666657, "raw_average_value_size": 1239, "num_data_blocks": 1656, "num_entries": 14250, "num_filter_entries": 14250, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764930111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}} Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
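The job-46 summary that follows reports throughput and amplification figures, and they can be reproduced from values already logged: input_data_size 20095981 (compaction_started above), the level-0 input table #77 at 105176 bytes (its table_file_creation entry further up), the output table #78 at 17989707 bytes, and compaction_time_micros 120486 (from the compaction_finished event below). A sketch of the arithmetic, using rocksdb's definitions as understood here (an assumption; the log does not define them):

    # Values copied from the job-46 rocksdb entries in this log.
    l0_input_bytes = 105_176        # table #77, the freshly flushed level-0 file
    total_input_bytes = 20_095_981  # input_data_size from compaction_started
    output_bytes = 17_989_707       # table #78
    compaction_time_s = 120_486 / 1e6

    write_amplify = output_bytes / l0_input_bytes
    read_write_amplify = (total_input_bytes + output_bytes) / l0_input_bytes
    read_mb_per_s = total_input_bytes / compaction_time_s / 1e6

    print(f"write-amplify      {write_amplify:.1f}")       # log: 171.0
    print(f"read-write-amplify {read_write_amplify:.1f}")  # log: 362.1
    print(f"rd MB/sec          {read_mb_per_s:.1f}")       # log: 166.8

The same arithmetic reproduces job 44 earlier in the log (410462-byte L0 input, 19990805-byte output: write-amplify 48.7, read-write-amplify 97.7).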
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.534480) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 17989707 bytes
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.536119) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.8 rd, 149.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 19.1 +0.0 blob) out(17.2 +0.0 blob), read-write-amplify(362.1) write-amplify(171.0) OK, records in: 14763, records dropped: 513 output_compression: NoCompression
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.536148) EVENT_LOG_v1 {"time_micros": 1764930111536135, "job": 46, "event": "compaction_finished", "compaction_time_micros": 120486, "compaction_time_cpu_micros": 50533, "output_level": 6, "num_output_files": 1, "total_output_size": 17989707, "num_input_records": 14763, "num_output_records": 14250, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930111536322, "job": 46, "event": "table_file_deletion", "file_number": 77}
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930111538897, "job": 46, "event": "table_file_deletion", "file_number": 75}
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.413602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.539167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.539175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.539180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.539184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:21:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:21:51.539188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:21:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v801: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s wr, 0 op/s
Dec 5 05:21:51 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:21:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, vol_name:cephfs) < ""
Dec 5 05:21:51 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/489038a0-5fdb-4555-8e0a-7ee6c1d47356/.meta.tmp'
Dec 5 05:21:51 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/489038a0-5fdb-4555-8e0a-7ee6c1d47356/.meta.tmp' to config b'/volumes/_nogroup/489038a0-5fdb-4555-8e0a-7ee6c1d47356/.meta'
Dec 5 05:21:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, vol_name:cephfs) < ""
Dec 5 05:21:51 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "format": "json"}]: dispatch
Dec 5 05:21:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, vol_name:cephfs) < ""
Dec 5 05:21:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, vol_name:cephfs) < ""
Dec 5 05:21:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:21:51 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:21:52 localhost nova_compute[280228]: 2025-12-05 10:21:52.611 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:21:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v802: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 5 05:21:55 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "snap_name": "ae208c69-a56a-463a-9bcc-3888ef448123", "format": "json"}]: dispatch
Dec 5 05:21:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ae208c69-a56a-463a-9bcc-3888ef448123, sub_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, vol_name:cephfs) < ""
Dec 5 05:21:55 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ae208c69-a56a-463a-9bcc-3888ef448123, sub_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, vol_name:cephfs) < ""
Dec 5 05:21:55 localhost nova_compute[280228]: 2025-12-05 10:21:55.175 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:21:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v803: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 5 05:21:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:21:57 localhost openstack_network_exporter[241668]: ERROR 10:21:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:21:57 localhost openstack_network_exporter[241668]: ERROR 10:21:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:21:57 localhost openstack_network_exporter[241668]: ERROR 10:21:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:21:57 localhost openstack_network_exporter[241668]: ERROR 10:21:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:21:57 localhost openstack_network_exporter[241668]:
Dec 5 05:21:57 localhost openstack_network_exporter[241668]: ERROR 10:21:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:21:57 localhost openstack_network_exporter[241668]:
Dec 5 05:21:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v804: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s
Dec 5 05:21:57 localhost nova_compute[280228]: 2025-12-05 10:21:57.663 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:21:58 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "snap_name": "ae208c69-a56a-463a-9bcc-3888ef448123_5efb51d0-b619-4eae-8c1e-aa8a23f8d68c", "force": true, "format": "json"}]: dispatch
Dec 5 05:21:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ae208c69-a56a-463a-9bcc-3888ef448123_5efb51d0-b619-4eae-8c1e-aa8a23f8d68c, sub_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, vol_name:cephfs) < ""
Dec 5 05:21:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/489038a0-5fdb-4555-8e0a-7ee6c1d47356/.meta.tmp'
Dec 5 05:21:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/489038a0-5fdb-4555-8e0a-7ee6c1d47356/.meta.tmp' to config b'/volumes/_nogroup/489038a0-5fdb-4555-8e0a-7ee6c1d47356/.meta'
Dec 5 05:21:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ae208c69-a56a-463a-9bcc-3888ef448123_5efb51d0-b619-4eae-8c1e-aa8a23f8d68c, sub_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, vol_name:cephfs) < ""
Dec 5 05:21:58 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "snap_name": "ae208c69-a56a-463a-9bcc-3888ef448123", "force": true, "format": "json"}]: dispatch
Dec 5 05:21:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ae208c69-a56a-463a-9bcc-3888ef448123, sub_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, vol_name:cephfs) < ""
Dec 5 05:21:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/489038a0-5fdb-4555-8e0a-7ee6c1d47356/.meta.tmp'
Dec 5 05:21:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/489038a0-5fdb-4555-8e0a-7ee6c1d47356/.meta.tmp' to config b'/volumes/_nogroup/489038a0-5fdb-4555-8e0a-7ee6c1d47356/.meta'
Dec 5 05:21:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ae208c69-a56a-463a-9bcc-3888ef448123, sub_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, vol_name:cephfs) < ""
Dec 5 05:21:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v805: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s
Dec 5 05:22:00 localhost nova_compute[280228]: 2025-12-05 10:22:00.184 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:22:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:22:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v806: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 27 KiB/s wr, 2 op/s
Dec 5 05:22:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:22:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:22:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:22:02 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "format": "json"}]: dispatch
Dec 5 05:22:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:22:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:22:02 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '489038a0-5fdb-4555-8e0a-7ee6c1d47356' of type subvolume
Dec 5 05:22:02 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:22:02.151+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '489038a0-5fdb-4555-8e0a-7ee6c1d47356' of type subvolume
Dec 5 05:22:02 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "489038a0-5fdb-4555-8e0a-7ee6c1d47356", "force": true, "format": "json"}]: dispatch
Dec 5 05:22:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, vol_name:cephfs) < ""
Dec 5 05:22:02 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/489038a0-5fdb-4555-8e0a-7ee6c1d47356'' moved to trashcan
Dec 5 05:22:02 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:22:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:489038a0-5fdb-4555-8e0a-7ee6c1d47356, vol_name:cephfs) < ""
Dec 5 05:22:02 localhost podman[327096]: 2025-12-05 10:22:02.208680978 +0000 UTC m=+0.085535434 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 5 05:22:02 localhost podman[327096]: 2025-12-05 10:22:02.217410536 +0000 UTC m=+0.094264972 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 5 05:22:02 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
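The audit trail above is one complete share-lifecycle pass driven by client.openstack: fs subvolume create, getpath, snapshot create, snapshot rm, clone status, subvolume rm, with the removal landing in the trashcan for the async purge job. The error 95 on 'fs clone status' is expected here: the target is a plain subvolume, not a clone, so the mgr refuses the operation. A sketch of the same sequence through the ceph CLI, assuming a reachable cluster, an authorized keyring, and the names taken from the log; the ceph_fs wrapper is a hypothetical helper, not part of any library:

    import subprocess

    def ceph_fs(*args):
        # Hypothetical helper: run `ceph fs ...` and return its stdout.
        return subprocess.run(("ceph", "fs") + args, check=True,
                              capture_output=True, text=True).stdout.strip()

    sub = "489038a0-5fdb-4555-8e0a-7ee6c1d47356"   # sub_name from the audit entries
    ceph_fs("subvolume", "create", "cephfs", sub, "1073741824",
            "--namespace-isolated", "--mode", "0755")
    path = ceph_fs("subvolume", "getpath", "cephfs", sub)
    ceph_fs("subvolume", "snapshot", "create", "cephfs", sub,
            "ae208c69-a56a-463a-9bcc-3888ef448123")
    ceph_fs("subvolume", "snapshot", "rm", "cephfs", sub,
            "ae208c69-a56a-463a-9bcc-3888ef448123", "--force")
    ceph_fs("subvolume", "rm", "cephfs", sub, "--force")  # moves it to the trashcan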
Dec 5 05:22:02 localhost podman[327094]: 2025-12-05 10:22:02.185980342 +0000 UTC m=+0.071224805 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 05:22:02 localhost systemd[1]: tmp-crun.cFDvNd.mount: Deactivated successfully.
Dec 5 05:22:02 localhost podman[327095]: 2025-12-05 10:22:02.277679945 +0000 UTC m=+0.156888013 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 5 05:22:02 localhost podman[327095]: 2025-12-05 10:22:02.285597128 +0000 UTC m=+0.164805246 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 5 05:22:02 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:22:02 localhost podman[327094]: 2025-12-05 10:22:02.371838452 +0000 UTC m=+0.257082925 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 05:22:02 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:22:02 localhost nova_compute[280228]: 2025-12-05 10:22:02.698 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:22:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:22:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1945601657' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:22:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:22:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1945601657' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:22:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v807: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 27 KiB/s wr, 2 op/s
Dec 5 05:22:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:22:03.929 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:22:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:22:03.929 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:22:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:22:03.930 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:22:05 localhost nova_compute[280228]: 2025-12-05 10:22:05.217 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:22:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 05:22:05 localhost ceph-osd[31386]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 9000.1 total, 600.0 interval
Cumulative writes: 20K writes, 78K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
Cumulative WAL: 20K writes, 7183 syncs, 2.80 writes per sync, written: 0.06 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 31.70 MB, 0.05 MB/s
Interval WAL: 10K writes, 4604 syncs, 2.32 writes per sync, written: 0.03 GB, 0.05 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 05:22:05 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "634bc3f3-1d76-4f32-ac6a-ef8f376d641b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:22:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:634bc3f3-1d76-4f32-ac6a-ef8f376d641b, vol_name:cephfs) < ""
Dec 5 05:22:05 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/634bc3f3-1d76-4f32-ac6a-ef8f376d641b/.meta.tmp'
Dec 5 05:22:05 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/634bc3f3-1d76-4f32-ac6a-ef8f376d641b/.meta.tmp' to config b'/volumes/_nogroup/634bc3f3-1d76-4f32-ac6a-ef8f376d641b/.meta'
Dec 5 05:22:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:634bc3f3-1d76-4f32-ac6a-ef8f376d641b, vol_name:cephfs) < ""
Dec 5 05:22:05 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "634bc3f3-1d76-4f32-ac6a-ef8f376d641b", "format": "json"}]: dispatch
Dec 5 05:22:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:634bc3f3-1d76-4f32-ac6a-ef8f376d641b, vol_name:cephfs) < ""
Dec 5 05:22:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:634bc3f3-1d76-4f32-ac6a-ef8f376d641b, vol_name:cephfs) < ""
Dec 5 05:22:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:22:05 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:22:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v808: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 27 KiB/s wr, 2 op/s
Dec 5 05:22:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:22:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e286 do_prune osdmap full prune enabled
Dec 5 05:22:07 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e287 e287: 6 total, 6 up, 6 in
Dec 5 05:22:07 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e287: 6 total, 6 up, 6 in
Dec 5 05:22:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v810: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 61 KiB/s wr, 4 op/s
Dec 5 05:22:07 localhost nova_compute[280228]: 2025-12-05 10:22:07.736 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:22:08 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "634bc3f3-1d76-4f32-ac6a-ef8f376d641b", "format": "json"}]: dispatch
Dec 5 05:22:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:634bc3f3-1d76-4f32-ac6a-ef8f376d641b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:22:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:634bc3f3-1d76-4f32-ac6a-ef8f376d641b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:22:08 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:22:08.676+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '634bc3f3-1d76-4f32-ac6a-ef8f376d641b' of type subvolume
Dec 5 05:22:08 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '634bc3f3-1d76-4f32-ac6a-ef8f376d641b' of type subvolume
Dec 5 05:22:08 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "634bc3f3-1d76-4f32-ac6a-ef8f376d641b", "force": true, "format": "json"}]: dispatch
Dec 5 05:22:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:634bc3f3-1d76-4f32-ac6a-ef8f376d641b, vol_name:cephfs) < ""
Dec 5 05:22:08 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/634bc3f3-1d76-4f32-ac6a-ef8f376d641b'' moved to trashcan
Dec 5 05:22:08 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:22:08 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:634bc3f3-1d76-4f32-ac6a-ef8f376d641b, vol_name:cephfs) < ""
Dec 5 05:22:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 5 05:22:09 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 9000.2 total, 600.0 interval
Cumulative writes: 24K writes, 93K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.01 MB/s
Cumulative WAL: 24K writes, 8928 syncs, 2.79 writes per sync, written: 0.08 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 37.20 MB, 0.06 MB/s
Interval WAL: 11K writes, 4864 syncs, 2.38 writes per sync, written: 0.04 GB, 0.06 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 5 05:22:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v811: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 61 KiB/s wr, 4 op/s
Dec 5 05:22:10 localhost nova_compute[280228]: 2025-12-05 10:22:10.253 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:22:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.
Dec 5 05:22:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.
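The mon dump, df and osd pool get-quota entries above show the exact JSON bodies clients hand to the monitor; librados exposes the same path as mon_command, which is what the OpenStack services are using here. A minimal sketch, assuming python3-rados, /etc/ceph/ceph.conf and a client keyring are available; the df output keys are quoted from memory, so treat them as an assumption:

    import json
    import rados

    # Connects on enter, shuts down on exit.
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        # Same body as the "df" dispatch logged by ceph-mon above.
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        if ret == 0:
            df = json.loads(out)
            print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])

        # Same body as the "osd pool get-quota" dispatch above.
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota",
                        "pool": "volumes", "format": "json"}), b"")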
Dec 5 05:22:11 localhost podman[327153]: 2025-12-05 10:22:11.213430921 +0000 UTC m=+0.097831421 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 5 05:22:11 localhost podman[327153]: 2025-12-05 10:22:11.253913253 +0000 UTC m=+0.138313733 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 5 05:22:11 localhost podman[327154]: 2025-12-05 10:22:11.268207241 +0000 UTC m=+0.144663337 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 5 05:22:11 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:22:11 localhost podman[327154]: 2025-12-05 10:22:11.308856018 +0000 UTC m=+0.185312084 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 5 05:22:11 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
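The node_exporter command line above whitelists systemd units via --collector.systemd.unit-include. To my understanding node_exporter anchors such include patterns before matching, so a unit is scraped only when its full name matches; re.fullmatch stands in for that anchoring in this small check of which units on this host would pass:

    import re

    # Pattern taken verbatim from --collector.systemd.unit-include above.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ("edpm_nova_compute.service", "openvswitch.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"):
        print(unit, "scraped" if unit_include.fullmatch(unit) else "skipped")
    # sshd.service is skipped; the others match one of the include alternatives.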
Dec 5 05:22:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:22:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v812: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 67 KiB/s wr, 4 op/s Dec 5 05:22:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:22:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, vol_name:cephfs) < "" Dec 5 05:22:12 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta.tmp' Dec 5 05:22:12 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta.tmp' to config b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta' Dec 5 05:22:12 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, vol_name:cephfs) < "" Dec 5 05:22:12 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "format": "json"}]: dispatch Dec 5 05:22:12 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, vol_name:cephfs) < "" Dec 5 05:22:12 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, vol_name:cephfs) < "" Dec 5 05:22:12 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:22:12 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:22:12 localhost nova_compute[280228]: 2025-12-05 10:22:12.785 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.955 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 
'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.956 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.960 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd6642973-ac2b-4fd2-8a08-2006a520d216', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:22:12.956335', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '43ca4948-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.130635441, 'message_signature': '07c826774a94312f473a38a1ee676c97b1a0f2fe64c9df32cec34a9b3767c750'}]}, 'timestamp': '2025-12-05 10:22:12.961427', '_unique_id': '13884cf0a45f4fd8b3e5655425ca7bb9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:22:12 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR 
oslo_messaging.notify.messaging Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.962 12 ERROR oslo_messaging.notify.messaging Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.964 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.979 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.980 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:22:12 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'af2df660-9145-4510-b321-324b54adb92e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:22:12.964418', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43cd2348-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.138724069, 'message_signature': '13092554b989972134876fd5114ca7f8de127c8ab41924f729ff93902f40fb02'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:22:12.964418', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43cd3680-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.138724069, 'message_signature': '5cfeb239b81aa9539cbdd581d21b3ff5bb2eaeffb678a3434c0c1eb015085674'}]}, 'timestamp': '2025-12-05 10:22:12.980500', '_unique_id': 'bf437b0ff6b6473caebc416163a106aa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.981 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:12.982 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.000 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 20860000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
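The traceback above shows a two-stage failure: the AMQP transport's socket connect() is refused at the OS level (ConnectionRefusedError, errno 111), and kombu's _reraise_as_library_errors context manager re-raises it as kombu.exceptions.OperationalError, which is what oslo.messaging finally logs. A minimal sketch of that wrapping, assuming only a hypothetical broker URL with nothing listening on the port (the log does not show the real transport_url):

    import kombu

    # Hypothetical broker URL; any host:port with no AMQP listener
    # reproduces the errno 111 path seen in the log.
    conn = kombu.Connection("amqp://guest:guest@127.0.0.1:5672//", connect_timeout=2)
    try:
        # oslo.messaging's rabbit driver calls ensure_connection() when it
        # builds a pooled connection; kombu wraps the low-level socket error.
        conn.ensure_connection(max_retries=1)
    except kombu.exceptions.OperationalError as exc:
        print(f"notification would be dropped: {exc}")  # [Errno 111] Connection refused
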
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cab49e44-c43c-42e8-be20-057469957130', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 20860000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:22:12.983087', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '43d04d16-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.174424433, 'message_signature': '741450e9e6c65fb2486991e3d9c411dac4b8e2da2fa7e8352351729394929281'}]}, 'timestamp': '2025-12-05 10:22:13.000757', '_unique_id': '25899d11fa3e4b51bee8b786ddc47fbf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.001 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.002 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.003 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2dfc699e-bd52-493d-b996-a79b9d46a933', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:22:13.002998', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '43d0b634-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.130635441, 'message_signature': '04ddd0f8e4d1b15e4433c9c96bb09f9884536f92888eac01cd6c13b4d562aa60'}]}, 'timestamp': '2025-12-05 10:22:13.003474', '_unique_id': '0e61ec5f29444ab99debbc9934e8fbd4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
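The Payload=... text in each failed entry is the repr of a Python dict (single quotes, None), not JSON, so json.loads() will not parse it. If the dropped samples need to be recovered from the journal, ast.literal_eval can do it safely; a sketch, with the `line` value standing in for one complete ERROR entry as reconstructed above:

    import ast

    # Stand-in for one reconstructed log entry; real entries carry the full
    # sample dict between "Payload=" and the trailing exception text.
    line = ("... Could not send notification to notifications. Payload={'event_type': "
            "'telemetry.polling', 'payload': {'samples': [{'counter_name': 'cpu', "
            "'counter_volume': 20860000000}]}}: kombu.exceptions.OperationalError: "
            "[Errno 111] Connection refused")
    start = line.index("Payload=") + len("Payload=")
    end = line.rindex("}: ") + 1  # drop the appended exception text
    payload = ast.literal_eval(line[start:end])
    for sample in payload["payload"]["samples"]:
        print(sample["counter_name"], sample["counter_volume"])
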
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.004 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.005 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.005 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1a150941-4663-4d9c-bdc7-94c01f28dbca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:22:13.005731', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '43d120a6-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.130635441, 'message_signature': 'ef465a9ed5c508123e585047cb5bc87977df3abc94df25ef29220091492c9472'}]}, 'timestamp': '2025-12-05 10:22:13.006170', '_unique_id': '6640d43a23f54ec1b920e4e90df42439'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.007 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.008 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.008 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
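Errno 111 means the TCP handshake itself was rejected: nothing is accepting connections at the broker endpoint, so every pollster's samples in this cycle are dropped the same way. A quick reachability probe, assuming the standard AMQP port 5672 and using the compute host name from the log only as a stand-in, since the actual broker host (the transport_url) is not shown here:

    import socket

    def amqp_reachable(host: str, port: int = 5672, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:  # ConnectionRefusedError is an OSError subclass
            print(f"{host}:{port} unreachable: {exc}")
            return False

    # Host is an assumption; substitute the broker host from transport_url.
    amqp_reachable("np0005546419.localdomain")
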
Payload={'message_id': '607c4166-38cc-476b-97e4-1beb21fa75b8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:22:13.008211', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '43d18302-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.130635441, 'message_signature': 'cf67aff070e3fa1fa77834b26e23f8c1b1fb94cdcb831d45f3dd8702583a8475'}]}, 'timestamp': '2025-12-05 10:22:13.008859', '_unique_id': '3b629502716945b588f23b0d7f095e62'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:22:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.010 12 ERROR oslo_messaging.notify.messaging Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.011 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.011 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e7a2259f-5ceb-4791-a7f4-02aa7f0a4498', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:22:13.011624', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '43d206ec-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.130635441, 'message_signature': '5cbba9588e9c28fc090a879f2fb6a5c216d3d8ba10d0cb3bed1e8b2f623334b6'}]}, 'timestamp': '2025-12-05 10:22:13.012066', '_unique_id': '74cd2535708d4ceca7657c32064ebbb0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.012 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.013 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.014 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
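The traceback above is the complete failure chain behind every "Could not send notification" record in this excerpt: the ceilometer notifier calls into oslo.messaging, oslo.messaging asks kombu for an AMQP connection, and the raw TCP connect to the RabbitMQ endpoint is refused, so kombu reraises the socket error as kombu.exceptions.OperationalError. A minimal probe of the same condition, as a sketch; the broker host and port below are assumptions, since the transport_url is not visible in this excerpt:

    import errno
    import socket

    BROKER = ("controller.example", 5672)  # hypothetical AMQP endpoint, not taken from this log

    try:
        # socket.create_connection() performs the same low-level step that fails
        # in amqp/transport.py above (self.sock.connect(sa)).
        with socket.create_connection(BROKER, timeout=3):
            print("TCP connect OK: something is listening on", BROKER)
    except OSError as exc:
        if exc.errno == errno.ECONNREFUSED:
            print("[Errno 111] Connection refused: no listener on", BROKER)
        else:
            print("connect failed for another reason:", exc)

A refused connect, as opposed to a timeout, means the host answered but nothing was bound to the port, which points at the broker service being down rather than at the network path.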
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.015 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '42d0b6e0-7432-4687-a037-e6315fc792e7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:22:13.014091', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '43d2683a-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.130635441, 'message_signature': '6cdd5959338573b8ce2acc71094603331e7c56709e7e1bff96eee0a8fea217c3'}]}, 'timestamp': '2025-12-05 10:22:13.014556', '_unique_id': '634a6fa63fce4fde81cf80fb249472e4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.016 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.052 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.053 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
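Each failed record carries the complete notification it tried to emit, and the shape is uniform: a 'telemetry.polling' event whose payload holds a 'samples' list, where every sample names its counter ('counter_name'), its kind ('counter_type'), its unit and volume, and the resource it measures. A sketch that summarizes a payload of this shape; the literal below copies only the relevant fields from the disk.device.read.bytes samples logged around here:

    notification = {
        "event_type": "telemetry.polling",
        "priority": "SAMPLE",
        "payload": {"samples": [
            {"counter_name": "disk.device.read.bytes", "counter_type": "cumulative",
             "counter_unit": "B", "counter_volume": 35560448,
             "resource_id": "96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda"},
            {"counter_name": "disk.device.read.bytes", "counter_type": "cumulative",
             "counter_unit": "B", "counter_volume": 2154496,
             "resource_id": "96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb"},
        ]},
    }

    # One line per sample: name (type, unit) = volume @ resource
    for s in notification["payload"]["samples"]:
        print("{counter_name} ({counter_type}, {counter_unit}) = {counter_volume} "
              "@ {resource_id}".format(**s))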
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.055 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd04d79f9-cd6e-486e-b27f-c8358f5e03e3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:22:13.016581', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43d855ba-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.190889618, 'message_signature': '1ccc66ba101f9a36044db41d8a0a6de6237e2406f4739507e99f109c18c64877'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:22:13.016581', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43d87194-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.190889618, 'message_signature': 'bffaa60c0701c6bf697ce786adde2827e5381614393a1d143000dcdd12e6575b'}]}, 'timestamp': '2025-12-05 10:22:13.054184', '_unique_id': 'd63a16f72133474c8a21202b112a7ec8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.057 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.057 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.058 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
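Two counter kinds appear in these payloads: 'delta' counters such as network.incoming.bytes.delta already report a per-interval difference, while 'cumulative' counters such as disk.device.write.requests report monotonically growing totals that a consumer turns into per-interval values by differencing successive polls. A sketch of that differencing, with illustrative readings rather than values from this log:

    # (monotonic_time, counter_volume) pairs from two consecutive polls of a
    # cumulative counter; the numbers are made up for illustration.
    readings = [(13188.19, 40), (13248.19, 47)]

    (t0, v0), (t1, v1) = readings
    delta = v1 - v0                 # requests completed during the interval
    rate = delta / (t1 - t0)        # requests per second
    print(f"{delta} requests over {t1 - t0:.0f}s -> {rate:.3f} req/s")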
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.060 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '87c4b9b2-f7e4-47ea-8849-91a3ee4e74ef', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:22:13.057206', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43d91626-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.190889618, 'message_signature': 'e3f0b54b4f51b5ec01476d1e796df296ef63d8849bdbb9ec9e6f186c441e3403'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:22:13.057206', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43d930c0-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.190889618, 'message_signature': 'd5f760908c428602aa6141a1780e3c812a4a0cbd43784e8ed4c7373ffbfd194d'}]}, 'timestamp': '2025-12-05 10:22:13.059096', '_unique_id': 'aca275adb5034699bb60116ff8147fc9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.062 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.062 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
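Every sample also carries a 'message_signature': ceilometer signs the sample contents with an HMAC keyed by a shared telemetry secret so a consumer can check that a sample was not altered in transit. The sketch below shows the mechanism only; the secret and the canonicalization of the fields are assumptions, not the exact scheme used by this deployment:

    import hashlib
    import hmac

    secret = b"telemetry-secret"  # hypothetical; the real key comes from deployment config
    sample = {"counter_name": "network.outgoing.packets.error",
              "counter_volume": 0,
              "resource_id": "instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23"}

    # Serialize the fields in a stable order before signing (illustrative scheme).
    canonical = repr(sorted(sample.items())).encode()
    signature = hmac.new(secret, canonical, hashlib.sha256).hexdigest()
    print(signature)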
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.064 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9b205d1d-05b0-49e5-a497-e7ee04b04e22', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:22:13.062444', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '43d9cd50-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.130635441, 'message_signature': '9f0d240cfc3d4dcda3bf92f485c7fdeddb6e7e695f1b1f335291042f7711dbe1'}]}, 'timestamp': '2025-12-05 10:22:13.063138', '_unique_id': '655c67d2d7ec4f489494a1f11704c721'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.066 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.066 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.066 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.067 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Payload={'message_id': 'ad82a003-a2da-4dac-b375-87edce2592d5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:22:13.066567', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43da6e5e-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.190889618, 'message_signature': 'b42c6452e11dc4625b442514b7cf7eb6e9fdd0fc78fdb5c996792adb2873819c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:22:13.066567', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43da893e-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.190889618, 'message_signature': 'aa90f4a5e677a397a7420a06475d4b5c08d7fefc8514ad06b9c96680afd8d1d3'}]}, 'timestamp': '2025-12-05 10:22:13.067913', '_unique_id': '0a1c874d0b3a41ec844430733f62122c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.071 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.071 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.073 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '93f6bf2d-0fd7-47c0-8de7-458e3b73e15a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:22:13.071342', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '43db2920-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.174424433, 'message_signature': 'cfa8df81a2d917d67ea0c7fc0a99d8f545bdffae8615db9dce6f68fde607e9b0'}]}, 'timestamp': '2025-12-05 10:22:13.072015', '_unique_id': '4d48cb1d784844ffb641498cbb22e8d9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.074 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.075 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.077 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '88e66dcd-4f3f-4bd1-86c7-6e0da981c920', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:22:13.075117', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '43dbbe9e-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.130635441, 'message_signature': '63d6cf3ea1a1adf2fe84d21130b73663eec33e74ad8614a993fe85e53f5f20ae'}]}, 'timestamp': '2025-12-05 10:22:13.075868', '_unique_id': 'b6798efec6ba46bb9d468742bcc5fa93'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.079 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.079 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.080 12 DEBUG ceilometer.compute.pollsters [-]
96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.082 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cff8812d-464d-4ba4-ba48-57e60f50516f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:22:13.079426', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43dc64b6-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.190889618, 'message_signature': '2017e3b855418fb8fa837ef9d4719d245bc88d8f606362198d2b040b24d80d36'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:22:13.079426', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43dc7e10-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.190889618, 'message_signature': '051fbce389a2cdeab458022e2524d87abf339cbb1a4a5f4ef843aea124f5091e'}]}, 'timestamp': '2025-12-05 10:22:13.080733', '_unique_id': 'e41a6c60694f44fbaa1dff18d5cef7f5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.083 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.084 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '3f1f3b94-25a1-45cd-b2f9-71297d3e9cdf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:22:13.083954', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '43dd1730-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.130635441, 'message_signature': '39fe6e09bc51a8ad6323ec7affb0b45afbfa72749e5fd261f389da6d4c887323'}]}, 'timestamp': '2025-12-05 10:22:13.084690', '_unique_id': '9fcd9669ed594bbab51b1ca2005cab20'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:22:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:22:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.085 12 ERROR oslo_messaging.notify.messaging Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.087 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.087 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.088 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.089 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
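The traceback above is the complete failure path for every notification in this stretch of the log: the TCP connect to the AMQP broker is refused (errno 111), amqp raises ConnectionRefusedError, kombu re-raises it as kombu.exceptions.OperationalError, and oslo.messaging surfaces that from the notifier. A minimal sketch that reproduces the same wrapped exception, assuming kombu is installed and that the URL below (a placeholder; the agent's real transport_url is not shown in this log) points at a port with no listener:

    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Placeholder broker URL: assumes nothing is listening on this port.
    conn = Connection("amqp://guest:guest@127.0.0.1:5672//", connect_timeout=2)
    try:
        # ensure_connection() re-raises socket-level errors as
        # kombu.exceptions.OperationalError, matching the log above.
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print(f"broker unreachable: {exc}")  # [Errno 111] Connection refused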
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.089 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '966cd9e7-55bd-4634-8ea3-5b161edb1379', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:22:13.087804', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43dda7ae-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.190889618, 'message_signature': 'a3cb87e765fee162d4b1a023b1632d6bb4546e67f296b6c6718dc71b79f5076b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:22:13.087804', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43ddb9ec-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.190889618, 'message_signature': '9f51cda5c4dce8e2194047d015f702b7fd08ec52e1860d42ef619cd72ec86b94'}]}, 'timestamp': '2025-12-05 10:22:13.088716', '_unique_id': '696abe12afcc49eb9c3162e74e2d44d5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.090 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.091 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.091 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.092 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ce2af14e-336c-4f96-9d5e-7494fe88ea6e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:22:13.091016', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43de27e2-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.190889618, 'message_signature': 'd77d35386d62779f6cb3fd635ace04911714a0fa78c85d4c519b089dfc671b10'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:22:13.091016', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43de350c-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.190889618, 'message_signature': '913aec9e5d11d52a1ef095ee40592a3bd8beae869556872a71bb3c316adfd386'}]}, 'timestamp': '2025-12-05 10:22:13.091787', '_unique_id': '60b9a17634fe457f8e26c48a13534188'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.093 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.093 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.093 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
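Because every attempt fails immediately with ECONNREFUSED rather than timing out, something is actively rejecting the connection: no process is listening on the broker port of the target host. A quick reachability probe, as a sketch (host and port are assumptions, since the agent's transport_url is not visible here):

    import socket

    def amqp_reachable(host="127.0.0.1", port=5672, timeout=2.0):
        """Return True if a TCP listener answers on host:port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:
            # ECONNREFUSED surfaces here as errno 111 on Linux.
            print(f"{host}:{port} -> {exc}")
            return False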
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.094 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '48b37208-d583-43f5-96f0-11527320fecb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:22:13.093146', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43de7558-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.138724069, 'message_signature': '240c617114cefed3005cd5308fc5a13b2cf10c7693005e80945615a903cfd230'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:22:13.093146', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43de7f4e-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.138724069, 'message_signature': 'beed8595782f4c72cfee4703e72470951d279965503061cf7a92dd7bd16d7397'}]}, 'timestamp': '2025-12-05 10:22:13.093689', '_unique_id': 'fdf8e71a809d47c59da981587d74d077'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.094 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.095 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.095 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
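For context on where these records come from: publisher_id 'ceilometer.polling', event_type 'telemetry.polling', and priority 'SAMPLE' match what an oslo.messaging Notifier emits through its sample() method onto the 'notifications' topic. A sketch of how such a notifier is typically wired up (the transport URL is an assumption, not taken from this log):

    import oslo_messaging
    from oslo_config import cfg

    conf = cfg.CONF
    # Placeholder URL; the deployed agent reads transport_url from its config.
    transport = oslo_messaging.get_notification_transport(
        conf, url="rabbit://guest:guest@127.0.0.1:5672/")
    notifier = oslo_messaging.Notifier(transport,
                                       publisher_id="ceilometer.polling",
                                       driver="messagingv2",
                                       topics=["notifications"])
    # notifier.sample(ctxt, "telemetry.polling", payload) emits
    # priority=SAMPLE records like the ones logged above.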
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.096 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1dd00086-d2b9-44c9-a164-bd88ec10ff25', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:22:13.095055', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43debf72-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.138724069, 'message_signature': '95213b25efaca5d376dcab2c920c8f36bdbce0aad26d337d0f0fa269a84f4fc0'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:22:13.095055', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43deca30-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.138724069, 'message_signature': '35cca80ca0e029f30e89905b3b518d9cf54ee1ae552e63d977514292e8101dc7'}]}, 'timestamp': '2025-12-05 10:22:13.095606', '_unique_id': '121e3549ceb64af68a6fc32b30e824a8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.096 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '20582fbd-5e2f-4584-af5a-70a22576a7b0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:22:13.096988', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '43df0aea-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13248.130635441, 'message_signature': 'f3d3e4d6cd7db8632d2a563846bcd1e5b074df7520e49b60e31de72c2ac00615'}]}, 'timestamp': '2025-12-05 10:22:13.097325', '_unique_id': 'bef23f89e3ba476d9cfc30c4ab1918f1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:22:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:22:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.097 12 ERROR oslo_messaging.notify.messaging Dec 5 05:22:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:22:13.098 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:22:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v813: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s Dec 5 05:22:15 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "snap_name": "350bb740-9d29-4da1-bcf8-f780530c6ed0", "format": "json"}]: dispatch Dec 5 05:22:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:350bb740-9d29-4da1-bcf8-f780530c6ed0, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, vol_name:cephfs) < "" Dec 5 05:22:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:22:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:22:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:350bb740-9d29-4da1-bcf8-f780530c6ed0, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, vol_name:cephfs) < "" Dec 5 05:22:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:22:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:22:15 localhost nova_compute[280228]: 2025-12-05 10:22:15.288 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v814: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s Dec 5 05:22:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
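The repeated "Could not send notification" tracebacks above all bottom out in the same place: amqp's transport.connect() gets ECONNREFUSED from the RabbitMQ endpoint, and kombu re-raises it as kombu.exceptions.OperationalError via _reraise_as_library_errors. Nothing is listening on the broker's host:port during these polling cycles; the agent logs the failure and moves on to the next pollster (network.outgoing.bytes above). A minimal sketch of the same connection path, assuming a placeholder broker URL, since the real transport_url is not visible in this log:

# Probe for the failure mode in the tracebacks above: kombu raises
# kombu.exceptions.OperationalError (wrapping ECONNREFUSED) when nothing
# is listening on the broker port. The URL is a placeholder.
from kombu import Connection
from kombu.exceptions import OperationalError

BROKER_URL = "amqp://guest:guest@172.17.0.10:5672//"  # hypothetical

def probe(url: str) -> None:
    conn = Connection(url, connect_timeout=5)
    try:
        # Same call chain as oslo.messaging's impl_rabbit driver:
        # ensure_connection() -> retry_over_time() -> establish_connection().
        conn.ensure_connection(max_retries=1)
        print("broker reachable")
    except OperationalError as exc:
        # Matches the log: "[Errno 111] Connection refused"
        print(f"broker unreachable: {exc}")
    finally:
        conn.release()

if __name__ == "__main__":
    probe(BROKER_URL)

The exception type matches the log exactly: a refused TCP connect surfaces as OperationalError rather than the underlying ConnectionRefusedError, because kombu wraps transport-level errors at the connection boundary.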
Dec 5 05:22:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:22:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:22:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e287 do_prune osdmap full prune enabled Dec 5 05:22:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e288 e288: 6 total, 6 up, 6 in Dec 5 05:22:16 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e288: 6 total, 6 up, 6 in Dec 5 05:22:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v816: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 56 KiB/s wr, 3 op/s Dec 5 05:22:17 localhost nova_compute[280228]: 2025-12-05 10:22:17.822 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "snap_name": "350bb740-9d29-4da1-bcf8-f780530c6ed0", "target_sub_name": "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e", "format": "json"}]: dispatch Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:350bb740-9d29-4da1-bcf8-f780530c6ed0, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, target_sub_name:2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, vol_name:cephfs) < "" Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/.meta.tmp' Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/.meta.tmp' to config b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/.meta' Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 0a676e21-b3c6-4aa9-b1bc-e6bc82242c2e for path b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e' Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta.tmp' Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta.tmp' to config b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta' Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:350bb740-9d29-4da1-bcf8-f780530c6ed0, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, target_sub_name:2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, vol_name:cephfs) < "" Dec 5 05:22:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e", "format": "json"}]: 
dispatch Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:22:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:22:18.682+0000 7f9973843640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:22:18.682+0000 7f9973843640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:22:18.682+0000 7f9973843640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:22:18.682+0000 7f9973843640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:22:18.682+0000 7f9973843640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e) Dec 5 05:22:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:22:18.725+0000 7f9974044640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:22:18.725+0000 7f9974044640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:22:18.725+0000 7f9974044640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:22:18.725+0000 7f9974044640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost 
ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:22:18.725+0000 7f9974044640 -1 client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-mgr[286454]: client.0 error registering admin socket command: (17) File exists Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e) -- by 0 seconds Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/.meta.tmp' Dec 5 05:22:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/.meta.tmp' to config b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/.meta' Dec 5 05:22:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v817: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 56 KiB/s wr, 3 op/s Dec 5 05:22:19 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e53: np0005546419.zhsnqq(active, since 20m), standbys: np0005546420.aoeylc, np0005546421.sukfea Dec 5 05:22:19 localhost podman[239519]: time="2025-12-05T10:22:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:22:19 localhost podman[239519]: @ - - [05/Dec/2025:10:22:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:22:19 localhost podman[239519]: @ - - [05/Dec/2025:10:22:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19285 "" "Go-http-client/1.1" Dec 5 05:22:20 localhost nova_compute[280228]: 2025-12-05 10:22:20.291 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:22:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.snap/350bb740-9d29-4da1-bcf8-f780530c6ed0/fe40408d-d889-4fd8-9a51-34f4e6e9de35' to b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/93715c1a-e642-464e-af98-a52e2ba84b28' Dec 5 05:22:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/.meta.tmp' Dec 5 05:22:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/.meta.tmp' to config b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/.meta' Dec 5 05:22:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.clone_index] untracking 0a676e21-b3c6-4aa9-b1bc-e6bc82242c2e Dec 5 05:22:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta.tmp' Dec 5 05:22:21 localhost ceph-mgr[286454]: [volumes 
INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta.tmp' to config b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta' Dec 5 05:22:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v818: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 56 KiB/s wr, 4 op/s Dec 5 05:22:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/.meta.tmp' Dec 5 05:22:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/.meta.tmp' to config b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e/.meta' Dec 5 05:22:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e) Dec 5 05:22:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:22:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:22:21 localhost systemd[1]: tmp-crun.nrFVUG.mount: Deactivated successfully. Dec 5 05:22:21 localhost podman[327244]: 2025-12-05 10:22:21.939914779 +0000 UTC m=+0.099025049 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm) Dec 5 05:22:21 localhost podman[327243]: 2025-12-05 10:22:21.97879315 +0000 UTC m=+0.138249781 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 5 05:22:21 localhost podman[327244]: 2025-12-05 10:22:21.986864888 +0000 UTC m=+0.145975178 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal) Dec 5 05:22:22 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
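Each podman healthcheck above is a transient systemd unit: "Started /usr/bin/podman healthcheck run <id>", a health_status event (health_status=healthy for both openstack_network_exporter and multipathd), an exec_died event, then "Deactivated successfully." A sketch of querying the same result out of band, assuming podman's inspect template exposes .State.Health.Status for containers that define a healthcheck; the container names are the ones in the log:

# Query the most recent healthcheck result for the two containers whose
# healthcheck units appear in the log above (healthy/unhealthy/starting).
import subprocess

def health_status(container: str) -> str:
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", container],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

for name in ("openstack_network_exporter", "multipathd"):
    print(name, health_status(name))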
Dec 5 05:22:22 localhost podman[327243]: 2025-12-05 10:22:22.045779375 +0000 UTC m=+0.205236026 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:22:22 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. 
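The ceph-mgr audit trail between 10:22:15 and 10:22:21 above records a complete CephFS clone cycle driven by client.openstack: fs subvolume snapshot create, fs subvolume snapshot clone into subvolume 2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, an fs clone status poll, then the async cloner's "copying data from ... to ..." and "finished clone". A sketch that polls the same mgr command from the CLI until the clone leaves its transient states, assuming the JSON shape of "ceph fs clone status --format json" is {"status": {"state": ...}}:

# Poll the clone that the mgr audit log above shows being created; the
# volume and clone names are taken from the log, the state values
# (pending / in-progress / complete / failed) are assumptions.
import json
import subprocess
import time

VOL = "cephfs"
CLONE = "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e"  # from the log above

def clone_state(vol: str, clone: str) -> str:
    out = subprocess.run(
        ["ceph", "fs", "clone", "status", vol, clone, "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["status"]["state"]

while (state := clone_state(VOL, CLONE)) not in ("complete", "failed"):
    print("clone state:", state)
    time.sleep(2)
print("clone state:", state)

In this log the cloner reports "finished clone" about three seconds after the clone command is dispatched, so a poll loop like this would exit on its first or second iteration.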
Dec 5 05:22:22 localhost podman[327373]: 2025-12-05 10:22:22.727825334 +0000 UTC m=+0.091855368 container exec fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-type=git, name=rhceph, release=1763362218, version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 05:22:22 localhost nova_compute[280228]: 2025-12-05 10:22:22.860 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:22 localhost podman[327373]: 2025-12-05 10:22:22.863308009 +0000 UTC m=+0.227338033 container exec_died fbe8fb4e4282cb195ed7872ffa2a743443620186a5367b0d0886ce8c8fb69b0e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-crash-np0005546419, RELEASE=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph) Dec 5 05:22:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:22:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:22:23 localhost ceph-mon[292820]: 
log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v819: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 56 KiB/s wr, 4 op/s Dec 5 05:22:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 05:22:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 05:22:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0) Dec 5 05:22:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:23 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0) Dec 5 05:22:23 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Dec 5 05:22:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Dec 5 05:22:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 5 05:22:24 localhost ceph-mgr[286454]: [cephadm INFO root] Adjusting osd_memory_target on np0005546419.localdomain to 836.6M Dec 5 05:22:24 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546419.localdomain to 836.6M Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 5 05:22:24 localhost ceph-mgr[286454]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:22:24 localhost ceph-mgr[286454]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:22:24 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:24 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 
5 05:22:24 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:24 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:24 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:24 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:24 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 5 05:22:24 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Dec 5 05:22:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Dec 5 05:22:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Dec 5 05:22:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Dec 5 05:22:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 5 05:22:24 localhost ceph-mgr[286454]: [cephadm INFO root] Adjusting osd_memory_target on np0005546420.localdomain to 836.6M Dec 5 05:22:24 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546420.localdomain to 836.6M Dec 5 05:22:24 localhost ceph-mgr[286454]: [cephadm INFO root] Adjusting osd_memory_target on np0005546421.localdomain to 836.6M Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 5 05:22:24 localhost ceph-mgr[286454]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005546421.localdomain to 836.6M Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 5 05:22:24 localhost ceph-mgr[286454]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is 
below minimum 939524096 Dec 5 05:22:24 localhost ceph-mgr[286454]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:22:24 localhost ceph-mgr[286454]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:22:24 localhost ceph-mgr[286454]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:22:24 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:22:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:22:24 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:24 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 5c8d3e99-b134-46b2-ba2d-63b7505b839b (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:22:24 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 5c8d3e99-b134-46b2-ba2d-63b7505b839b (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:22:24 localhost ceph-mgr[286454]: [progress INFO root] Completed event 5c8d3e99-b134-46b2-ba2d-63b7505b839b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:22:24 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:22:24 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:22:25 localhost nova_compute[280228]: 2025-12-05 10:22:25.317 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:25 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546419.localdomain to 836.6M Dec 5 05:22:25 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546419.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:22:25 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 5 05:22:25 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 5 05:22:25 localhost 
ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 5 05:22:25 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 5 05:22:25 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546420.localdomain to 836.6M Dec 5 05:22:25 localhost ceph-mon[292820]: Adjusting osd_memory_target on np0005546421.localdomain to 836.6M Dec 5 05:22:25 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546420.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:22:25 localhost ceph-mon[292820]: Unable to set osd_memory_target on np0005546421.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 5 05:22:25 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:22:25 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v820: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 56 KiB/s wr, 4 op/s Dec 5 05:22:25 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:22:25 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:22:25 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:22:26 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:22:27 localhost openstack_network_exporter[241668]: ERROR 10:22:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:22:27 localhost openstack_network_exporter[241668]: ERROR 10:22:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:22:27 localhost openstack_network_exporter[241668]: ERROR 10:22:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:22:27 localhost openstack_network_exporter[241668]: ERROR 10:22:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:22:27 localhost openstack_network_exporter[241668]: Dec 5 05:22:27 localhost openstack_network_exporter[241668]: ERROR 10:22:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:22:27 localhost openstack_network_exporter[241668]: Dec 5 05:22:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v821: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 547 B/s rd, 50 KiB/s wr, 4 op/s Dec 5 05:22:27 localhost nova_compute[280228]: 2025-12-05 10:22:27.894 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 
05:22:28 localhost nova_compute[280228]: 2025-12-05 10:22:28.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:22:28 localhost nova_compute[280228]: 2025-12-05 10:22:28.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:22:28 localhost nova_compute[280228]: 2025-12-05 10:22:28.594 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:22:28 localhost nova_compute[280228]: 2025-12-05 10:22:28.594 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:22:28 localhost nova_compute[280228]: 2025-12-05 10:22:28.595 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:22:28 localhost nova_compute[280228]: 2025-12-05 10:22:28.595 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:22:28 localhost nova_compute[280228]: 2025-12-05 10:22:28.595 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:22:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:22:29 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1029736738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.029 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.099 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.100 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.347 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.348 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11034MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", 
"vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.348 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.349 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.448 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.448 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.448 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.474 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:22:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v822: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 47 KiB/s wr, 4 op/s Dec 5 05:22:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:22:29 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/567841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.898 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.905 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:22:29 localhost nova_compute[280228]: 2025-12-05 10:22:29.998 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:22:30 localhost nova_compute[280228]: 2025-12-05 10:22:30.001 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:22:30 localhost nova_compute[280228]: 2025-12-05 10:22:30.001 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:22:30 localhost nova_compute[280228]: 2025-12-05 10:22:30.320 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:31 localhost nova_compute[280228]: 2025-12-05 10:22:31.001 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:22:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:22:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v823: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 49 KiB/s wr, 5 op/s Dec 5 05:22:32 localhost nova_compute[280228]: 2025-12-05 10:22:32.945 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. 
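The paired "Adjusting osd_memory_target ... to 836.6M" / "Unable to set osd_memory_target ... below minimum 939524096" messages earlier in this window are the cephadm memory autotuner computing a per-OSD share that falls under the option's configured floor, so the mon rejects it. A minimal sketch of just that floor check follows; the two byte counts are taken straight from the log, while the derivation of the share (host memory times autotune ratio, split across OSDs) is assumed and not reproduced here.

    # Minimal sketch of the floor check behind the "Unable to set
    # osd_memory_target" messages above. Only the two byte values are
    # from the log; the autotune derivation itself is assumed.

    OSD_MEMORY_TARGET_MIN = 939_524_096   # 896 MiB floor enforced by the mon
    computed_share = 877_246_668          # ~836.6 MiB per-OSD share, as logged

    def apply_autotune(share: int, floor: int) -> str:
        """Mirror the mon-side validation: reject targets below the minimum."""
        if share < floor:
            return (f"Unable to set osd_memory_target to {share}: "
                    f"error parsing value: Value '{share}' is below minimum {floor}")
        return f"osd_memory_target set to {share}"

    print(apply_autotune(computed_share, OSD_MEMORY_TARGET_MIN))
    # 877246668 / 2**20 is roughly 836.6 MiB, matching the "Adjusting
    # osd_memory_target ... to 836.6M" lines for both hosts.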
Dec 5 05:22:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:22:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:22:33 localhost systemd[1]: tmp-crun.YpCUAS.mount: Deactivated successfully. Dec 5 05:22:33 localhost podman[327620]: 2025-12-05 10:22:33.220326116 +0000 UTC m=+0.099160792 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Dec 5 05:22:33 localhost podman[327620]: 2025-12-05 10:22:33.25370059 +0000 UTC m=+0.132535206 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 5 05:22:33 localhost podman[327619]: 2025-12-05 10:22:33.263359557 +0000 UTC m=+0.142877644 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:22:33 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:22:33 localhost podman[327619]: 2025-12-05 10:22:33.276612552 +0000 UTC m=+0.156130639 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:22:33 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
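Each transient "Started /usr/bin/podman healthcheck run <id>" unit above executes the container's configured healthcheck (the '/openstack/healthcheck' test in config_data) and reports health via the process exit code, which is what produces the health_status=healthy attribute on the container event. A sketch of the same probe driven from Python, under the assumption that invoking the podman CLI directly is acceptable:

    # Sketch (assumed workflow): what a transient
    # "/usr/bin/podman healthcheck run <id>" unit effectively does.
    import subprocess

    def run_healthcheck(container_id: str) -> bool:
        """Run podman's healthcheck for one container; exit code 0 means healthy."""
        proc = subprocess.run(
            ["podman", "healthcheck", "run", container_id],
            capture_output=True, text=True,
        )
        return proc.returncode == 0

    healthy = run_healthcheck(
        "1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465")
    # health_status=healthy in the container event corresponds to exit code 0;
    # a non-zero exit would surface as health_status=unhealthy instead.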
Dec 5 05:22:33 localhost podman[327621]: 2025-12-05 10:22:33.366051325 +0000 UTC m=+0.240121805 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:22:33 localhost podman[327621]: 2025-12-05 10:22:33.403051961 +0000 UTC m=+0.277122451 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute) Dec 5 05:22:33 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:22:33 localhost nova_compute[280228]: 2025-12-05 10:22:33.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:22:33 localhost nova_compute[280228]: 2025-12-05 10:22:33.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:22:33 localhost nova_compute[280228]: 2025-12-05 10:22:33.509 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:22:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v824: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 33 KiB/s wr, 3 op/s Dec 5 05:22:33 localhost nova_compute[280228]: 2025-12-05 10:22:33.951 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:22:33 localhost nova_compute[280228]: 2025-12-05 10:22:33.952 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:22:33 localhost nova_compute[280228]: 2025-12-05 10:22:33.952 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:22:33 localhost nova_compute[280228]: 2025-12-05 10:22:33.952 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:22:34 localhost systemd[1]: tmp-crun.at5NO8.mount: Deactivated successfully. 
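The Acquiring/Acquired/Released "compute_resources" and "refresh_cache-<uuid>" DEBUG lines around the resource audit and cache heal are emitted by oslo.concurrency's lock helpers, which log the wait and hold times on entry and exit. A sketch of the equivalent context-manager form (nova's ResourceTracker actually uses the synchronized decorator; the body here is a placeholder):

    # Sketch of the oslo.concurrency pattern behind the
    # Acquiring/Acquired/Released "compute_resources" lines above.
    from oslo_concurrency import lockutils

    def update_available_resource():
        # Entering the context logs 'Acquiring lock ...' then
        # 'Lock ... acquired ... waited N s'; leaving it logs
        # 'Lock ... "released" ... held N s'.
        with lockutils.lock("compute_resources"):
            # audit hypervisor resources, reconcile placement allocations...
            pass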
Dec 5 05:22:34 localhost nova_compute[280228]: 2025-12-05 10:22:34.306 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:22:34 localhost nova_compute[280228]: 2025-12-05 10:22:34.321 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:22:34 localhost nova_compute[280228]: 2025-12-05 10:22:34.322 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:22:35 localhost nova_compute[280228]: 2025-12-05 10:22:35.360 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v825: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 33 KiB/s wr, 2 op/s Dec 5 05:22:36 localhost nova_compute[280228]: 2025-12-05 10:22:36.318 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:22:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:22:36 localhost nova_compute[280228]: 2025-12-05 10:22:36.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:22:36 localhost nova_compute[280228]: 2025-12-05 10:22:36.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running 
periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:22:36 localhost nova_compute[280228]: 2025-12-05 10:22:36.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:22:37 localhost nova_compute[280228]: 2025-12-05 10:22:37.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:22:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v826: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 46 KiB/s wr, 3 op/s Dec 5 05:22:37 localhost nova_compute[280228]: 2025-12-05 10:22:37.984 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:39 localhost nova_compute[280228]: 2025-12-05 10:22:39.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:22:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v827: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 16 KiB/s wr, 1 op/s Dec 5 05:22:40 localhost nova_compute[280228]: 2025-12-05 10:22:40.365 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:22:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v828: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 16 KiB/s wr, 1 op/s Dec 5 05:22:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:22:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:22:42 localhost systemd[1]: tmp-crun.FTMbyT.mount: Deactivated successfully. 
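The "Updating instance_info_cache with network_info" entry just above carries the full Neutron VIF payload for instance 96a47a1c: one OVS port with a fixed IP and an attached floating IP. A small sketch of walking that structure, assuming `network_info` is the JSON list exactly as logged:

    # Sketch: pulling addresses out of the instance_info_cache payload
    # logged above; `network_info` is the list from the
    # "Updating instance_info_cache with network_info" entry.
    def addresses(network_info):
        fixed, floating = [], []
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    fixed.append(ip["address"])
                    floating.extend(f["address"]
                                    for f in ip.get("floating_ips", []))
        return fixed, floating

    # For the cached entry above this returns
    # (["192.168.0.214"], ["192.168.122.20"]).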
Dec 5 05:22:42 localhost podman[327677]: 2025-12-05 10:22:42.208689775 +0000 UTC m=+0.096468751 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:22:42 localhost podman[327677]: 2025-12-05 10:22:42.2528858 +0000 UTC m=+0.140664756 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller) Dec 5 05:22:42 localhost podman[327678]: 2025-12-05 10:22:42.26004951 +0000 UTC m=+0.145095832 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 
'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 05:22:42 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:22:42 localhost podman[327678]: 2025-12-05 10:22:42.300749778 +0000 UTC m=+0.185796100 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 05:22:42 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:22:43 localhost nova_compute[280228]: 2025-12-05 10:22:43.021 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:43 localhost systemd[1]: tmp-crun.oKRwPL.mount: Deactivated successfully. 
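The node_exporter container above publishes host port 9100 and runs with '--collector.systemd' plus an include regex limiting which units it reports. A sketch of scraping that endpoint and filtering for the systemd collector's unit-state series, assuming the exporter is reachable on localhost as the port mapping suggests:

    # Sketch (assumed endpoint): scrape the node_exporter published on
    # host port 9100 above and keep only systemd unit-state metrics.
    from urllib.request import urlopen

    def systemd_unit_metrics(host="localhost", port=9100):
        with urlopen(f"http://{host}:{port}/metrics") as resp:
            text = resp.read().decode()
        return [line for line in text.splitlines()
                if line.startswith("node_systemd_unit_state")]

    # Only units matching the configured include regex
    # (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service are reported,
    # per the --collector.systemd.unit-include flag in config_data.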
Dec 5 05:22:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v829: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 13 KiB/s wr, 1 op/s Dec 5 05:22:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:22:45 Dec 5 05:22:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:22:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:22:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['vms', 'backups', 'images', '.mgr', 'manila_data', 'volumes', 'manila_metadata'] Dec 5 05:22:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:22:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:22:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:22:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:22:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:22:45 localhost nova_compute[280228]: 2025-12-05 10:22:45.401 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v830: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 13 KiB/s wr, 1 op/s Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32) Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] 
effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:22:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0022186671203238413 of space, bias 4.0, pg target 1.7660590277777777 quantized to 16 (current 16) Dec 5 05:22:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:22:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:22:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:22:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:22:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:22:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:22:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:22:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:22:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:22:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:22:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:22:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:22:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:22:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v831: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 17 KiB/s wr, 1 op/s Dec 5 05:22:48 localhost nova_compute[280228]: 2025-12-05 10:22:48.071 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v832: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.8 KiB/s wr, 0 op/s Dec 5 05:22:49 localhost podman[239519]: time="2025-12-05T10:22:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:22:49 localhost podman[239519]: @ - - [05/Dec/2025:10:22:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:22:49 localhost podman[239519]: @ - - [05/Dec/2025:10:22:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19289 "" "Go-http-client/1.1" Dec 5 05:22:49 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e", "format": "json"}]: dispatch Dec 5 05:22:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:22:50 localhost nova_compute[280228]: 2025-12-05 10:22:50.407 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:51 localhost 
ceph-mon[292820]: mon.np0005546419@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:22:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:22:51 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e", "format": "json"}]: dispatch Dec 5 05:22:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, vol_name:cephfs) < "" Dec 5 05:22:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, vol_name:cephfs) < "" Dec 5 05:22:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:22:51 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:22:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v833: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.8 KiB/s wr, 0 op/s Dec 5 05:22:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:22:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
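The mgr-side "Starting/Finishing _cmd_fs_clone_status" and "_cmd_fs_subvolume_getpath" entries above are the dispatch of ordinary `ceph fs ...` commands issued by the client.openstack entity (here, the Manila share service cleaning up a subvolume). A sketch of the client side of that exchange, with the --id/--conf values mirrored from the audit log:

    # Sketch: client-side form of the commands whose mgr-side dispatch is
    # logged above; entity/conf values mirror the client.openstack entries.
    import subprocess

    def fs_subvolume_getpath(vol: str, sub: str) -> str:
        out = subprocess.run(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             "fs", "subvolume", "getpath", vol, sub, "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return out.strip()

    path = fs_subvolume_getpath(
        "cephfs", "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e")
    # The subsequent "fs subvolume rm ... force" dispatch then moves the
    # subvolume to the trashcan, as the volumes module logs further down.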
Dec 5 05:22:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e", "format": "json"}]: dispatch Dec 5 05:22:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:22:52 localhost podman[327724]: 2025-12-05 10:22:52.1982164 +0000 UTC m=+0.081922124 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 5 05:22:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:22:52 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e", "force": true, "format": "json"}]: dispatch Dec 5 05:22:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, vol_name:cephfs) < "" Dec 5 05:22:52 localhost podman[327724]: 2025-12-05 10:22:52.208537376 +0000 UTC m=+0.092243070 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:22:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e'' moved to trashcan Dec 5 05:22:52 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:22:52 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2c43bdec-e79a-4b65-87dc-f0f5fc9a9f6e, vol_name:cephfs) < "" Dec 5 05:22:52 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:22:52 localhost systemd[1]: tmp-crun.IGHJTc.mount: Deactivated successfully. Dec 5 05:22:52 localhost podman[327725]: 2025-12-05 10:22:52.252285488 +0000 UTC m=+0.131819134 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9) Dec 5 05:22:52 localhost podman[327725]: 2025-12-05 10:22:52.260749507 +0000 UTC m=+0.140283103 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.) Dec 5 05:22:52 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
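The recurring openstack_network_exporter errors in this section ("no control socket files found for the ovs db server" / "for ovn-northd") come from its appctl calls probing for OVS/OVN control sockets before issuing commands; on this node those daemons either are not running or have not created sockets under the mounted run directories. A minimal version of that existence check, with the directory paths assumed from the container's volume mounts ('/var/run/openvswitch' and '/var/lib/openvswitch/ovn' mapped to /run/ovn):

    # Sketch: the socket probe behind the exporter's recurring errors.
    # ovs-appctl locates its target via sockets named <daemon>.<pid>.ctl.
    import glob
    import os

    def find_ctl_socket(run_dir: str, daemon: str):
        matches = glob.glob(os.path.join(run_dir, f"{daemon}.*.ctl"))
        return matches[0] if matches else None

    for run_dir, daemon in [("/run/openvswitch", "ovsdb-server"),
                            ("/run/ovn", "ovn-northd")]:
        if find_ctl_socket(run_dir, daemon) is None:
            print(f"no control socket files found for {daemon}")
    # The pmd-perf-show / pmd-rxq-show failures are the same pattern one
    # layer up: the datapath the exporter asks about does not exist here.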
Dec 5 05:22:53 localhost nova_compute[280228]: 2025-12-05 10:22:53.111 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v834: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.7 KiB/s wr, 0 op/s Dec 5 05:22:55 localhost nova_compute[280228]: 2025-12-05 10:22:55.441 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v835: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.7 KiB/s wr, 0 op/s Dec 5 05:22:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:22:57 localhost openstack_network_exporter[241668]: ERROR 10:22:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:22:57 localhost openstack_network_exporter[241668]: ERROR 10:22:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:22:57 localhost openstack_network_exporter[241668]: ERROR 10:22:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:22:57 localhost openstack_network_exporter[241668]: ERROR 10:22:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:22:57 localhost openstack_network_exporter[241668]: Dec 5 05:22:57 localhost openstack_network_exporter[241668]: ERROR 10:22:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:22:57 localhost openstack_network_exporter[241668]: Dec 5 05:22:57 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "snap_name": "350bb740-9d29-4da1-bcf8-f780530c6ed0_1766b14d-8ab9-4c3d-b694-3340927e09ec", "force": true, "format": "json"}]: dispatch Dec 5 05:22:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:350bb740-9d29-4da1-bcf8-f780530c6ed0_1766b14d-8ab9-4c3d-b694-3340927e09ec, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, vol_name:cephfs) < "" Dec 5 05:22:57 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta.tmp' Dec 5 05:22:57 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta.tmp' to config b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta' Dec 5 05:22:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:350bb740-9d29-4da1-bcf8-f780530c6ed0_1766b14d-8ab9-4c3d-b694-3340927e09ec, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, vol_name:cephfs) < "" Dec 5 05:22:57 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "snap_name": "350bb740-9d29-4da1-bcf8-f780530c6ed0", "force": true, "format": "json"}]: dispatch Dec 5 05:22:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:350bb740-9d29-4da1-bcf8-f780530c6ed0, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, vol_name:cephfs) < "" Dec 5 05:22:57 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta.tmp' Dec 5 05:22:57 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta.tmp' to config b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877/.meta' Dec 5 05:22:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:350bb740-9d29-4da1-bcf8-f780530c6ed0, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, vol_name:cephfs) < "" Dec 5 05:22:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v836: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 19 KiB/s wr, 1 op/s Dec 5 05:22:58 localhost nova_compute[280228]: 2025-12-05 10:22:58.145 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:22:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v837: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 16 KiB/s wr, 0 op/s Dec 5 05:23:00 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "format": "json"}]: dispatch Dec 5 05:23:00 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8f516b01-442f-47dd-a7aa-85e4b5387877, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:23:00 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8f516b01-442f-47dd-a7aa-85e4b5387877, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:23:00 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:23:00.386+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8f516b01-442f-47dd-a7aa-85e4b5387877' of type subvolume Dec 5 05:23:00 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8f516b01-442f-47dd-a7aa-85e4b5387877' of type subvolume Dec 5 05:23:00 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8f516b01-442f-47dd-a7aa-85e4b5387877", "force": true, "format": "json"}]: dispatch Dec 5 05:23:00 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, vol_name:cephfs) < "" Dec 5 
05:23:00 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8f516b01-442f-47dd-a7aa-85e4b5387877'' moved to trashcan Dec 5 05:23:00 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:23:00 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8f516b01-442f-47dd-a7aa-85e4b5387877, vol_name:cephfs) < "" Dec 5 05:23:00 localhost nova_compute[280228]: 2025-12-05 10:23:00.445 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:23:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v838: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 43 KiB/s wr, 2 op/s Dec 5 05:23:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e288 do_prune osdmap full prune enabled Dec 5 05:23:02 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e289 e289: 6 total, 6 up, 6 in Dec 5 05:23:02 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e289: 6 total, 6 up, 6 in Dec 5 05:23:03 localhost nova_compute[280228]: 2025-12-05 10:23:03.184 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v840: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 52 KiB/s wr, 3 op/s Dec 5 05:23:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:23:03.930 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:23:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:23:03.930 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:23:03 localhost ovn_metadata_agent[158815]: 2025-12-05 10:23:03.931 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:23:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:23:04 localhost systemd[1]: tmp-crun.SCqD6C.mount: Deactivated successfully. 
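The ceph-mgr volumes records above show the removal pattern: `fs subvolume rm` returns as soon as the subvolume path is moved to the trashcan and a purge job is queued ("queuing job for volume 'cephfs'"); the actual data deletion happens asynchronously. A sketch of the equivalent client-side call, mirroring the dispatched JSON (force:True maps to --force):

    # Equivalent CLI invocation, sketched via subprocess; mirrors the dispatched
    # {"prefix": "fs subvolume rm", "vol_name": "cephfs", ...} command above.
    # The command returns once the subvolume is in the trashcan; purging is async.
    import subprocess

    def subvolume_rm(vol, sub, force=True):
        cmd = ["ceph", "fs", "subvolume", "rm", vol, sub]
        if force:
            cmd.append("--force")
        subprocess.run(cmd, check=True)

    # subvolume_rm("cephfs", "8f516b01-442f-47dd-a7aa-85e4b5387877")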
Dec 5 05:23:04 localhost podman[327764]: 2025-12-05 10:23:04.186842376 +0000 UTC m=+0.074143414 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:23:04 localhost podman[327765]: 2025-12-05 10:23:04.243270707 +0000 UTC m=+0.125696615 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent) Dec 5 05:23:04 localhost podman[327764]: 2025-12-05 10:23:04.265851059 +0000 UTC m=+0.153152067 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 
'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:23:04 localhost podman[327766]: 2025-12-05 10:23:04.221193091 +0000 UTC m=+0.098249186 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 5 05:23:04 localhost podman[327765]: 2025-12-05 10:23:04.276543588 +0000 UTC m=+0.158969486 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 05:23:04 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 05:23:04 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:23:04 localhost podman[327766]: 2025-12-05 10:23:04.304695631 +0000 UTC m=+0.181751656 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 5 05:23:04 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
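The recurring triad above (Started /usr/bin/podman healthcheck run <id> / container exec_died / <id>.service: Deactivated successfully) is systemd driving podman healthchecks through transient units: each run executes the container's configured test (the 'healthcheck' entry in config_data, e.g. '/openstack/healthcheck compute') and records health_status=healthy on exit 0. A sketch of what each transient unit effectively does, assuming podman is on PATH:

    # Invoke a container's configured healthcheck and map exit code to health.
    # `podman healthcheck run` is the exact command the transient units use above.
    import subprocess

    def run_healthcheck(container_id: str) -> bool:
        result = subprocess.run(["podman", "healthcheck", "run", container_id])
        return result.returncode == 0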
Dec 5 05:23:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:23:05.193 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:23:05 localhost ovn_metadata_agent[158815]: 2025-12-05 10:23:05.194 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:23:05 localhost nova_compute[280228]: 2025-12-05 10:23:05.196 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:05 localhost nova_compute[280228]: 2025-12-05 10:23:05.496 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v841: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 52 KiB/s wr, 3 op/s Dec 5 05:23:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:23:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v842: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 47 KiB/s wr, 3 op/s Dec 5 05:23:08 localhost nova_compute[280228]: 2025-12-05 10:23:08.233 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v843: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 47 KiB/s wr, 3 op/s Dec 5 05:23:10 localhost nova_compute[280228]: 2025-12-05 10:23:10.499 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:23:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e289 do_prune osdmap full prune enabled Dec 5 05:23:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 e290: 6 total, 6 up, 6 in Dec 5 05:23:11 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e290: 6 total, 6 up, 6 in Dec 5 05:23:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v845: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 223 B/s rd, 26 KiB/s wr, 2 op/s Dec 5 05:23:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "size": 1073741824, 
"namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:23:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8a959872-096f-4524-beb3-16ecf762162b, vol_name:cephfs) < "" Dec 5 05:23:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta.tmp' Dec 5 05:23:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta.tmp' to config b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta' Dec 5 05:23:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8a959872-096f-4524-beb3-16ecf762162b, vol_name:cephfs) < "" Dec 5 05:23:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "format": "json"}]: dispatch Dec 5 05:23:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8a959872-096f-4524-beb3-16ecf762162b, vol_name:cephfs) < "" Dec 5 05:23:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8a959872-096f-4524-beb3-16ecf762162b, vol_name:cephfs) < "" Dec 5 05:23:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:23:11 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:23:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:23:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
Dec 5 05:23:13 localhost ovn_metadata_agent[158815]: 2025-12-05 10:23:13.196 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:23:13 localhost podman[327821]: 2025-12-05 10:23:13.215847572 +0000 UTC m=+0.093841529 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:23:13 localhost podman[327821]: 2025-12-05 10:23:13.227739798 +0000 UTC m=+0.105733755 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:23:13 localhost nova_compute[280228]: 2025-12-05 10:23:13.271 280232 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:13 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:23:13 localhost podman[327820]: 2025-12-05 10:23:13.317228802 +0000 UTC m=+0.196843939 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller) Dec 5 05:23:13 localhost podman[327820]: 2025-12-05 10:23:13.358750335 +0000 UTC m=+0.238365422 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Dec 5 05:23:13 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
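The DbSetCommand transaction logged at 10:23:13 is the metadata agent acknowledging nb_cfg=24 after the 8-second delay noted at 10:23:05: it writes neutron:ovn-metadata-sb-cfg into the Chassis_Private row's external_ids. In ovsdbapp terms this is roughly the following, a sketch assuming `api` is an already-connected Southbound ovsdbapp API object (connection setup omitted):

    # Sketch of the acknowledgement write; db_set issues the same DbSetCommand
    # seen in the transaction log above (if_exists behavior not shown).
    def ack_sb_cfg(api, chassis_private_record, nb_cfg):
        api.db_set(
            "Chassis_Private", chassis_private_record,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        ).execute(check_error=True)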
Dec 5 05:23:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v846: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 24 KiB/s wr, 1 op/s Dec 5 05:23:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:23:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:23:15 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "snap_name": "cbd95e9a-4741-4e3c-931e-7bcd695d1401", "format": "json"}]: dispatch Dec 5 05:23:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cbd95e9a-4741-4e3c-931e-7bcd695d1401, sub_name:8a959872-096f-4524-beb3-16ecf762162b, vol_name:cephfs) < "" Dec 5 05:23:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:23:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:23:15 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cbd95e9a-4741-4e3c-931e-7bcd695d1401, sub_name:8a959872-096f-4524-beb3-16ecf762162b, vol_name:cephfs) < "" Dec 5 05:23:15 localhost nova_compute[280228]: 2025-12-05 10:23:15.501 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v847: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 24 KiB/s wr, 1 op/s Dec 5 05:23:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
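The snapshot create above starts and finishes within the same second: CephFS subvolume snapshots are metadata operations, and the snapshot lands under the subvolume's .snap directory (visible later as the async_cloner's copy source). The dispatched command corresponds to:

    # Sketch of the snapshot create dispatched above; the snapshot appears at
    # <subvolume path>/.snap/<snap_name>.
    import subprocess

    def snapshot_create(vol, sub, snap):
        subprocess.run(
            ["ceph", "fs", "subvolume", "snapshot", "create", vol, sub, snap],
            check=True)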
Dec 5 05:23:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:23:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:23:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v848: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 29 KiB/s wr, 1 op/s Dec 5 05:23:18 localhost nova_compute[280228]: 2025-12-05 10:23:18.308 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "snap_name": "cbd95e9a-4741-4e3c-931e-7bcd695d1401", "target_sub_name": "57c122f3-1783-406d-a501-cfd05f2e9a11", "format": "json"}]: dispatch Dec 5 05:23:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:cbd95e9a-4741-4e3c-931e-7bcd695d1401, sub_name:8a959872-096f-4524-beb3-16ecf762162b, target_sub_name:57c122f3-1783-406d-a501-cfd05f2e9a11, vol_name:cephfs) < "" Dec 5 05:23:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/.meta.tmp' Dec 5 05:23:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/.meta.tmp' to config b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/.meta' Dec 5 05:23:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 52eb5e42-3287-4bb2-b118-7d095e9bb2dd for path b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11' Dec 5 05:23:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta.tmp' Dec 5 05:23:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta.tmp' to config b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta' Dec 5 05:23:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:23:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:cbd95e9a-4741-4e3c-931e-7bcd695d1401, sub_name:8a959872-096f-4524-beb3-16ecf762162b, target_sub_name:57c122f3-1783-406d-a501-cfd05f2e9a11, vol_name:cephfs) < "" Dec 5 05:23:18 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57c122f3-1783-406d-a501-cfd05f2e9a11", "format": "json"}]: dispatch Dec 5 05:23:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57c122f3-1783-406d-a501-cfd05f2e9a11, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:23:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: 
/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11 Dec 5 05:23:18 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 57c122f3-1783-406d-a501-cfd05f2e9a11) Dec 5 05:23:18 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57c122f3-1783-406d-a501-cfd05f2e9a11, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:23:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v849: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 29 KiB/s wr, 1 op/s Dec 5 05:23:19 localhost podman[239519]: time="2025-12-05T10:23:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:23:19 localhost podman[239519]: @ - - [05/Dec/2025:10:23:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:23:19 localhost podman[239519]: @ - - [05/Dec/2025:10:23:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19282 "" "Go-http-client/1.1" Dec 5 05:23:20 localhost nova_compute[280228]: 2025-12-05 10:23:20.505 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:23:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 57c122f3-1783-406d-a501-cfd05f2e9a11) -- by 0 seconds Dec 5 05:23:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/.meta.tmp' Dec 5 05:23:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/.meta.tmp' to config b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/.meta' Dec 5 05:23:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v850: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s wr, 2 op/s Dec 5 05:23:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:23:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
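The clone records above show the asynchronous pattern: `fs subvolume snapshot clone` only registers a tracking-id and queues a job, and consumers then poll `fs clone status` until the clone leaves the pending/in-progress states (the data copy itself starts at 10:23:26 below). A polling sketch, assuming the positional CLI argument order matching the dispatched JSON and a JSON reply shaped like {"status": {"state": ...}} (both assumptions, not confirmed by the log):

    # Clone-and-poll sketch for the workflow visible above.
    import json
    import subprocess
    import time

    def clone_and_wait(vol, sub, snap, target, interval=3):
        subprocess.run(
            ["ceph", "fs", "subvolume", "snapshot", "clone", vol, sub, snap, target],
            check=True)
        while True:
            out = subprocess.run(
                ["ceph", "fs", "clone", "status", vol, target, "--format", "json"],
                check=True, capture_output=True, text=True)
            state = json.loads(out.stdout)["status"]["state"]  # assumed shape
            if state in ("complete", "failed"):
                return state
            time.sleep(interval)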
Dec 5 05:23:23 localhost podman[327868]: 2025-12-05 10:23:23.202897952 +0000 UTC m=+0.082288504 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6) Dec 5 05:23:23 localhost systemd[1]: tmp-crun.O5I58M.mount: Deactivated successfully. 
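Note that the health_status=healthy record above only proves the in-container healthcheck script exited 0; the 05:22:57 appctl errors earlier show this exporter can report healthy while individual collectors fail. The container publishes host port 9105 (from 'ports' in config_data), so scraping it directly is a stronger signal. A sketch; the /metrics path is the Prometheus convention and an assumption here, not something the log records:

    # Scrape the exporter directly rather than trusting the container healthcheck.
    # Port 9105 comes from config_data above; /metrics is assumed (Prometheus convention).
    import urllib.request

    def scrape(url="http://localhost:9105/metrics", timeout=5):
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode()

    # print(scrape()[:400])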
Dec 5 05:23:23 localhost podman[327867]: 2025-12-05 10:23:23.263362167 +0000 UTC m=+0.145843264 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible) Dec 5 05:23:23 localhost podman[327868]: 2025-12-05 10:23:23.272000642 +0000 UTC m=+0.151391234 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, architecture=x86_64, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc.) Dec 5 05:23:23 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:23:23 localhost podman[327867]: 2025-12-05 10:23:23.328801384 +0000 UTC m=+0.211282531 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 5 05:23:23 localhost nova_compute[280228]: 2025-12-05 10:23:23.338 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:23 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:23:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v851: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s wr, 1 op/s Dec 5 05:23:25 localhost nova_compute[280228]: 2025-12-05 10:23:25.509 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v852: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s wr, 1 op/s Dec 5 05:23:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:23:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.snap/cbd95e9a-4741-4e3c-931e-7bcd695d1401/ea978ea6-59e1-4eed-996d-b38b4486de97' to b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/46695c54-e515-4c9b-b7cc-8714d9b2e1bf' Dec 5 05:23:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/.meta.tmp' Dec 5 05:23:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/.meta.tmp' to config b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/.meta' Dec 5 05:23:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.clone_index] untracking 52eb5e42-3287-4bb2-b118-7d095e9bb2dd Dec 5 05:23:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta.tmp' Dec 5 05:23:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta.tmp' to config b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta' Dec 5 05:23:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/.meta.tmp' Dec 5 05:23:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/.meta.tmp' to config b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11/.meta' Dec 5 05:23:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 57c122f3-1783-406d-a501-cfd05f2e9a11) Dec 5 05:23:26 localhost podman[328051]: Dec 5 05:23:26 localhost podman[328051]: 2025-12-05 10:23:26.623296107 +0000 UTC m=+0.068436029 container create 1b92f84177a448207a1fa8976a90b77d68a5d3ccd47503af9bb8c230ecf3febb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_robinson, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, version=7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, ceph=True, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4) Dec 5 05:23:26 localhost systemd[1]: Started libpod-conmon-1b92f84177a448207a1fa8976a90b77d68a5d3ccd47503af9bb8c230ecf3febb.scope. Dec 5 05:23:26 localhost systemd[1]: Started libcrun container. Dec 5 05:23:26 localhost podman[328051]: 2025-12-05 10:23:26.592952207 +0000 UTC m=+0.038092229 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 05:23:26 localhost podman[328051]: 2025-12-05 10:23:26.702536348 +0000 UTC m=+0.147676280 container init 1b92f84177a448207a1fa8976a90b77d68a5d3ccd47503af9bb8c230ecf3febb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_robinson, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-type=git, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1763362218, version=7, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 5 05:23:26 localhost podman[328051]: 2025-12-05 10:23:26.714127584 +0000 UTC m=+0.159267506 container start 1b92f84177a448207a1fa8976a90b77d68a5d3ccd47503af9bb8c230ecf3febb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_robinson, io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-type=git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=1763362218, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 05:23:26 localhost podman[328051]: 2025-12-05 10:23:26.714454684 +0000 UTC m=+0.159594646 container attach 1b92f84177a448207a1fa8976a90b77d68a5d3ccd47503af9bb8c230ecf3febb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_robinson, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-type=git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, name=rhceph, description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 05:23:26 localhost systemd[1]: libpod-1b92f84177a448207a1fa8976a90b77d68a5d3ccd47503af9bb8c230ecf3febb.scope: Deactivated successfully. Dec 5 05:23:26 localhost clever_robinson[328066]: 167 167 Dec 5 05:23:26 localhost podman[328051]: 2025-12-05 10:23:26.721874912 +0000 UTC m=+0.167014864 container died 1b92f84177a448207a1fa8976a90b77d68a5d3ccd47503af9bb8c230ecf3febb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_robinson, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, RELEASE=main, build-date=2025-11-26T19:44:28Z, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux ) Dec 5 05:23:26 localhost podman[328071]: 2025-12-05 10:23:26.824101597 +0000 UTC m=+0.092766246 container remove 1b92f84177a448207a1fa8976a90b77d68a5d3ccd47503af9bb8c230ecf3febb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_robinson, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=, architecture=x86_64, version=7, 
ceph=True, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 5 05:23:26 localhost systemd[1]: libpod-conmon-1b92f84177a448207a1fa8976a90b77d68a5d3ccd47503af9bb8c230ecf3febb.scope: Deactivated successfully. Dec 5 05:23:27 localhost podman[328093]: Dec 5 05:23:27 localhost podman[328093]: 2025-12-05 10:23:27.05734122 +0000 UTC m=+0.081772249 container create 2d2ee71412bf463236ad24a131a64d6fe51089b2af10a76805f6e3fb7992b751 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_hodgkin, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, ceph=True, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-type=git, release=1763362218, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 5 05:23:27 localhost systemd[1]: Started libpod-conmon-2d2ee71412bf463236ad24a131a64d6fe51089b2af10a76805f6e3fb7992b751.scope. Dec 5 05:23:27 localhost podman[328093]: 2025-12-05 10:23:27.024851354 +0000 UTC m=+0.049282433 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 5 05:23:27 localhost systemd[1]: Started libcrun container. 
Dec 5 05:23:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032d5ebac0f3716ecc00c20856d2a41c583445186e5503b6c2ad25f1f3c81ba5/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 5 05:23:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032d5ebac0f3716ecc00c20856d2a41c583445186e5503b6c2ad25f1f3c81ba5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 5 05:23:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032d5ebac0f3716ecc00c20856d2a41c583445186e5503b6c2ad25f1f3c81ba5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 5 05:23:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032d5ebac0f3716ecc00c20856d2a41c583445186e5503b6c2ad25f1f3c81ba5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 5 05:23:27 localhost podman[328093]: 2025-12-05 10:23:27.146285648 +0000 UTC m=+0.170716687 container init 2d2ee71412bf463236ad24a131a64d6fe51089b2af10a76805f6e3fb7992b751 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_hodgkin, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 5 05:23:27 localhost podman[328093]: 2025-12-05 10:23:27.158611096 +0000 UTC m=+0.183042125 container start 2d2ee71412bf463236ad24a131a64d6fe51089b2af10a76805f6e3fb7992b751 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_hodgkin, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , vcs-type=git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, ceph=True, description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red 
Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_BRANCH=main) Dec 5 05:23:27 localhost podman[328093]: 2025-12-05 10:23:27.158881464 +0000 UTC m=+0.183312503 container attach 2d2ee71412bf463236ad24a131a64d6fe51089b2af10a76805f6e3fb7992b751 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_hodgkin, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 5 05:23:27 localhost openstack_network_exporter[241668]: ERROR 10:23:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:23:27 localhost openstack_network_exporter[241668]: ERROR 10:23:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:23:27 localhost openstack_network_exporter[241668]: ERROR 10:23:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:23:27 localhost openstack_network_exporter[241668]: ERROR 10:23:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:23:27 localhost openstack_network_exporter[241668]: Dec 5 05:23:27 localhost openstack_network_exporter[241668]: ERROR 10:23:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:23:27 localhost openstack_network_exporter[241668]: Dec 5 05:23:27 localhost systemd[1]: var-lib-containers-storage-overlay-ff8e79cd07cd35420504ed3adeeaf471eda08fe833d8e3618cd0bfe695b18817-merged.mount: Deactivated successfully. 
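The openstack_network_exporter errors above mean its ovs-appctl-style calls could not locate any daemon control sockets. A minimal sketch of that discovery step, assuming the usual <name>.<pid>.ctl socket naming and the common default run directories (neither is confirmed by this log):

import glob
import os

# Assumed defaults; the exporter's actual configured paths are not in the log.
RUN_DIRS = {
    "ovsdb-server": "/var/run/openvswitch",
    "ovn-northd": "/var/run/ovn",
}

def find_control_socket(daemon):
    # ovs/ovn daemons create <name>.<pid>.ctl unix sockets while running
    matches = glob.glob(os.path.join(RUN_DIRS[daemon], daemon + ".*.ctl"))
    return matches[0] if matches else None

for daemon in RUN_DIRS:
    if find_control_socket(daemon) is None:
        # mirrors the "no control socket files found for ..." errors above
        print("ERROR: no control socket files found for", daemon)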
Dec 5 05:23:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v853: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 340 B/s rd, 52 KiB/s wr, 3 op/s
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: [
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: {
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "available": false,
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "ceph_device": false,
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "device_id": "QEMU_DVD-ROM_QM00001",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "lsm_data": {},
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "lvs": [],
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "path": "/dev/sr0",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "rejected_reasons": [
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "Has a FileSystem",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "Insufficient space (<5GB)"
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: ],
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "sys_api": {
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "actuators": null,
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "device_nodes": "sr0",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "human_readable_size": "482.00 KB",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "id_bus": "ata",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "model": "QEMU DVD-ROM",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "nr_requests": "2",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "partitions": {},
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "path": "/dev/sr0",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "removable": "1",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "rev": "2.5+",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "ro": "0",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "rotational": "1",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "sas_address": "",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "sas_device_handle": "",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "scheduler_mode": "mq-deadline",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "sectors": 0,
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "sectorsize": "2048",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "size": 493568.0,
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "support_discard": "0",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "type": "disk",
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: "vendor": "QEMU"
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: }
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: }
Dec 5 05:23:28 localhost flamboyant_hodgkin[328107]: ]
Dec 5 05:23:28 localhost systemd[1]: libpod-2d2ee71412bf463236ad24a131a64d6fe51089b2af10a76805f6e3fb7992b751.scope: Deactivated successfully.
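The flamboyant_hodgkin fragments above reassemble into one ceph-volume-style device inventory entry. A minimal sketch of how such output is consumed; the JSON literal is copied from the log, with most sys_api keys trimmed for brevity:

import json

inventory_json = '''
[
  {
    "available": false,
    "ceph_device": false,
    "device_id": "QEMU_DVD-ROM_QM00001",
    "path": "/dev/sr0",
    "rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"],
    "sys_api": {"model": "QEMU DVD-ROM", "type": "disk", "vendor": "QEMU"}
  }
]
'''

# Devices carrying rejected_reasons are skipped as OSD candidates, which is
# why the only device reported on this host (/dev/sr0) never becomes one.
for dev in json.loads(inventory_json):
    if dev["available"]:
        print(dev["path"], "usable")
    else:
        print(dev["path"], "rejected:", "; ".join(dev["rejected_reasons"]))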
Dec 5 05:23:28 localhost podman[328093]: 2025-12-05 10:23:28.214362706 +0000 UTC m=+1.238793765 container died 2d2ee71412bf463236ad24a131a64d6fe51089b2af10a76805f6e3fb7992b751 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_hodgkin, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=1763362218, name=rhceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7) Dec 5 05:23:28 localhost systemd[1]: libpod-2d2ee71412bf463236ad24a131a64d6fe51089b2af10a76805f6e3fb7992b751.scope: Consumed 1.080s CPU time. Dec 5 05:23:28 localhost systemd[1]: var-lib-containers-storage-overlay-032d5ebac0f3716ecc00c20856d2a41c583445186e5503b6c2ad25f1f3c81ba5-merged.mount: Deactivated successfully. Dec 5 05:23:28 localhost podman[330087]: 2025-12-05 10:23:28.32858583 +0000 UTC m=+0.096517771 container remove 2d2ee71412bf463236ad24a131a64d6fe51089b2af10a76805f6e3fb7992b751 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_hodgkin, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=1763362218, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 05:23:28 localhost systemd[1]: libpod-conmon-2d2ee71412bf463236ad24a131a64d6fe51089b2af10a76805f6e3fb7992b751.scope: Deactivated successfully. 
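The create, init, start, attach, died, remove sequence above is the footprint of a short-lived helper container: the rhceph image is launched, its stdout is read, and the container is removed. A sketch of that pattern, assuming the helper ran something like ceph-volume inventory (inferred from the JSON it printed; the exact command line is not in the log):

import subprocess

# Assumed command; only the image name and the JSON output appear in the log.
result = subprocess.run(
    ["podman", "run", "--rm",
     "registry.redhat.io/rhceph/rhceph-7-rhel9:latest",
     "ceph-volume", "inventory", "--format=json"],
    capture_output=True, text=True, check=False,
)
print(result.stdout)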
Dec 5 05:23:28 localhost nova_compute[280228]: 2025-12-05 10:23:28.341 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:23:28 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:23:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:23:28 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:23:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:23:28 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:23:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:23:28 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:23:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:23:28 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:23:28 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev fc66ae67-459f-4316-a497-1ab3e68a0dbb (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:23:28 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev fc66ae67-459f-4316-a497-1ab3e68a0dbb (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:23:28 localhost ceph-mgr[286454]: [progress INFO root] Completed event fc66ae67-459f-4316-a497-1ab3e68a0dbb (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:23:28 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:23:28 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:23:28 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:23:28 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:23:28 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:23:28 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:23:29 localhost nova_compute[280228]: 2025-12-05 10:23:29.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic 
task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:23:29 localhost nova_compute[280228]: 2025-12-05 10:23:29.531 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:23:29 localhost nova_compute[280228]: 2025-12-05 10:23:29.532 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:23:29 localhost nova_compute[280228]: 2025-12-05 10:23:29.532 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:23:29 localhost nova_compute[280228]: 2025-12-05 10:23:29.532 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:23:29 localhost nova_compute[280228]: 2025-12-05 10:23:29.533 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:23:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v854: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 340 B/s rd, 35 KiB/s wr, 2 op/s Dec 5 05:23:29 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:23:29 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2154872146' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:23:29 localhost nova_compute[280228]: 2025-12-05 10:23:29.980 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:23:30 localhost nova_compute[280228]: 2025-12-05 10:23:30.266 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:23:30 localhost nova_compute[280228]: 2025-12-05 10:23:30.267 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:23:30 localhost nova_compute[280228]: 2025-12-05 10:23:30.483 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:23:30 localhost nova_compute[280228]: 2025-12-05 10:23:30.485 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11027MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", 
"vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:23:30 localhost nova_compute[280228]: 2025-12-05 10:23:30.485 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:23:30 localhost nova_compute[280228]: 2025-12-05 10:23:30.486 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:23:30 localhost nova_compute[280228]: 2025-12-05 10:23:30.512 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:30 localhost nova_compute[280228]: 2025-12-05 10:23:30.563 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:23:30 localhost nova_compute[280228]: 2025-12-05 10:23:30.563 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:23:30 localhost nova_compute[280228]: 2025-12-05 10:23:30.564 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:23:30 localhost nova_compute[280228]: 2025-12-05 10:23:30.622 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:23:30 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:23:30 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:23:30 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:23:30 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:23:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:23:31 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1122658377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:23:31 localhost nova_compute[280228]: 2025-12-05 10:23:31.089 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:23:31 localhost nova_compute[280228]: 2025-12-05 10:23:31.094 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:23:31 localhost nova_compute[280228]: 2025-12-05 10:23:31.117 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:23:31 localhost nova_compute[280228]: 2025-12-05 10:23:31.119 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:23:31 localhost nova_compute[280228]: 2025-12-05 10:23:31.119 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:23:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:23:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v855: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 596 B/s rd, 53 KiB/s wr, 4 op/s Dec 5 05:23:32 localhost nova_compute[280228]: 2025-12-05 10:23:32.120 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:23:32 localhost nova_compute[280228]: 2025-12-05 10:23:32.120 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:23:32 localhost nova_compute[280228]: 2025-12-05 10:23:32.502 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:23:33 localhost nova_compute[280228]: 2025-12-05 10:23:33.343 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v856: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 596 B/s rd, 43 KiB/s wr, 3 op/s Dec 5 05:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:23:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:23:35 localhost systemd[1]: tmp-crun.o7V92C.mount: Deactivated successfully. Dec 5 05:23:35 localhost podman[330164]: 2025-12-05 10:23:35.221046523 +0000 UTC m=+0.098278255 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:23:35 localhost podman[330164]: 2025-12-05 10:23:35.254161659 +0000 UTC m=+0.131393341 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'cgroupns': 
'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent) Dec 5 05:23:35 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:23:35 localhost podman[330163]: 2025-12-05 10:23:35.312380845 +0000 UTC m=+0.190281207 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 05:23:35 localhost podman[330165]: 2025-12-05 10:23:35.370982982 +0000 UTC m=+0.243862620 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 5 05:23:35 localhost podman[330165]: 2025-12-05 10:23:35.381615129 +0000 UTC m=+0.254494797 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3) Dec 5 05:23:35 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
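The nova_compute lines earlier in this window that read Acquiring lock "compute_resources" / acquired / "released" (with waited and held durations) come from oslo.concurrency's named-lock helper, whose source path the log itself cites (oslo_concurrency/lockutils.py). A minimal sketch of the same pattern, with a placeholder critical section:

from oslo_concurrency import lockutils

# Serializes the resource-tracker critical section under a named lock;
# entry and exit emit the Acquiring/acquired/released DEBUG lines with
# the waited/held timings seen above.
@lockutils.synchronized('compute_resources')
def update_available_resource():
    pass  # placeholder for the audited resource accounting

update_available_resource()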
Dec 5 05:23:35 localhost podman[330163]: 2025-12-05 10:23:35.399181127 +0000 UTC m=+0.277081509 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:23:35 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 05:23:35 localhost nova_compute[280228]: 2025-12-05 10:23:35.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:23:35 localhost nova_compute[280228]: 2025-12-05 10:23:35.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:23:35 localhost nova_compute[280228]: 2025-12-05 10:23:35.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:23:35 localhost nova_compute[280228]: 2025-12-05 10:23:35.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:23:35 localhost nova_compute[280228]: 2025-12-05 10:23:35.516 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v857: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 596 B/s rd, 43 KiB/s wr, 3 op/s Dec 5 05:23:35 localhost nova_compute[280228]: 2025-12-05 10:23:35.954 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:23:35 localhost nova_compute[280228]: 2025-12-05 10:23:35.955 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:23:35 localhost nova_compute[280228]: 2025-12-05 10:23:35.955 
280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:23:35 localhost nova_compute[280228]: 2025-12-05 10:23:35.955 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:23:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:23:36 localhost nova_compute[280228]: 2025-12-05 10:23:36.991 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:23:37 localhost nova_compute[280228]: 2025-12-05 10:23:37.008 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:23:37 localhost nova_compute[280228]: 2025-12-05 10:23:37.008 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:23:37 localhost nova_compute[280228]: 2025-12-05 10:23:37.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:23:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v858: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 596 B/s rd, 53 KiB/s wr, 4 op/s Dec 5 05:23:38 localhost nova_compute[280228]: 2025-12-05 10:23:38.347 280232 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:38 localhost nova_compute[280228]: 2025-12-05 10:23:38.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:23:38 localhost nova_compute[280228]: 2025-12-05 10:23:38.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:23:38 localhost nova_compute[280228]: 2025-12-05 10:23:38.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:23:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v859: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 28 KiB/s wr, 2 op/s Dec 5 05:23:40 localhost nova_compute[280228]: 2025-12-05 10:23:40.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:23:40 localhost nova_compute[280228]: 2025-12-05 10:23:40.519 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:23:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v860: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 36 KiB/s wr, 3 op/s Dec 5 05:23:43 localhost nova_compute[280228]: 2025-12-05 10:23:43.350 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:23:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v861: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 18 KiB/s wr, 1 op/s Dec 5 05:23:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:23:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
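Each "Started /usr/bin/podman healthcheck run <id>" entry is a transient systemd unit executing the container's configured test command (the healthcheck keys inside the config_data blobs above). A sketch of the same invocation; the container ID is copied from the log, and the exit-code mapping follows podman's usual convention:

import subprocess

# ID taken from the "Started /usr/bin/podman healthcheck run ..." line above.
cid = "eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749"

# Exit 0 means the test command passed, which podman then records as
# health_status=healthy in the subsequent exec_died event.
rc = subprocess.run(["podman", "healthcheck", "run", cid]).returncode
print("healthy" if rc == 0 else "unhealthy")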
Dec 5 05:23:44 localhost podman[330223]: 2025-12-05 10:23:44.191663998 +0000 UTC m=+0.078325193 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 5 05:23:44 localhost podman[330224]: 2025-12-05 10:23:44.271318371 +0000 UTC m=+0.147756603 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 5 05:23:44 localhost podman[330224]: 2025-12-05 10:23:44.28171066 +0000 UTC m=+0.158148922 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 5 05:23:44 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully.
Dec 5 05:23:44 localhost podman[330223]: 2025-12-05 10:23:44.305802178 +0000 UTC m=+0.192463363 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 5 05:23:44 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully.
Dec 5 05:23:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:23:45
Dec 5 05:23:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 5 05:23:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap
Dec 5 05:23:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['volumes', 'vms', '.mgr', 'backups', 'manila_data', 'images', 'manila_metadata']
Dec 5 05:23:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes
Dec 5 05:23:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:23:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:23:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
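The node_exporter container whose healthcheck fires above publishes host metrics on the port mapping 'ports': ['9100:9100'] from its config_data. A quick way to see what that check is guarding, sketched with only the standard library; the URL is an assumption based on the logged port mapping and should be adjusted to the deployment:

```python
# Minimal probe of the node_exporter metrics endpoint declared above.
# The endpoint URL is an assumption derived from 'ports': ['9100:9100'].
import urllib.request

with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
    body = resp.read().decode("utf-8", errors="replace")

# Show the systemd-collector samples that the logged
# --collector.systemd.unit-include regex is meant to expose.
for line in body.splitlines():
    if line.startswith("node_systemd_unit_state"):
        print(line)
```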
Dec 5 05:23:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:23:45 localhost nova_compute[280228]: 2025-12-05 10:23:45.545 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:23:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v862: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 18 KiB/s wr, 1 op/s
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32)
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 5 05:23:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002340532959938582 of space, bias 4.0, pg target 1.8630642361111114 quantized to 16 (current 16)
Dec 5 05:23:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 5 05:23:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:23:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 5 05:23:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 5 05:23:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:23:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:23:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 5 05:23:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:23:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 5 05:23:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 5 05:23:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections..
Dec 5 05:23:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:23:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:23:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v863: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 18 KiB/s wr, 1 op/s
Dec 5 05:23:48 localhost nova_compute[280228]: 2025-12-05 10:23:48.353 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:23:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v864: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 7.7 KiB/s wr, 0 op/s
Dec 5 05:23:49 localhost podman[239519]: time="2025-12-05T10:23:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:23:49 localhost podman[239519]: @ - - [05/Dec/2025:10:23:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1"
Dec 5 05:23:49 localhost podman[239519]: @ - - [05/Dec/2025:10:23:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19278 "" "Go-http-client/1.1"
Dec 5 05:23:49 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57c122f3-1783-406d-a501-cfd05f2e9a11", "format": "json"}]: dispatch
Dec 5 05:23:49 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57c122f3-1783-406d-a501-cfd05f2e9a11, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:23:50 localhost nova_compute[280228]: 2025-12-05 10:23:50.548 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:23:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57c122f3-1783-406d-a501-cfd05f2e9a11, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:23:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:23:51 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57c122f3-1783-406d-a501-cfd05f2e9a11", "format": "json"}]: dispatch
Dec 5 05:23:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57c122f3-1783-406d-a501-cfd05f2e9a11, vol_name:cephfs) < ""
Dec 5 05:23:51 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57c122f3-1783-406d-a501-cfd05f2e9a11, vol_name:cephfs) < ""
Dec 5 05:23:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:23:51 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:23:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v865: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 7.7 KiB/s wr, 0 op/s
Dec 5 05:23:53 localhost nova_compute[280228]: 2025-12-05 10:23:53.396 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:23:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v866: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s wr, 0 op/s
Dec 5 05:23:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:23:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
Dec 5 05:23:54 localhost podman[330268]: 2025-12-05 10:23:54.193241222 +0000 UTC m=+0.081616684 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Dec 5 05:23:54 localhost podman[330268]: 2025-12-05 10:23:54.231698662 +0000 UTC m=+0.120074074 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 5 05:23:54 localhost podman[330269]: 2025-12-05 10:23:54.243463073 +0000 UTC m=+0.129617907 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 5 05:23:54 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:23:54 localhost podman[330269]: 2025-12-05 10:23:54.260751403 +0000 UTC m=+0.146906247 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 5 05:23:54 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
Dec 5 05:23:55 localhost nova_compute[280228]: 2025-12-05 10:23:55.552 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:23:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v867: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s wr, 0 op/s
Dec 5 05:23:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:23:57 localhost openstack_network_exporter[241668]: ERROR 10:23:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:23:57 localhost openstack_network_exporter[241668]: ERROR 10:23:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:23:57 localhost openstack_network_exporter[241668]: ERROR 10:23:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:23:57 localhost openstack_network_exporter[241668]: ERROR 10:23:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:23:57 localhost openstack_network_exporter[241668]:
Dec 5 05:23:57 localhost openstack_network_exporter[241668]: ERROR 10:23:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:23:57 localhost openstack_network_exporter[241668]:
Dec 5 05:23:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v868: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 6.0 KiB/s wr, 0 op/s
Dec 5 05:23:57 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a85e7b5b-d7ad-447d-9406-7e6c08cbb29a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:23:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a85e7b5b-d7ad-447d-9406-7e6c08cbb29a, vol_name:cephfs) < ""
Dec 5 05:23:57 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a85e7b5b-d7ad-447d-9406-7e6c08cbb29a/.meta.tmp'
Dec 5 05:23:57 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a85e7b5b-d7ad-447d-9406-7e6c08cbb29a/.meta.tmp' to config b'/volumes/_nogroup/a85e7b5b-d7ad-447d-9406-7e6c08cbb29a/.meta'
Dec 5 05:23:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a85e7b5b-d7ad-447d-9406-7e6c08cbb29a, vol_name:cephfs) < ""
Dec 5 05:23:57 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a85e7b5b-d7ad-447d-9406-7e6c08cbb29a", "format": "json"}]: dispatch
Dec 5 05:23:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a85e7b5b-d7ad-447d-9406-7e6c08cbb29a, vol_name:cephfs) < ""
Dec 5 05:23:57 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a85e7b5b-d7ad-447d-9406-7e6c08cbb29a, vol_name:cephfs) < ""
Dec 5 05:23:57 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:23:57 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:23:58 localhost nova_compute[280228]: 2025-12-05 10:23:58.433 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:23:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v869: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 5.9 KiB/s wr, 0 op/s
Dec 5 05:24:00 localhost nova_compute[280228]: 2025-12-05 10:24:00.555 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
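The ceph-mgr audit trail above shows the subvolume lifecycle that client.openstack drives: fs subvolume create with a size, namespace isolation and mode, followed by fs subvolume getpath. The same calls can be issued by hand; a sketch shelling out to the ceph CLI, assuming a reachable cluster and a local client keyring (the subvolume name here is hypothetical):

```python
# Sketch of the "fs subvolume create" / "fs subvolume getpath" dispatches
# logged above. Assumes the `ceph` CLI is installed and can reach the
# cluster; "demo-subvol" is a made-up name.
import subprocess

def ceph(*args: str) -> str:
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout

# 1 GiB isolated subvolume with mode 0755, matching the logged dispatch.
ceph("fs", "subvolume", "create", "cephfs", "demo-subvol",
     "--size", "1073741824", "--namespace-isolated", "--mode", "0755")

# Prints the data path under /volumes/_nogroup/... seen in the log.
print(ceph("fs", "subvolume", "getpath", "cephfs", "demo-subvol").strip())
```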
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.775463) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930240775505, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1769, "num_deletes": 253, "total_data_size": 1935734, "memory_usage": 1971328, "flush_reason": "Manual Compaction"}
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930240788541, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 1879954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42371, "largest_seqno": 44139, "table_properties": {"data_size": 1872568, "index_size": 4279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17200, "raw_average_key_size": 21, "raw_value_size": 1857133, "raw_average_value_size": 2304, "num_data_blocks": 184, "num_entries": 806, "num_filter_entries": 806, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930111, "oldest_key_time": 1764930111, "file_creation_time": 1764930240, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 13148 microseconds, and 5469 cpu microseconds.
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.788607) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 1879954 bytes OK
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.788640) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.790694) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.790721) EVENT_LOG_v1 {"time_micros": 1764930240790713, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.790746) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1928066, prev total WAL file size 1928066, number of live WAL files 2.
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.791569) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133333033' seq:72057594037927935, type:22 .. '7061786F73003133353535' seq:0, type:0; will stop at (end)
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(1835KB)], [78(17MB)]
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930240791638, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 19869661, "oldest_snapshot_seqno": -1}
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 14521 keys, 18558652 bytes, temperature: kUnknown
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930240893975, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 18558652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18475387, "index_size": 45852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 388551, "raw_average_key_size": 26, "raw_value_size": 18228656, "raw_average_value_size": 1255, "num_data_blocks": 1710, "num_entries": 14521, "num_filter_entries": 14521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764930240, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.894398) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 18558652 bytes
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.896441) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.9 rd, 181.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 17.2 +0.0 blob) out(17.7 +0.0 blob), read-write-amplify(20.4) write-amplify(9.9) OK, records in: 15056, records dropped: 535 output_compression: NoCompression
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.896469) EVENT_LOG_v1 {"time_micros": 1764930240896456, "job": 48, "event": "compaction_finished", "compaction_time_micros": 102470, "compaction_time_cpu_micros": 52417, "output_level": 6, "num_output_files": 1, "total_output_size": 18558652, "num_input_records": 15056, "num_output_records": 14521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930240896841, "job": 48, "event": "table_file_deletion", "file_number": 80}
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930240899176, "job": 48, "event": "table_file_deletion", "file_number": 78}
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.791481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.899425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.899434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.899437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.899440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:24:00 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:24:00.899443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 5 05:24:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:24:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v870: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 15 KiB/s wr, 0 op/s
Dec 5 05:24:01 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "a85e7b5b-d7ad-447d-9406-7e6c08cbb29a", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 5 05:24:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:a85e7b5b-d7ad-447d-9406-7e6c08cbb29a, vol_name:cephfs) < ""
Dec 5 05:24:01 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:a85e7b5b-d7ad-447d-9406-7e6c08cbb29a, vol_name:cephfs) < ""
Dec 5 05:24:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:24:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4162796722' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:24:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:24:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4162796722' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
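The JOB 48 compaction summary a few records above reports its own amplification factors (read-write-amplify(20.4), write-amplify(9.9)). Those follow directly from the byte counts logged in the same job, assuming RocksDB's convention of dividing by the L0 input size; a quick arithmetic check:

```python
# Recomputing the JOB 48 amplification factors from the logged sizes:
# L0 input = table #80 (1,879,954 B), total input = 19,869,661 B
# ("input_data_size"), output = table #81 (18,558,652 B).
l0_in = 1_879_954
l6_in = 19_869_661 - l0_in   # the L6 input, table #78
out = 18_558_652             # the compacted output, table #81

write_amplify = out / l0_in                         # ~9.9, as logged
read_write_amplify = (l0_in + l6_in + out) / l0_in  # ~20.4, as logged
print(f"write-amplify {write_amplify:.1f}, read-write-amplify {read_write_amplify:.1f}")
```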
Dec 5 05:24:03 localhost nova_compute[280228]: 2025-12-05 10:24:03.478 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:24:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v871: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 15 KiB/s wr, 0 op/s
Dec 5 05:24:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:24:04.580 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:24:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:24:04.581 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:24:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:24:04.582 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:24:05 localhost nova_compute[280228]: 2025-12-05 10:24:05.558 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:24:05 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a85e7b5b-d7ad-447d-9406-7e6c08cbb29a", "format": "json"}]: dispatch
Dec 5 05:24:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a85e7b5b-d7ad-447d-9406-7e6c08cbb29a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:24:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a85e7b5b-d7ad-447d-9406-7e6c08cbb29a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:24:05 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a85e7b5b-d7ad-447d-9406-7e6c08cbb29a' of type subvolume
Dec 5 05:24:05 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:24:05.585+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a85e7b5b-d7ad-447d-9406-7e6c08cbb29a' of type subvolume
Dec 5 05:24:05 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a85e7b5b-d7ad-447d-9406-7e6c08cbb29a", "force": true, "format": "json"}]: dispatch
Dec 5 05:24:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a85e7b5b-d7ad-447d-9406-7e6c08cbb29a, vol_name:cephfs) < ""
Dec 5 05:24:05 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a85e7b5b-d7ad-447d-9406-7e6c08cbb29a'' moved to trashcan
Dec 5 05:24:05 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:24:05 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a85e7b5b-d7ad-447d-9406-7e6c08cbb29a, vol_name:cephfs) < ""
Dec 5 05:24:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v872: 177 pgs: 177 active+clean; 225 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 15 KiB/s wr, 0 op/s
Dec 5 05:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:24:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:24:06 localhost systemd[1]: tmp-crun.uMCi66.mount: Deactivated successfully.
Dec 5 05:24:06 localhost systemd[1]: tmp-crun.Sg401O.mount: Deactivated successfully.
Dec 5 05:24:06 localhost podman[330308]: 2025-12-05 10:24:06.227053296 +0000 UTC m=+0.104826475 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 5 05:24:06 localhost podman[330308]: 2025-12-05 10:24:06.241559412 +0000 UTC m=+0.119332591 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, tcib_managed=true)
Dec 5 05:24:06 localhost podman[330306]: 2025-12-05 10:24:06.203531205 +0000 UTC m=+0.092248720 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 5 05:24:06 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:24:06 localhost podman[330307]: 2025-12-05 10:24:06.267748465 +0000 UTC m=+0.150066854 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 5 05:24:06 localhost podman[330306]: 2025-12-05 10:24:06.286678696 +0000 UTC m=+0.175396231 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 5 05:24:06 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:24:06 localhost podman[330307]: 2025-12-05 10:24:06.301622394 +0000 UTC m=+0.183940713 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Dec 5 05:24:06 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
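Each "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pair above is a transient systemd unit executing the container's configured check (the /openstack/healthcheck test in config_data) and logging the resulting health_status and exec_died events. The same check can be run ad hoc; a sketch with the standard library, using a container name taken from the log:

```python
# Run the same check the transient units above execute.
# `podman healthcheck run` exits 0 when healthy and non-zero otherwise.
import subprocess

result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_metadata_agent"],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print("healthy")
else:
    print(f"unhealthy: {result.stdout or result.stderr}")
```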
Dec 5 05:24:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:24:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v873: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 34 KiB/s wr, 1 op/s Dec 5 05:24:08 localhost nova_compute[280228]: 2025-12-05 10:24:08.518 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v874: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 29 KiB/s wr, 0 op/s Dec 5 05:24:10 localhost nova_compute[280228]: 2025-12-05 10:24:10.562 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "16e7fa2f-3ef8-4c00-9984-3af47fc17b5e", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:24:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:16e7fa2f-3ef8-4c00-9984-3af47fc17b5e, vol_name:cephfs) < "" Dec 5 05:24:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/16e7fa2f-3ef8-4c00-9984-3af47fc17b5e/.meta.tmp' Dec 5 05:24:11 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/16e7fa2f-3ef8-4c00-9984-3af47fc17b5e/.meta.tmp' to config b'/volumes/_nogroup/16e7fa2f-3ef8-4c00-9984-3af47fc17b5e/.meta' Dec 5 05:24:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:16e7fa2f-3ef8-4c00-9984-3af47fc17b5e, vol_name:cephfs) < "" Dec 5 05:24:11 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "16e7fa2f-3ef8-4c00-9984-3af47fc17b5e", "format": "json"}]: dispatch Dec 5 05:24:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:16e7fa2f-3ef8-4c00-9984-3af47fc17b5e, vol_name:cephfs) < "" Dec 5 05:24:11 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:16e7fa2f-3ef8-4c00-9984-3af47fc17b5e, vol_name:cephfs) < "" Dec 5 05:24:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:24:11 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:24:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 
05:24:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v875: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 51 KiB/s wr, 2 op/s Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.955 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.955 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.955 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.960 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '43f863c1-2041-49bf-8dab-9743dacd9362', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:24:12.955896', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '8b50bdf6-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.130170589, 'message_signature': '190c33218254528f07184040f8f1f5bf904b4bb7d63e4d07c66e75fd96f3a27b'}]}, 'timestamp': '2025-12-05 10:24:12.960708', '_unique_id': 'b03f833a002244edaefda07686d56dd7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.961 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.962 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.978 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 21470000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
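The inner ConnectionRefusedError above comes from a plain socket connect() inside amqp's transport, so the quickest independent check is a raw TCP probe of the broker endpoint. A minimal sketch; the host and port below are placeholders (this log does not show the transport_url), with 5672 assumed as RabbitMQ's default AMQP port:

    import socket

    # Placeholder endpoint: substitute the host/port from transport_url in
    # ceilometer.conf.
    BROKER = ("127.0.0.1", 5672)

    try:
        with socket.create_connection(BROKER, timeout=3):
            print("TCP connect OK -- broker is listening")
    except ConnectionRefusedError as exc:   # errno 111, as in the traceback above
        print(f"connection refused -- nothing listening: {exc}")
    except OSError as exc:                  # timeout, unreachable network, DNS
        print(f"other failure: {exc}")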
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.980 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '853a9090-bb6d-4ab9-a441-ebb76af13070', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 21470000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:24:12.962616', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '8b538f40-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.152745972, 'message_signature': '6e1d8179bdafa315e8262037e596ab4d91135123aee57b8883f49934ad8419d3'}]}, 'timestamp': '2025-12-05 10:24:12.979298', '_unique_id': 'cef5a1832bca489db3701520f6428255'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:12.981 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.016 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.017 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
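Every failed sample is reported as kombu.exceptions.OperationalError rather than the raw socket error because kombu re-raises OS-level failures as library errors (the _reraise_as_library_errors frames logged for the first occurrence above). A two-call reproduction, assuming kombu is installed; the broker URL is a deliberately unreachable placeholder:

    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Placeholder URL; nothing is expected to be listening on this port.
    conn = Connection("amqp://guest:guest@127.0.0.1:5672//", connect_timeout=2)
    try:
        # One attempt, no backoff: the socket-level ConnectionRefusedError is
        # re-raised as OperationalError, matching the chained traceback above.
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print(f"{type(exc).__name__}: {exc}")   # OperationalError: [Errno 111] ...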
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.019 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '67e7119f-1866-40ee-9120-1636e6421b27', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:24:12.982034', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8b596758-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.156362032, 'message_signature': '96ffb24949f9ef6068e54c2d9bac66d5c42777fecbd3ee73283b7ac62ef46039'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:24:12.982034', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8b5979a0-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.156362032, 'message_signature': 'ebc36fcd7606e8bff95cd274b1711dbd530cc062b1f36074697a0d429f9f7e03'}]}, 'timestamp': '2025-12-05 10:24:13.017956', '_unique_id': 'ec193d9b785141949e5fba46c06b3720'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.020 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.020 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
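Each "Could not send notification" record embeds the complete notification as a Python repr after "Payload=", so the dropped samples are recoverable from the log itself. A small sketch that re-extracts counter name, volume, and unit from one of these records; the string slicing assumes exactly the record layout shown above:

    import ast

    def samples_from_record(record: str):
        # The notification repr sits between "Payload=" and the trailing
        # ": kombu.exceptions.OperationalError ..." suffix of these records.
        repr_text = record.split("Payload=", 1)[1]
        repr_text = repr_text.rsplit(": kombu.exceptions", 1)[0]
        notification = ast.literal_eval(repr_text)  # plain dicts/lists/None only
        for sample in notification["payload"]["samples"]:
            yield (sample["counter_name"], sample["counter_volume"],
                   sample["counter_unit"], sample["resource_id"])

    # Usage, with `line` being one of the ERROR records read from this log:
    #   for name, volume, unit, resource in samples_from_record(line):
    #       print(name, volume, unit, resource)   # e.g. memory.usage 51.7421875 MB ...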
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.021 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a0099694-cd0d-4dfc-be5b-778cc7ff224a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:24:13.020486', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '8b59ed2c-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.152745972, 'message_signature': '1a7281cee2c5e13957c6657f7601aa433fbc033d6d06f2c908068f0e3a665a35'}]}, 'timestamp': '2025-12-05 10:24:13.020910', '_unique_id': 'd9184393c4d24eff9dd75bbdf638598d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.022 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.023 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.023 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
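For reference, the emitting side of these records is oslo.messaging's notifier: each batch of samples is published with priority SAMPLE and event_type telemetry.polling to the notifications topic, which is what the record text above reflects. A rough sketch of that path under stated assumptions; the transport URL is a placeholder and ceilometer's actual wiring goes through its publisher plumbing rather than this direct call:

    from oslo_config import cfg
    import oslo_messaging

    # Placeholder broker URL -- while RabbitMQ is down, publishing raises the
    # same kombu.exceptions.OperationalError chain logged above.
    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url="rabbit://guest:guest@127.0.0.1:5672/")
    notifier = oslo_messaging.Notifier(transport,
                                       publisher_id="ceilometer.polling",
                                       driver="messagingv2",
                                       topics=["notifications"],
                                       retry=0)   # 0 = do not retry the send
    # priority SAMPLE, event_type telemetry.polling -- as in the failed records.
    notifier.sample({}, "telemetry.polling", {"samples": []})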
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.024 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd6e5244a-76d1-4f9f-a2cb-6bfe2b64ca08', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:24:13.022968', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8b5a4dd0-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.156362032, 'message_signature': 'eb93bbda2bbea6e23d2fb91c4fcc7476c92490b3d620f0fbaf30ec612dc90750'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:24:13.022968', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8b5a5ec4-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.156362032, 'message_signature': '519e8f7981da2a05fbbae6b4e4a737046a849e31903d6fc8ac7efa5011841094'}]}, 'timestamp': '2025-12-05 10:24:13.023805', '_unique_id': '78ae0d93019a4d238be1aab123c8c220'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.025 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.036 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.036 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '30f01766-dbca-4692-9dc1-ef59d6ca4547', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:24:13.025883', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8b5c55d0-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.200202456, 'message_signature': 'b816623b2a9a3995690425ca977de3bea7cdeba6367a9f63851c62bd4ba5fd06'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:24:13.025883', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8b5c65e8-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.200202456, 'message_signature': '76225dea00db53f5368ac93112daacc548d432b85f991ac899027ce70d0b21c9'}]}, 'timestamp': '2025-12-05 10:24:13.037206', '_unique_id': '1c23b2e22d5f4ab4938b34cc6dea703c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.038 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.039 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.039 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.039 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '35626aba-a61a-4e7c-a6ea-d36570fa03ab', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:24:13.039426', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8b5cd0d2-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.156362032, 'message_signature': 'e346975f633375e243bac1916b53faf1fb22caa4a3ba9fba4dea30c00e35cc87'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:24:13.039426', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8b5ce02c-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.156362032, 'message_signature': 'e74e7ffb6fac6aedea6ba031aaa86e01bde261f358501658b34536d97cd41110'}]}, 'timestamp': '2025-12-05 10:24:13.040240', '_unique_id': '3218d3ccbf654dc5abf9fd84e8cb9df4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.041 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.042 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.042 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.042 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6694cf6c-3114-4860-992d-ca8d5a49a40c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:24:13.042939', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '8b5d5ab6-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.130170589, 'message_signature': 'bceaa6e8d925d2a2b7dc9079cff289c2d6aec546dffc098f0e219aeed6489a57'}]}, 'timestamp': '2025-12-05 10:24:13.043438', '_unique_id': 'fef8138afd11437d92e3510280a42867'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:24:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.044 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.045 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.045 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.045 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e7adf346-3688-4d73-a6ea-3612cd088da9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:24:13.045691', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '8b5dc604-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.130170589, 'message_signature': '9a1e856cbaf3c5edc4750d45e31541aa9432d8a20516405f65011d3e9b024b34'}]}, 'timestamp': '2025-12-05 10:24:13.046142', '_unique_id': 'f2568fa65af7403eb1b986d1329b3cb4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.047 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.048 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.048 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.048 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ada67f5f-063b-420d-b807-7f861de4d998', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:24:13.048138', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8b5e2612-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.156362032, 'message_signature': '01caa0f23fccf288dd0b1dd983b9145bc888fdb0e1f2deaa8f423c62288bbd62'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:24:13.048138', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8b5e353a-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.156362032, 'message_signature': '870cd86896a3efab93770700d262e27f8052cb365c3d7205e493f74e145d88c5'}]}, 'timestamp': '2025-12-05 10:24:13.048957', '_unique_id': 'a846de127e554db9b6b6fd3d68c4f409'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.049 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.050 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.051 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.051 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0fb5acee-ca5e-40e6-9cdd-eee7934e5d02', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:24:13.051043', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8b5e9764-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.200202456, 'message_signature': '77bcac6b04596ea913864af0cca6fa840e084f892698a6fec0071cb1f706a784'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:24:13.051043', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8b5ea6aa-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.200202456, 'message_signature': 'bcf83dcafcfc498505a53c831c67e234970605ba41d53ce2db7179c862e19fc0'}]}, 'timestamp': '2025-12-05 10:24:13.051859', '_unique_id': 'c2d856a20ec14f978f4cb4391a825377'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.052 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.053 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.053 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.054 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ed29cbfc-5b77-4087-aa36-6af449f52259', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:24:13.053897', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8b5f05c8-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.156362032, 'message_signature': 'd8fe4dc156bcd43e9f611572d02a2623d950b391f7e5a68ea7ba4e434d897713'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:24:13.053897', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8b5f161c-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.156362032, 'message_signature': 'da2f2d762b3211001eac89c630d0e6e9f7950c59ac8a2a1fdd759dbcb24cff20'}]}, 'timestamp': '2025-12-05 10:24:13.054714', '_unique_id': '99c655c458274c16984676b0bae85d09'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.055 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.056 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.056 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.057 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0a5532dd-1a9c-49d0-83e7-d75150c97230', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:24:13.056741', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8b5f74d6-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.156362032, 'message_signature': '342423904642922b730a9625dea12c68134491542417bd03b73301a09aba2fd4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:24:13.056741', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8b5f85c0-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.156362032, 'message_signature': '3993c5eb2cf1f572f01daf353e45c544fb6ecf9a90178fa5263c1473e890165d'}]}, 'timestamp': '2025-12-05 10:24:13.057574', '_unique_id': '30ba3df0fe7f41e7b5c3873c76333e23'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.058 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.060 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.060 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7ee1b929-6144-4fdc-b32b-acde766a02c1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:24:13.060217', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '8b6003c4-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.130170589, 'message_signature': '4b07a090f5cc29500c52adcd0dc66199f35a97fcb467b725a0f94a6d3e2fa4c3'}]}, 'timestamp': '2025-12-05 10:24:13.060908', '_unique_id': '9fb94208e1f846e48b4c4bf470edfad1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.062 12 ERROR oslo_messaging.notify.messaging
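Every sample in this stretch fails the same way: the AMQP socket connect is refused (ConnectionRefusedError, errno 111), kombu re-raises it as kombu.exceptions.OperationalError, and oslo.messaging logs the sample it could not deliver. So the broker endpoint is refusing connections; the agent itself is behaving normally. A minimal sketch of the same connect path, assuming a hypothetical transport URL (the real one lives in the agent's ceilometer.conf, not in this log):

```python
import kombu

# Hypothetical broker URL with placeholder credentials and host.
conn = kombu.Connection("amqp://guest:guest@192.0.2.10:5672//")
try:
    # In the kombu version shown in the traceback, ensure_connection()
    # re-raises low-level socket errors as kombu.exceptions.OperationalError,
    # the exact type logged above.
    conn.ensure_connection(max_retries=1)
except kombu.exceptions.OperationalError as exc:
    print(f"broker unreachable: {exc}")  # e.g. "[Errno 111] Connection refused"
finally:
    conn.release()
```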
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.063 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.064 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f3660ce3-2007-4149-95fd-4628d56ce033', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:24:13.064007', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '8b6091f4-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.130170589, 'message_signature': '4d9ba203873aee758fb1623a94d625e8ea3312e6b5d85daedfdf8a383c28786e'}]}, 'timestamp': '2025-12-05 10:24:13.064508', '_unique_id': '60fa0b9a8a1f4dabbb5f8651d5959449'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.065 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.066 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.066 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8717b36f-502c-4e15-8c3f-82e900bf3b92', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:24:13.066799', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '8b60ff2c-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.130170589, 'message_signature': '1a1e79261096fa8e6081895ca52418d5b96faf811870c056aa0795105fe8f4ae'}]}, 'timestamp': '2025-12-05 10:24:13.067329', '_unique_id': '671aee57518d4c00b3d4d5b0384c561a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.068 12 ERROR oslo_messaging.notify.messaging
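Errno 111 (ECONNREFUSED) means the TCP connection attempt reached a host that actively rejected it, i.e. nothing is listening on the port; it is not a DNS or routing failure. A quick probe from the compute node confirms this independently of the agent, here with a placeholder host and the default AMQP port 5672 standing in for the real endpoint:

```python
import errno
import socket

HOST, PORT = "192.0.2.10", 5672  # placeholder host; 5672 is the AMQP default

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print("broker port is accepting connections")
except OSError as exc:
    if exc.errno == errno.ECONNREFUSED:
        # Same condition as the log's "[Errno 111] Connection refused":
        # the host is reachable but no listener is bound to the port.
        print("connection refused: broker process down or not bound here")
    else:
        raise
```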
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.069 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.069 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '03f38286-e6a7-439c-bdbb-654817da0b65', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:24:13.069525', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '8b616976-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.130170589, 'message_signature': 'd48cc221435e24322a6c835e21162e5e20f8cc20efb1c8b1b5d105279ec3e0c3'}]}, 'timestamp': '2025-12-05 10:24:13.069993', '_unique_id': 'ad8b45d7e70f4644bb56b9106d4e45b4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.070 12 ERROR oslo_messaging.notify.messaging
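Note that the agent itself keeps running: in the oslo.messaging version shown in these tracebacks, the notify path catches the OperationalError and logs "Could not send notification" together with the undelivered payload, so each polled sample is dropped rather than crashing the poller. A sketch of that emit path under the same assumptions (hypothetical URL; the driver and topic names are taken from the payloads above, but the exact wiring is an assumption, since ceilometer builds its notifier from its own configuration):

```python
from oslo_config import cfg
import oslo_messaging

# Hypothetical URL; ceilometer reads transport_url from ceilometer.conf.
transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://guest:guest@192.0.2.10:5672/")
notifier = oslo_messaging.Notifier(
    transport,
    publisher_id="ceilometer.polling",  # matches 'publisher_id' above
    driver="messagingv2",
    topics=["notifications"])           # the topic named in the error

# sample() emits a SAMPLE-priority notification, the priority seen in the
# payloads. With the broker refusing connections, the call logs the same
# "Could not send notification" traceback instead of raising here.
notifier.sample({}, "telemetry.polling", {"samples": []})
```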
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.072 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.072 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '26ce4f4a-4eb5-4692-a930-1fcaf28519dc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:24:13.072308', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '8b61d7d0-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.130170589, 'message_signature': '53537520dfa400c069bc375045da05660dffd2b4126b03690aee7bb1c463cf19'}]}, 'timestamp': '2025-12-05 10:24:13.072820', '_unique_id': 'd03ce6888bf3466ebf1792444a7aa859'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.073 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.074 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.075 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '13bd1741-7933-4d43-8aeb-4eff3f022118', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:24:13.075019', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '8b62418e-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.130170589, 'message_signature': '6926704a82260f6ddb9adde902a4201d2527d298d7bad50a54c6ff61d4030ab7'}]}, 'timestamp': '2025-12-05 10:24:13.075529', '_unique_id': '3560fe0c7da84c20af48b8ef9e0ca0f3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.076 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.077 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.077 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd777866f-72b6-4c35-8928-96e6739895b8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:24:13.077667', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': '8b62a75a-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.130170589, 'message_signature': 'f39ff209eb096c3e6ce99477da537afb9f44d26ad3a3bed2a0922d0e2f6ef7d9'}]}, 'timestamp': '2025-12-05 10:24:13.078129', '_unique_id': 'a1721e51afe94199ad289784e6486cc3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 
05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:24:13 localhost systemd-journald[47252]: Data hash table of /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal has a fill level at 75.0 (53724 of 71630 items, 25165824 file size, 468 bytes per hash table item), 
suggesting rotation. Dec 5 05:24:13 localhost systemd-journald[47252]: /run/log/journal/d70e7573f9252a22999953aab4dc4dc5/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.079 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.080 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.080 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.080 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.080 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '97c69a19-a97d-426d-b498-1f72de67eaa1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:24:13.080472', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8b631528-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.200202456, 'message_signature': '46e460e6c092597bf919e0077fcea8682fe146c2cd0a90f73fbbf47c092f2374'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:24:13.080472', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8b6326a8-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13368.200202456, 'message_signature': 'aae7c5737ed7ff36ecc64efc97d7f3eead5ec315f85b0f7f0495b66d57fb826f'}]}, 'timestamp': '2025-12-05 10:24:13.081396', '_unique_id': '2ba4e2d38d324013bdf10a994384a638'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", 
line 826, in __init__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:24:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:24:13.082 12 ERROR oslo_messaging.notify.messaging Dec 5 05:24:13 localhost rsyslogd[758]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 5 05:24:13 localhost nova_compute[280228]: 2025-12-05 10:24:13.546 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v876: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 42 KiB/s wr, 2 op/s Dec 5 05:24:14 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "16e7fa2f-3ef8-4c00-9984-3af47fc17b5e", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch Dec 5 05:24:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:16e7fa2f-3ef8-4c00-9984-3af47fc17b5e, vol_name:cephfs) < "" Dec 5 05:24:14 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:16e7fa2f-3ef8-4c00-9984-3af47fc17b5e, vol_name:cephfs) < "" Dec 5 05:24:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:24:14 localhost podman[330365]: 2025-12-05 10:24:14.406474045 +0000 UTC m=+0.085968787 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 05:24:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. 
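[editor's note] Every ceilometer_agent_compute traceback above bottoms out in the same failure: amqp's transport connect() gets ECONNREFUSED (errno 111) from the RabbitMQ endpoint, kombu re-raises it as OperationalError, and oslo.messaging surfaces it as "Could not send notification to notifications". A minimal, hedged probe of the same call chain (kombu Connection.ensure_connection() -> amqp transport connect) is sketched below; the broker URL is a placeholder, and in a real deployment it would come from the service's oslo.messaging transport_url.

    # Minimal sketch, assuming a placeholder broker URL: reproduce the
    # connection attempt seen in the tracebacks above.
    from kombu import Connection
    from kombu.exceptions import OperationalError

    url = "amqp://guest:guest@rabbitmq.example:5672//"  # assumed placeholder
    try:
        with Connection(url, connect_timeout=5) as conn:
            # Raises OperationalError when the TCP connect is refused.
            conn.ensure_connection(max_retries=1)
            print("broker reachable")
    except OperationalError as exc:
        # With the broker down this prints "[Errno 111] Connection refused",
        # matching the errors logged by ceilometer_agent_compute above.
        print(f"broker unreachable: {exc}")
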
Dec 5 05:24:14 localhost podman[330365]: 2025-12-05 10:24:14.440769707 +0000 UTC m=+0.120264409 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:24:14 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:24:14 localhost podman[330387]: 2025-12-05 10:24:14.507192384 +0000 UTC m=+0.077008042 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Dec 5 05:24:14 localhost podman[330387]: 2025-12-05 10:24:14.564913905 +0000 UTC m=+0.134729573 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:24:14 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:24:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:24:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:24:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:24:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:24:15 localhost nova_compute[280228]: 2025-12-05 10:24:15.566 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v877: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 42 KiB/s wr, 2 op/s Dec 5 05:24:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
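[editor's note] The paired "Started /usr/bin/podman healthcheck run <id>" and "Deactivated successfully" entries show how these checks execute: a transient systemd unit invokes podman's healthcheck runner against the container, and the exec_died event carries health_status=healthy. The same probe can be driven by hand; a hedged sketch follows (the container name "node_exporter" is taken from the log, and podman exits 0 when the check passes).

    # Hedged sketch: run a container healthcheck the way the transient
    # systemd units above do, mapping the exit code to a status string.
    import subprocess

    def check(container: str) -> str:
        # `podman healthcheck run` exits 0 when the configured check passes.
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        return "healthy" if result.returncode == 0 else "unhealthy"

    print(check("node_exporter"))  # container name assumed from the log
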
Dec 5 05:24:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: []
Dec 5 05:24:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:24:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v878: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 55 KiB/s wr, 2 op/s
Dec 5 05:24:17 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "16e7fa2f-3ef8-4c00-9984-3af47fc17b5e", "format": "json"}]: dispatch
Dec 5 05:24:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:16e7fa2f-3ef8-4c00-9984-3af47fc17b5e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:24:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:16e7fa2f-3ef8-4c00-9984-3af47fc17b5e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:24:17 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:24:17.741+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '16e7fa2f-3ef8-4c00-9984-3af47fc17b5e' of type subvolume
Dec 5 05:24:17 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '16e7fa2f-3ef8-4c00-9984-3af47fc17b5e' of type subvolume
Dec 5 05:24:17 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "16e7fa2f-3ef8-4c00-9984-3af47fc17b5e", "force": true, "format": "json"}]: dispatch
Dec 5 05:24:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:16e7fa2f-3ef8-4c00-9984-3af47fc17b5e, vol_name:cephfs) < ""
Dec 5 05:24:17 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/16e7fa2f-3ef8-4c00-9984-3af47fc17b5e'' moved to trashcan
Dec 5 05:24:17 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:24:17 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:16e7fa2f-3ef8-4c00-9984-3af47fc17b5e, vol_name:cephfs) < ""
Dec 5 05:24:18 localhost nova_compute[280228]: 2025-12-05 10:24:18.583 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:24:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v879: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 35 KiB/s wr, 2 op/s
Dec 5 05:24:19 localhost podman[239519]: time="2025-12-05T10:24:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 5 05:24:19 localhost podman[239519]: @ - - [05/Dec/2025:10:24:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1"
Dec 5 05:24:19 localhost podman[239519]: @ - - [05/Dec/2025:10:24:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19283 "" "Go-http-client/1.1"
Dec 5 05:24:20 localhost nova_compute[280228]: 2025-12-05 10:24:20.568 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:24:20 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dc0bcf67-bebb-4a88-beaa-15102c0e4d74", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:24:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dc0bcf67-bebb-4a88-beaa-15102c0e4d74, vol_name:cephfs) < ""
Dec 5 05:24:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dc0bcf67-bebb-4a88-beaa-15102c0e4d74/.meta.tmp'
Dec 5 05:24:21 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dc0bcf67-bebb-4a88-beaa-15102c0e4d74/.meta.tmp' to config b'/volumes/_nogroup/dc0bcf67-bebb-4a88-beaa-15102c0e4d74/.meta'
Dec 5 05:24:21 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dc0bcf67-bebb-4a88-beaa-15102c0e4d74, vol_name:cephfs) < ""
Dec 5 05:24:21 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dc0bcf67-bebb-4a88-beaa-15102c0e4d74", "format": "json"}]: dispatch
Dec 5 05:24:21 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dc0bcf67-bebb-4a88-beaa-15102c0e4d74, vol_name:cephfs) < ""
Dec 5 05:24:21 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dc0bcf67-bebb-4a88-beaa-15102c0e4d74, vol_name:cephfs) < ""
Dec 5 05:24:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:24:21 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:24:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:24:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v880: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 71 KiB/s wr, 4 op/s
Dec 5 05:24:23 localhost nova_compute[280228]: 2025-12-05 10:24:23.619 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:24:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v881: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 48 KiB/s wr, 2 op/s
Dec 5 05:24:24 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dc0bcf67-bebb-4a88-beaa-15102c0e4d74", "format": "json"}]: dispatch
Dec 5 05:24:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dc0bcf67-bebb-4a88-beaa-15102c0e4d74, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:24:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dc0bcf67-bebb-4a88-beaa-15102c0e4d74, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:24:24 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:24:24.948+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dc0bcf67-bebb-4a88-beaa-15102c0e4d74' of type subvolume
Dec 5 05:24:24 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dc0bcf67-bebb-4a88-beaa-15102c0e4d74' of type subvolume
Dec 5 05:24:24 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dc0bcf67-bebb-4a88-beaa-15102c0e4d74", "force": true, "format": "json"}]: dispatch
Dec 5 05:24:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dc0bcf67-bebb-4a88-beaa-15102c0e4d74, vol_name:cephfs) < ""
Dec 5 05:24:24 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dc0bcf67-bebb-4a88-beaa-15102c0e4d74'' moved to trashcan
Dec 5 05:24:24 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:24:24 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dc0bcf67-bebb-4a88-beaa-15102c0e4d74, vol_name:cephfs) < ""
Dec 5 05:24:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.
Dec 5 05:24:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.
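[annotation] The records above show the recurring probe from client.openstack: create a subvolume, fetch its path, poll "fs clone status", receive EOPNOTSUPP (95) because the subvolume is not a clone, then remove it with force. A minimal sketch of that poll, assuming the ceph CLI and the client.openstack keyring seen in the log; matching on the stderr string is an illustrative assumption, not the exact client code.

import json
import subprocess

def clone_status(volume: str, clone: str):
    """Ask ceph-mgr for clone status; None means the mgr replied
    EOPNOTSUPP (95), i.e. this subvolume is not a clone."""
    proc = subprocess.run(
        ["ceph", "--id", "openstack", "fs", "clone", "status",
         volume, clone, "--format", "json"],
        capture_output=True, text=True,
    )
    if proc.returncode == 0:
        return json.loads(proc.stdout)
    if "Operation not supported" in proc.stderr:  # the (95) reply in the log
        return None
    proc.check_returncode()  # any other failure is a real error

print(clone_status("cephfs", "16e7fa2f-3ef8-4c00-9984-3af47fc17b5e"))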
Dec 5 05:24:25 localhost podman[330412]: 2025-12-05 10:24:25.205036088 +0000 UTC m=+0.084735000 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc.)
Dec 5 05:24:25 localhost podman[330412]: 2025-12-05 10:24:25.247393176 +0000 UTC m=+0.127092028 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 5 05:24:25 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully.
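[annotation] The pattern above repeats for every managed container: systemd starts a transient unit running "/usr/bin/podman healthcheck run <id>", podman emits a health_status event (healthy) followed by exec_died for the check process, and the unit deactivates. A minimal sketch of the same cycle driven by hand, assuming podman is installed; the inspect key is .State.Health.Status on current podman, but older releases exposed it as .State.Healthcheck.

import subprocess

def run_healthcheck(container: str) -> str:
    # Equivalent of the transient unit's command; exit 0 = healthy, 1 = unhealthy.
    subprocess.run(["podman", "healthcheck", "run", container], check=False)
    # Read back the recorded status (key name varies across podman versions).
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", container],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()  # e.g. "healthy", as in health_status= above

print(run_healthcheck("openstack_network_exporter"))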
Dec 5 05:24:25 localhost podman[330411]: 2025-12-05 10:24:25.300287229 +0000 UTC m=+0.183524110 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 5 05:24:25 localhost podman[330411]: 2025-12-05 10:24:25.315798734 +0000 UTC m=+0.199035635 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 5 05:24:25 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:24:25 localhost nova_compute[280228]: 2025-12-05 10:24:25.571 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:24:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v882: 177 pgs: 177 active+clean; 226 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 48 KiB/s wr, 2 op/s
Dec 5 05:24:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:24:27 localhost openstack_network_exporter[241668]: ERROR 10:24:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 5 05:24:27 localhost openstack_network_exporter[241668]: ERROR 10:24:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:24:27 localhost openstack_network_exporter[241668]: ERROR 10:24:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 5 05:24:27 localhost openstack_network_exporter[241668]: ERROR 10:24:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 5 05:24:27 localhost openstack_network_exporter[241668]:
Dec 5 05:24:27 localhost openstack_network_exporter[241668]: ERROR 10:24:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 5 05:24:27 localhost openstack_network_exporter[241668]:
Dec 5 05:24:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v883: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 63 KiB/s wr, 2 op/s
Dec 5 05:24:28 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57c122f3-1783-406d-a501-cfd05f2e9a11", "format": "json"}]: dispatch
Dec 5 05:24:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57c122f3-1783-406d-a501-cfd05f2e9a11, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:24:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57c122f3-1783-406d-a501-cfd05f2e9a11, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 5 05:24:28 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57c122f3-1783-406d-a501-cfd05f2e9a11", "force": true, "format": "json"}]: dispatch
Dec 5 05:24:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57c122f3-1783-406d-a501-cfd05f2e9a11, vol_name:cephfs) < ""
Dec 5 05:24:28 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/57c122f3-1783-406d-a501-cfd05f2e9a11'' moved to trashcan
Dec 5 05:24:28 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 5 05:24:28 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57c122f3-1783-406d-a501-cfd05f2e9a11, vol_name:cephfs) < ""
Dec 5 05:24:28 localhost nova_compute[280228]: 2025-12-05 10:24:28.622 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:24:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v884: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 51 KiB/s wr, 2 op/s
Dec 5 05:24:30 localhost nova_compute[280228]: 2025-12-05 10:24:30.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:24:30 localhost nova_compute[280228]: 2025-12-05 10:24:30.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:24:30 localhost nova_compute[280228]: 2025-12-05 10:24:30.574 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:24:31 localhost nova_compute[280228]: 2025-12-05 10:24:31.143 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:24:31 localhost nova_compute[280228]: 2025-12-05 10:24:31.144 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:24:31 localhost nova_compute[280228]: 2025-12-05 10:24:31.144 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:24:31 localhost nova_compute[280228]: 2025-12-05 10:24:31.144 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 5 05:24:31 localhost nova_compute[280228]: 2025-12-05 10:24:31.145 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 5 05:24:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "snap_name": "cbd95e9a-4741-4e3c-931e-7bcd695d1401_ea25bb09-4f0c-4041-9779-4529a3bb81e3", "force": true, "format": "json"}]: dispatch
Dec 5 05:24:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cbd95e9a-4741-4e3c-931e-7bcd695d1401_ea25bb09-4f0c-4041-9779-4529a3bb81e3, sub_name:8a959872-096f-4524-beb3-16ecf762162b, vol_name:cephfs) < ""
Dec 5 05:24:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta.tmp'
Dec 5 05:24:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta.tmp' to config b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta'
Dec 5 05:24:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cbd95e9a-4741-4e3c-931e-7bcd695d1401_ea25bb09-4f0c-4041-9779-4529a3bb81e3, sub_name:8a959872-096f-4524-beb3-16ecf762162b, vol_name:cephfs) < ""
Dec 5 05:24:31 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "snap_name": "cbd95e9a-4741-4e3c-931e-7bcd695d1401", "force": true, "format": "json"}]: dispatch
Dec 5 05:24:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cbd95e9a-4741-4e3c-931e-7bcd695d1401, sub_name:8a959872-096f-4524-beb3-16ecf762162b, vol_name:cephfs) < ""
Dec 5 05:24:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta.tmp'
Dec 5 05:24:31 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta.tmp' to config b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b/.meta'
Dec 5 05:24:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:24:31 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cbd95e9a-4741-4e3c-931e-7bcd695d1401, sub_name:8a959872-096f-4524-beb3-16ecf762162b, vol_name:cephfs) < ""
Dec 5 05:24:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:24:31 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/398949017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:24:31 localhost nova_compute[280228]: 2025-12-05 10:24:31.571 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 05:24:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v885: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 80 KiB/s wr, 4 op/s
Dec 5 05:24:31 localhost nova_compute[280228]: 2025-12-05 10:24:31.789 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 5 05:24:31 localhost nova_compute[280228]: 2025-12-05 10:24:31.790 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 5 05:24:31 localhost nova_compute[280228]: 2025-12-05 10:24:31.979 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 5 05:24:31 localhost nova_compute[280228]: 2025-12-05 10:24:31.981 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11030MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 5 05:24:31 localhost nova_compute[280228]: 2025-12-05 10:24:31.982 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 5 05:24:31 localhost nova_compute[280228]: 2025-12-05 10:24:31.982 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 5 05:24:32 localhost nova_compute[280228]: 2025-12-05 10:24:32.216 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 5 05:24:32 localhost nova_compute[280228]: 2025-12-05 10:24:32.217 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 5 05:24:32 localhost nova_compute[280228]: 2025-12-05 10:24:32.217 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 5 05:24:32 localhost nova_compute[280228]: 2025-12-05 10:24:32.352 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 5 05:24:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0)
Dec 5 05:24:32 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:24:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0)
Dec 5 05:24:32 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:24:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain.devices.0}] v 0)
Dec 5 05:24:32 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:24:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546421.localdomain}] v 0)
Dec 5 05:24:32 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:24:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 5 05:24:32 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 5 05:24:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 5 05:24:32 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:24:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 5 05:24:32 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:24:32 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 29e5776a-f12c-460e-b4f1-0186daecde24 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:24:32 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 29e5776a-f12c-460e-b4f1-0186daecde24 (Updating node-proxy deployment (+3 -> 3))
Dec 5 05:24:32 localhost ceph-mgr[286454]: [progress INFO root] Completed event 29e5776a-f12c-460e-b4f1-0186daecde24 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 5 05:24:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 5 05:24:32 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 5 05:24:32 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:24:32 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:24:32 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:24:32 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:24:32 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 5 05:24:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 5 05:24:32 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/634200232' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 5 05:24:32 localhost nova_compute[280228]: 2025-12-05 10:24:32.844 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 5 05:24:32 localhost nova_compute[280228]: 2025-12-05 10:24:32.850 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 5 05:24:32 localhost nova_compute[280228]: 2025-12-05 10:24:32.867 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 5 05:24:32 localhost nova_compute[280228]: 2025-12-05 10:24:32.869 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 5 05:24:32 localhost nova_compute[280228]: 2025-12-05 10:24:32.869 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 5 05:24:33 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:24:33 localhost nova_compute[280228]: 2025-12-05 10:24:33.666 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:24:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v886: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 45 KiB/s wr, 2 op/s
Dec 5 05:24:35 localhost nova_compute[280228]: 2025-12-05 10:24:35.578 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 5 05:24:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v887: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 45 KiB/s wr, 2 op/s
Dec 5 05:24:35 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events
Dec 5 05:24:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 5 05:24:35 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:24:35 localhost nova_compute[280228]: 2025-12-05 10:24:35.868 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:24:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:24:36 localhost nova_compute[280228]: 2025-12-05 10:24:36.503 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:24:36 localhost nova_compute[280228]: 2025-12-05 10:24:36.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 5 05:24:36 localhost nova_compute[280228]: 2025-12-05 10:24:36.506 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 5 05:24:36 localhost nova_compute[280228]: 2025-12-05 10:24:36.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 5 05:24:36 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq'
Dec 5 05:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:24:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:24:37 localhost systemd[1]: tmp-crun.d0ocB2.mount: Deactivated successfully.
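[annotation] The resource audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" (via oslo_concurrency.processutils) to compute the free_disk / DISK_GB figures it then reports to placement. A minimal sketch of that call and the pool-stats parse, using plain subprocess instead of processutils; the pool name is an illustrative assumption.

import json
import subprocess

def rbd_pool_usage(pool: str = "vms"):
    """Return (bytes_used, max_avail) for one pool, using the exact
    CLI invocation logged by nova_compute above."""
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = next(p["stats"] for p in json.loads(out)["pools"]
                 if p["name"] == pool)
    return stats["bytes_used"], stats["max_avail"]

used, avail = rbd_pool_usage()
print(f"{used / 2**30:.1f} GiB used, {avail / 2**30:.1f} GiB available")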
Dec 5 05:24:37 localhost podman[330578]: 2025-12-05 10:24:37.217762296 +0000 UTC m=+0.098966106 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 5 05:24:37 localhost podman[330579]: 2025-12-05 10:24:37.261913 +0000 UTC m=+0.140794719 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 5 05:24:37 localhost podman[330578]: 2025-12-05 10:24:37.282143321 +0000 UTC m=+0.163347131 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 05:24:37 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:24:37 localhost podman[330579]: 2025-12-05 10:24:37.322673674 +0000 UTC m=+0.201555363 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec 5 05:24:37 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
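[annotation] The config_data above publishes the exporters on host ports ('9882:9882' for podman_exporter, '9105:9105' for openstack_network_exporter). A minimal scrape of either endpoint, assuming the conventional Prometheus /metrics path; only the ports come from the log.

from urllib.request import urlopen

def scrape(port: int) -> list[str]:
    """Fetch the exporter's metrics page and drop comment lines."""
    with urlopen(f"http://127.0.0.1:{port}/metrics", timeout=5) as resp:
        return [line for line in resp.read().decode().splitlines()
                if line and not line.startswith("#")]

print(len(scrape(9882)), "podman exporter samples")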
Dec 5 05:24:37 localhost podman[330580]: 2025-12-05 10:24:37.38058797 +0000 UTC m=+0.256484927 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3) Dec 5 05:24:37 localhost podman[330580]: 2025-12-05 10:24:37.42298108 +0000 UTC m=+0.298877987 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 5 05:24:37 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:24:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v888: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 58 KiB/s wr, 4 op/s Dec 5 05:24:38 localhost systemd[1]: tmp-crun.R3nSyX.mount: Deactivated successfully. Dec 5 05:24:38 localhost nova_compute[280228]: 2025-12-05 10:24:38.703 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8a959872-096f-4524-beb3-16ecf762162b", "format": "json"}]: dispatch Dec 5 05:24:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8a959872-096f-4524-beb3-16ecf762162b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:24:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8a959872-096f-4524-beb3-16ecf762162b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:24:38 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:24:38.816+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8a959872-096f-4524-beb3-16ecf762162b' of type subvolume Dec 5 05:24:38 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8a959872-096f-4524-beb3-16ecf762162b' of type subvolume Dec 5 05:24:38 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8a959872-096f-4524-beb3-16ecf762162b", "force": true, "format": "json"}]: dispatch Dec 5 05:24:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8a959872-096f-4524-beb3-16ecf762162b, vol_name:cephfs) < "" Dec 5 05:24:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8a959872-096f-4524-beb3-16ecf762162b'' moved to trashcan Dec 5 05:24:38 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:24:38 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8a959872-096f-4524-beb3-16ecf762162b, vol_name:cephfs) < "" Dec 5 05:24:38 localhost nova_compute[280228]: 2025-12-05 10:24:38.893 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:24:38 localhost nova_compute[280228]: 2025-12-05 10:24:38.893 280232 DEBUG oslo_concurrency.lockutils [None 
req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:24:38 localhost nova_compute[280228]: 2025-12-05 10:24:38.893 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:24:38 localhost nova_compute[280228]: 2025-12-05 10:24:38.894 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:24:39 localhost nova_compute[280228]: 2025-12-05 10:24:39.302 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:24:39 localhost nova_compute[280228]: 2025-12-05 10:24:39.398 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:24:39 localhost nova_compute[280228]: 2025-12-05 10:24:39.398 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:24:39 localhost nova_compute[280228]: 2025-12-05 10:24:39.399 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:24:39 localhost nova_compute[280228]: 2025-12-05 10:24:39.399 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task 
ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:24:39 localhost nova_compute[280228]: 2025-12-05 10:24:39.399 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 5 05:24:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v889: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 43 KiB/s wr, 3 op/s Dec 5 05:24:40 localhost nova_compute[280228]: 2025-12-05 10:24:40.579 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:40 localhost nova_compute[280228]: 2025-12-05 10:24:40.741 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:24:40 localhost nova_compute[280228]: 2025-12-05 10:24:40.741 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:24:40 localhost nova_compute[280228]: 2025-12-05 10:24:40.742 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:24:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:24:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v890: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 76 KiB/s wr, 4 op/s Dec 5 05:24:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e290 do_prune osdmap full prune enabled Dec 5 05:24:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e291 e291: 6 total, 6 up, 6 in Dec 5 05:24:41 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e291: 6 total, 6 up, 6 in Dec 5 05:24:42 localhost nova_compute[280228]: 2025-12-05 10:24:42.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:24:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:24:43.233 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=24) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:24:43 localhost ovn_metadata_agent[158815]: 2025-12-05 10:24:43.233 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:24:43 localhost nova_compute[280228]: 2025-12-05 10:24:43.234 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v892: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 55 KiB/s wr, 3 op/s Dec 5 05:24:43 localhost nova_compute[280228]: 2025-12-05 10:24:43.745 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:44 localhost ovn_metadata_agent[158815]: 2025-12-05 10:24:44.235 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:24:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:24:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:24:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:24:45 Dec 5 05:24:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:24:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:24:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['manila_data', 'volumes', 'vms', 'images', '.mgr', 'backups', 'manila_metadata'] Dec 5 05:24:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:24:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
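[editor's note] The nova-compute records above show the info-cache heal cycle taking and releasing an oslo.concurrency lock named after the instance before refreshing its network info. A minimal sketch of that pattern, with a hypothetical refresh_nw_info() standing in for nova's real neutron round-trip:

# Sketch of the "Acquired lock"/"Releasing lock" pattern from the
# records above. lockutils.lock() logs the acquire on entry and the
# release on exit, which is what lockutils.py:315 and :333 emit here.
from oslo_concurrency import lockutils

instance_uuid = "96a47a1c-57c7-4bb1-aecc-33db976db8c7"  # from the log

def refresh_nw_info(uuid):
    # hypothetical stand-in for nova's _get_instance_nw_info() work
    return {"instance": uuid}

with lockutils.lock(f"refresh_cache-{instance_uuid}"):
    info = refresh_nw_info(instance_uuid)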
Dec 5 05:24:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:24:45 localhost podman[330639]: 2025-12-05 10:24:45.198371466 +0000 UTC m=+0.087271718 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 5 05:24:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:24:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:24:45 localhost podman[330639]: 2025-12-05 10:24:45.242873951 +0000 UTC m=+0.131774253 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3) Dec 5 05:24:45 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. 
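[editor's note] The ovn_metadata_agent transaction above (a DbSetCommand against Chassis_Private) is the agent acknowledging nb_cfg 25 by writing it into the chassis external_ids. A sketch of the same update through ovsdbapp's generic backend API, assuming an already-connected southbound api object:

# Sketch of the Chassis_Private update from the DbSetCommand record.
# db_set(table, record, *col_values) queues the same command that
# do_commit() logged at transaction.py:89; `api` is assumed to be a
# connected ovsdbapp southbound backend exposing the generic helpers.
def ack_sb_cfg(api, chassis_uuid, nb_cfg):
    api.db_set(
        "Chassis_Private",
        chassis_uuid,
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
    ).execute(check_error=True)

# e.g. ack_sb_cfg(api, "22ecc443-b9ab-4c88-a730-5598bd07d403", 25),
# using the record UUID and nb_cfg value from the log.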
Dec 5 05:24:45 localhost podman[330640]: 2025-12-05 10:24:45.250235526 +0000 UTC m=+0.132509555 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:24:45 localhost podman[330640]: 2025-12-05 10:24:45.329752075 +0000 UTC m=+0.212026134 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:24:45 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
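[editor's note] The repeated "Running periodic task ComputeManager._..." lines come from oslo.service iterating over decorated manager methods. A minimal sketch of how such tasks are declared and driven; the 60-second spacing is an assumed value, not nova's actual setting:

# Sketch of the oslo.service periodic-task machinery behind the
# "Running periodic task ..." records. run_periodic_tasks() emits one
# such debug line per decorated method (periodic_task.py:210).
from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)  # spacing is an assumption
    def _poll_volume_usage(self, context):
        pass  # placeholder; nova's real task polls the virt driver

Manager().run_periodic_tasks(context=None)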
Dec 5 05:24:45 localhost nova_compute[280228]: 2025-12-05 10:24:45.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:24:45 localhost nova_compute[280228]: 2025-12-05 10:24:45.508 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 5 05:24:45 localhost nova_compute[280228]: 2025-12-05 10:24:45.535 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 5 05:24:45 localhost nova_compute[280228]: 2025-12-05 10:24:45.536 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:24:45 localhost nova_compute[280228]: 2025-12-05 10:24:45.583 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v893: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 55 KiB/s wr, 3 op/s Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-07 of space, 
bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:24:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0024953870917085426 of space, bias 4.0, pg target 1.986328125 quantized to 16 (current 16) Dec 5 05:24:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:24:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:24:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:24:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:24:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:24:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:24:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:24:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:24:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:24:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:24:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:24:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Dec 5 05:24:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 5 05:24:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 5 05:24:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:24:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v894: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 40 KiB/s wr, 2 op/s Dec 5 05:24:47 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e54: np0005546419.zhsnqq(active, since 23m), standbys: np0005546420.aoeylc, np0005546421.sukfea Dec 5 05:24:48 localhost nova_compute[280228]: 2025-12-05 10:24:48.787 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v895: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 40 KiB/s wr, 2 op/s Dec 5 05:24:49 localhost podman[239519]: time="2025-12-05T10:24:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:24:49 localhost podman[239519]: @ - - [05/Dec/2025:10:24:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:24:49 localhost podman[239519]: @ - - [05/Dec/2025:10:24:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19285 "" "Go-http-client/1.1" Dec 5 05:24:50 localhost nova_compute[280228]: 2025-12-05 10:24:50.587 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:24:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e291 do_prune osdmap full prune enabled Dec 5 05:24:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 e292: 6 total, 6 up, 6 in Dec 5 05:24:51 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e292: 6 total, 6 up, 6 in Dec 5 05:24:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v897: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s wr, 1 op/s Dec 5 05:24:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v898: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s wr, 1 op/s Dec 5 05:24:53 localhost nova_compute[280228]: 2025-12-05 10:24:53.825 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:55 localhost nova_compute[280228]: 2025-12-05 10:24:55.589 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v899: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s wr, 1 op/s Dec 5 05:24:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:24:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:24:56 localhost podman[330688]: 2025-12-05 10:24:56.205182642 +0000 UTC m=+0.088020511 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-type=git, name=ubi9-minimal, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 5 05:24:56 localhost podman[330688]: 2025-12-05 10:24:56.241640441 +0000 UTC m=+0.124478310 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
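[editor's note] The pg_autoscaler arithmetic in the earlier ceph-mgr records can be reproduced almost exactly. Assuming (none of this is stated in the log) 6 OSDs, 3x replication, and the default mon_target_pg_per_osd of 100, the cluster-wide budget is 100 * 6 / 3 = 200 PGs, and each pool's raw target is its usage ratio times its bias times that budget:

# Back-of-envelope check of the pg_autoscaler lines above; the
# 100 * 6 / 3 = 200 PG budget is an assumption that happens to fit.
def pg_target(usage_ratio, bias, target_pg_per_osd=100, osds=6, size=3):
    return usage_ratio * bias * (target_pg_per_osd * osds / size)

# Pool 'vms': "using 0.0033250017448352874 of space, bias 1.0"
print(pg_target(0.0033250017448352874, 1.0))  # 0.6650003489670575, as logged

# The autoscaler then quantizes toward a power of two but will not drop
# below the pool's pg_num_min, which is presumably why 0.665 still maps
# to 32 for 'vms' while the .mgr pool lands on 1.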
Dec 5 05:24:56 localhost podman[330687]: 2025-12-05 10:24:56.25661677 +0000 UTC m=+0.142268625 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true) Dec 5 05:24:56 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
Dec 5 05:24:56 localhost podman[330687]: 2025-12-05 10:24:56.269804084 +0000 UTC m=+0.155456019 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:24:56 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. 
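[editor's note] Each "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pair above is a transient systemd unit firing a container's configured healthcheck. The same check can be invoked by hand; a sketch using the multipathd container ID from the log:

# Sketch: run the healthcheck the transient units above fire.
# `podman healthcheck run` exits 0 when the configured test passes.
import subprocess

CONTAINER = "8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173"

result = subprocess.run(["podman", "healthcheck", "run", CONTAINER],
                        capture_output=True, text=True)
print("healthy" if result.returncode == 0
      else f"unhealthy: {result.stdout.strip()}")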
Dec 5 05:24:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:24:57 localhost openstack_network_exporter[241668]: ERROR 10:24:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:24:57 localhost openstack_network_exporter[241668]: ERROR 10:24:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:24:57 localhost openstack_network_exporter[241668]: ERROR 10:24:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:24:57 localhost openstack_network_exporter[241668]: ERROR 10:24:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:24:57 localhost openstack_network_exporter[241668]: Dec 5 05:24:57 localhost openstack_network_exporter[241668]: ERROR 10:24:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:24:57 localhost openstack_network_exporter[241668]: Dec 5 05:24:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v900: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s Dec 5 05:24:58 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:24:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, vol_name:cephfs) < "" Dec 5 05:24:58 localhost nova_compute[280228]: 2025-12-05 10:24:58.867 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:24:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7f0594c6-2cd4-4a2a-8b79-49233c58923a/.meta.tmp' Dec 5 05:24:58 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7f0594c6-2cd4-4a2a-8b79-49233c58923a/.meta.tmp' to config b'/volumes/_nogroup/7f0594c6-2cd4-4a2a-8b79-49233c58923a/.meta' Dec 5 05:24:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, vol_name:cephfs) < "" Dec 5 05:24:58 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "format": "json"}]: dispatch Dec 5 05:24:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, vol_name:cephfs) < "" Dec 5 05:24:58 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, 
vol_name:cephfs) < "" Dec 5 05:24:58 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:24:58 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:24:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v901: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s Dec 5 05:25:00 localhost nova_compute[280228]: 2025-12-05 10:25:00.592 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:25:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v902: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 15 KiB/s wr, 0 op/s Dec 5 05:25:02 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "snap_name": "2b579735-df10-471e-b27a-1c53c1d654a4", "format": "json"}]: dispatch Dec 5 05:25:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2b579735-df10-471e-b27a-1c53c1d654a4, sub_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, vol_name:cephfs) < "" Dec 5 05:25:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2b579735-df10-471e-b27a-1c53c1d654a4, sub_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, vol_name:cephfs) < "" Dec 5 05:25:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v903: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s Dec 5 05:25:03 localhost nova_compute[280228]: 2025-12-05 10:25:03.902 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:25:04.581 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:25:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:25:04.582 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:25:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:25:04.582 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:25:05 localhost nova_compute[280228]: 2025-12-05 10:25:05.594 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on 
fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v904: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s Dec 5 05:25:06 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:25:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6, vol_name:cephfs) < "" Dec 5 05:25:06 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6/.meta.tmp' Dec 5 05:25:06 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6/.meta.tmp' to config b'/volumes/_nogroup/13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6/.meta' Dec 5 05:25:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6, vol_name:cephfs) < "" Dec 5 05:25:06 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6", "format": "json"}]: dispatch Dec 5 05:25:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6, vol_name:cephfs) < "" Dec 5 05:25:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6, vol_name:cephfs) < "" Dec 5 05:25:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:25:06 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:25:06 localhost ceph-osd[32336]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3. Dec 5 05:25:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:25:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v905: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s wr, 1 op/s Dec 5 05:25:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:25:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. 
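[editor's note] The audit records above show the manila share driver walking the mgr volumes module through "fs subvolume create" and "fs subvolume getpath", dispatched as JSON mon commands. The same pair can be issued via the ceph CLI, with flags mirroring the logged arguments (size in bytes, namespace isolation, mode 0755); a sketch wrapped in subprocess:

# Sketch of the logged "fs subvolume create" / "fs subvolume getpath"
# pair, driven through the ceph CLI.
import subprocess

def ceph(*args):
    return subprocess.run(("ceph",) + args, check=True,
                          capture_output=True, text=True).stdout

sub = "7f0594c6-2cd4-4a2a-8b79-49233c58923a"  # sub_name from the log
ceph("fs", "subvolume", "create", "cephfs", sub,
     "1073741824", "--namespace-isolated", "--mode", "0755")
# getpath returns the data path under /volumes/_nogroup/<sub_name>/...
print(ceph("fs", "subvolume", "getpath", "cephfs", sub))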
Dec 5 05:25:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:25:08 localhost podman[330725]: 2025-12-05 10:25:08.203462047 +0000 UTC m=+0.081399688 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 05:25:08 localhost podman[330725]: 2025-12-05 10:25:08.212591906 +0000 UTC m=+0.090529547 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:25:08 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
Dec 5 05:25:08 localhost podman[330726]: 2025-12-05 10:25:08.258842525 +0000 UTC m=+0.132779474 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Dec 5 05:25:08 localhost podman[330727]: 2025-12-05 10:25:08.319831775 +0000 UTC m=+0.189974787 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, 
org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible) Dec 5 05:25:08 localhost podman[330727]: 2025-12-05 10:25:08.328314005 +0000 UTC m=+0.198457097 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 5 05:25:08 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
Dec 5 05:25:08 localhost podman[330726]: 2025-12-05 10:25:08.344275685 +0000 UTC m=+0.218212414 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true) Dec 5 05:25:08 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
Dec 5 05:25:08 localhost nova_compute[280228]: 2025-12-05 10:25:08.939 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v906: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s wr, 1 op/s Dec 5 05:25:09 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6", "format": "json"}]: dispatch Dec 5 05:25:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:25:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:25:09 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:25:09.854+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6' of type subvolume Dec 5 05:25:09 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6' of type subvolume Dec 5 05:25:09 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6", "force": true, "format": "json"}]: dispatch Dec 5 05:25:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6, vol_name:cephfs) < "" Dec 5 05:25:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6'' moved to trashcan Dec 5 05:25:09 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:25:09 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:13ee7259-aa20-40a4-9d1a-ffeb08f5d2a6, vol_name:cephfs) < "" Dec 5 05:25:10 localhost nova_compute[280228]: 2025-12-05 10:25:10.597 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:25:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v907: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 70 KiB/s wr, 2 op/s Dec 5 05:25:13 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "34c024ab-9cea-4833-9880-2441e954f452", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:25:13 localhost 
ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:34c024ab-9cea-4833-9880-2441e954f452, vol_name:cephfs) < "" Dec 5 05:25:13 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/34c024ab-9cea-4833-9880-2441e954f452/.meta.tmp' Dec 5 05:25:13 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/34c024ab-9cea-4833-9880-2441e954f452/.meta.tmp' to config b'/volumes/_nogroup/34c024ab-9cea-4833-9880-2441e954f452/.meta' Dec 5 05:25:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:34c024ab-9cea-4833-9880-2441e954f452, vol_name:cephfs) < "" Dec 5 05:25:13 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "34c024ab-9cea-4833-9880-2441e954f452", "format": "json"}]: dispatch Dec 5 05:25:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:34c024ab-9cea-4833-9880-2441e954f452, vol_name:cephfs) < "" Dec 5 05:25:13 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:34c024ab-9cea-4833-9880-2441e954f452, vol_name:cephfs) < "" Dec 5 05:25:13 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:25:13 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:25:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v908: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 57 KiB/s wr, 1 op/s Dec 5 05:25:13 localhost nova_compute[280228]: 2025-12-05 10:25:13.983 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:25:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:25:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:25:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:25:15 localhost nova_compute[280228]: 2025-12-05 10:25:15.600 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v909: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 57 KiB/s wr, 1 op/s Dec 5 05:25:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:25:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
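[editor's note] The "(95) Operation not supported" replies in the surrounding records are expected: "fs clone status" is only defined for subvolumes of type clone, and the client probes it on plain subvolumes to detect clone support. A sketch of the flow the mgr would accept, reusing the source subvolume and snapshot names from earlier records ("clone1" is a hypothetical target name):

# Sketch: make "fs clone status" legal by actually cloning a snapshot.
# Probing a plain subvolume instead returns EOPNOTSUPP, i.e. the (95)
# reply seen in the records.
import subprocess

def ceph(*args):
    return subprocess.run(("ceph",) + args, check=True,
                          capture_output=True, text=True).stdout

src = "7f0594c6-2cd4-4a2a-8b79-49233c58923a"   # subvolume from the log
snap = "2b579735-df10-471e-b27a-1c53c1d654a4"  # its snapshot, from the log
ceph("fs", "subvolume", "snapshot", "clone", "cephfs", src, snap, "clone1")
print(ceph("fs", "clone", "status", "cephfs", "clone1"))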
Dec 5 05:25:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:25:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:25:16 localhost systemd[1]: tmp-crun.OCi4k8.mount: Deactivated successfully. Dec 5 05:25:16 localhost podman[330784]: 2025-12-05 10:25:16.197027393 +0000 UTC m=+0.085449671 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:25:16 localhost podman[330785]: 2025-12-05 10:25:16.271439966 +0000 UTC m=+0.155435909 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 05:25:16 localhost podman[330784]: 2025-12-05 10:25:16.275613893 +0000 UTC m=+0.164036211 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:25:16 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:25:16 localhost podman[330785]: 2025-12-05 10:25:16.333176089 +0000 UTC m=+0.217172072 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 5 05:25:16 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
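[editor's note] The "Started /usr/bin/podman healthcheck run <id>" / "exec_died" / "Deactivated successfully" triplets above are transient systemd units that podman's healthcheck timer spawns: each run executes the container's configured test command (here '/openstack/healthcheck', per the healthcheck key in config_data) inside the container and the unit exits with its status. The same check can be triggered by hand; `podman healthcheck run` exits 0 when the container is healthy and nonzero otherwise (including when no healthcheck is defined). A short sketch, using one of the container IDs from the log as the example target:

import subprocess

CID = "6818c81339a3"  # ovn_controller's container ID prefix, from the log above

# Runs the container's configured healthcheck test and mirrors its status.
res = subprocess.run(["podman", "healthcheck", "run", CID])
print("healthy" if res.returncode == 0 else "unhealthy or no healthcheck")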
Dec 5 05:25:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:25:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "34c024ab-9cea-4833-9880-2441e954f452", "format": "json"}]: dispatch Dec 5 05:25:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:34c024ab-9cea-4833-9880-2441e954f452, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:25:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:34c024ab-9cea-4833-9880-2441e954f452, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:25:16 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:25:16.780+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '34c024ab-9cea-4833-9880-2441e954f452' of type subvolume Dec 5 05:25:16 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '34c024ab-9cea-4833-9880-2441e954f452' of type subvolume Dec 5 05:25:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "34c024ab-9cea-4833-9880-2441e954f452", "force": true, "format": "json"}]: dispatch Dec 5 05:25:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:34c024ab-9cea-4833-9880-2441e954f452, vol_name:cephfs) < "" Dec 5 05:25:16 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/34c024ab-9cea-4833-9880-2441e954f452'' moved to trashcan Dec 5 05:25:16 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:25:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:34c024ab-9cea-4833-9880-2441e954f452, vol_name:cephfs) < "" Dec 5 05:25:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v910: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 67 KiB/s wr, 3 op/s Dec 5 05:25:19 localhost nova_compute[280228]: 2025-12-05 10:25:19.004 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v911: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s Dec 5 05:25:19 localhost podman[239519]: time="2025-12-05T10:25:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:25:19 localhost podman[239519]: @ - - [05/Dec/2025:10:25:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:25:19 localhost podman[239519]: @ - - [05/Dec/2025:10:25:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 
19278 "" "Go-http-client/1.1" Dec 5 05:25:20 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9928dfa4-aa49-49e2-81bd-4195ffc621e2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:25:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9928dfa4-aa49-49e2-81bd-4195ffc621e2, vol_name:cephfs) < "" Dec 5 05:25:20 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9928dfa4-aa49-49e2-81bd-4195ffc621e2/.meta.tmp' Dec 5 05:25:20 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9928dfa4-aa49-49e2-81bd-4195ffc621e2/.meta.tmp' to config b'/volumes/_nogroup/9928dfa4-aa49-49e2-81bd-4195ffc621e2/.meta' Dec 5 05:25:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9928dfa4-aa49-49e2-81bd-4195ffc621e2, vol_name:cephfs) < "" Dec 5 05:25:20 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9928dfa4-aa49-49e2-81bd-4195ffc621e2", "format": "json"}]: dispatch Dec 5 05:25:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9928dfa4-aa49-49e2-81bd-4195ffc621e2, vol_name:cephfs) < "" Dec 5 05:25:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9928dfa4-aa49-49e2-81bd-4195ffc621e2, vol_name:cephfs) < "" Dec 5 05:25:20 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:25:20 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:25:20 localhost nova_compute[280228]: 2025-12-05 10:25:20.603 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:25:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v912: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 84 KiB/s wr, 4 op/s Dec 5 05:25:23 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9928dfa4-aa49-49e2-81bd-4195ffc621e2", "format": "json"}]: dispatch Dec 5 05:25:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9928dfa4-aa49-49e2-81bd-4195ffc621e2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:25:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:9928dfa4-aa49-49e2-81bd-4195ffc621e2, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:25:23 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:25:23.339+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9928dfa4-aa49-49e2-81bd-4195ffc621e2' of type subvolume Dec 5 05:25:23 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9928dfa4-aa49-49e2-81bd-4195ffc621e2' of type subvolume Dec 5 05:25:23 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9928dfa4-aa49-49e2-81bd-4195ffc621e2", "force": true, "format": "json"}]: dispatch Dec 5 05:25:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9928dfa4-aa49-49e2-81bd-4195ffc621e2, vol_name:cephfs) < "" Dec 5 05:25:23 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9928dfa4-aa49-49e2-81bd-4195ffc621e2'' moved to trashcan Dec 5 05:25:23 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:25:23 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9928dfa4-aa49-49e2-81bd-4195ffc621e2, vol_name:cephfs) < "" Dec 5 05:25:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v913: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 48 KiB/s wr, 3 op/s Dec 5 05:25:24 localhost nova_compute[280228]: 2025-12-05 10:25:24.040 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:25 localhost nova_compute[280228]: 2025-12-05 10:25:25.606 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v914: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 48 KiB/s wr, 3 op/s Dec 5 05:25:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:25:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "51c7d930-3a97-4fa3-ad82-6d9b7230a9bd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:25:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:51c7d930-3a97-4fa3-ad82-6d9b7230a9bd, vol_name:cephfs) < "" Dec 5 05:25:26 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/51c7d930-3a97-4fa3-ad82-6d9b7230a9bd/.meta.tmp' Dec 5 05:25:26 localhost ceph-mgr[286454]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/51c7d930-3a97-4fa3-ad82-6d9b7230a9bd/.meta.tmp' to config b'/volumes/_nogroup/51c7d930-3a97-4fa3-ad82-6d9b7230a9bd/.meta' Dec 5 05:25:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:51c7d930-3a97-4fa3-ad82-6d9b7230a9bd, vol_name:cephfs) < "" Dec 5 05:25:26 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "51c7d930-3a97-4fa3-ad82-6d9b7230a9bd", "format": "json"}]: dispatch Dec 5 05:25:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:51c7d930-3a97-4fa3-ad82-6d9b7230a9bd, vol_name:cephfs) < "" Dec 5 05:25:26 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:51c7d930-3a97-4fa3-ad82-6d9b7230a9bd, vol_name:cephfs) < "" Dec 5 05:25:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:25:26 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:25:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:25:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
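[editor's note] Each burst from client.openstack in this section is the same four-command CephFS subvolume lifecycle, apparently driven by an OpenStack share/volume service: "fs subvolume create" (1 or 2 GiB, namespace-isolated, mode 0755), "fs subvolume getpath", "fs clone status" — which the mgr rejects with (95) Operation not supported because these subvolumes are of type 'subvolume', not 'clone', so there is no clone state to report — and finally "fs subvolume rm" with force, which moves the path to the trashcan and queues an async purge job. The sketch below drives that sequence through the ceph CLI the way the logged JSON commands do; the `ceph()` wrapper is illustrative, and exact optional-flag spelling (--size, --namespace-isolated, --mode) can vary between Ceph releases.

import subprocess
import uuid

def ceph(*args):
    # Run a ceph CLI command as client.openstack; raises CalledProcessError
    # on nonzero exit, e.g. the EOPNOTSUPP (95) clone-status replies above.
    cmd = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
           "--format", "json", *args]
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout

name = str(uuid.uuid4())
ceph("fs", "subvolume", "create", "cephfs", name,
     "--size", "1073741824", "--namespace-isolated", "--mode", "0755")
path = ceph("fs", "subvolume", "getpath", "cephfs", name).strip()
try:
    ceph("fs", "clone", "status", "cephfs", name)   # type is 'subvolume'
except subprocess.CalledProcessError:
    pass                                            # matches the (95) replies
ceph("fs", "subvolume", "rm", "cephfs", name, "--force")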
Dec 5 05:25:27 localhost openstack_network_exporter[241668]: ERROR 10:25:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:25:27 localhost openstack_network_exporter[241668]: ERROR 10:25:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:25:27 localhost openstack_network_exporter[241668]: ERROR 10:25:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:25:27 localhost openstack_network_exporter[241668]: ERROR 10:25:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:25:27 localhost openstack_network_exporter[241668]: Dec 5 05:25:27 localhost openstack_network_exporter[241668]: ERROR 10:25:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:25:27 localhost openstack_network_exporter[241668]: Dec 5 05:25:27 localhost podman[330830]: 2025-12-05 10:25:27.203784584 +0000 UTC m=+0.088540766 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible) Dec 5 05:25:27 localhost podman[330830]: 2025-12-05 10:25:27.215742392 +0000 UTC m=+0.100498644 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 05:25:27 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:25:27 localhost podman[330831]: 2025-12-05 10:25:27.30894726 +0000 UTC m=+0.186441639 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 5 05:25:27 localhost podman[330831]: 2025-12-05 10:25:27.347582736 +0000 UTC m=+0.225077155 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 5 05:25:27 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. Dec 5 05:25:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v915: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 70 KiB/s wr, 4 op/s Dec 5 05:25:29 localhost nova_compute[280228]: 2025-12-05 10:25:29.076 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v916: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 60 KiB/s wr, 3 op/s Dec 5 05:25:30 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "51c7d930-3a97-4fa3-ad82-6d9b7230a9bd", "format": "json"}]: dispatch Dec 5 05:25:30 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:51c7d930-3a97-4fa3-ad82-6d9b7230a9bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:25:30 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:51c7d930-3a97-4fa3-ad82-6d9b7230a9bd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:25:30 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '51c7d930-3a97-4fa3-ad82-6d9b7230a9bd' of type subvolume Dec 5 05:25:30 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:25:30.048+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '51c7d930-3a97-4fa3-ad82-6d9b7230a9bd' of type subvolume Dec 5 05:25:30 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "51c7d930-3a97-4fa3-ad82-6d9b7230a9bd", "force": true, "format": "json"}]: dispatch Dec 5 05:25:30 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:51c7d930-3a97-4fa3-ad82-6d9b7230a9bd, vol_name:cephfs) < "" Dec 5 05:25:30 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/51c7d930-3a97-4fa3-ad82-6d9b7230a9bd'' moved to trashcan Dec 5 05:25:30 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:25:30 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:51c7d930-3a97-4fa3-ad82-6d9b7230a9bd, vol_name:cephfs) < "" Dec 5 05:25:30 localhost nova_compute[280228]: 2025-12-05 10:25:30.550 280232 DEBUG oslo_service.periodic_task [None 
req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:25:30 localhost nova_compute[280228]: 2025-12-05 10:25:30.609 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:31 localhost nova_compute[280228]: 2025-12-05 10:25:31.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:25:31 localhost nova_compute[280228]: 2025-12-05 10:25:31.524 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:25:31 localhost nova_compute[280228]: 2025-12-05 10:25:31.524 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:25:31 localhost nova_compute[280228]: 2025-12-05 10:25:31.525 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:25:31 localhost nova_compute[280228]: 2025-12-05 10:25:31.525 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:25:31 localhost nova_compute[280228]: 2025-12-05 10:25:31.526 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:25:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:25:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v917: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 98 KiB/s wr, 4 op/s Dec 5 05:25:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:25:31 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/1873975261' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.001 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.061 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.062 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.248 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.249 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11024MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", 
"vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.249 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.249 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.314 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.315 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.315 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.336 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing inventories for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.354 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating ProviderTree inventory for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.354 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Updating inventory in ProviderTree for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 with 
inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.370 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing aggregate associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.396 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Refreshing trait associations for resource provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3, traits: COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AMD_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.432 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:25:32 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:25:32 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/3746664931' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.878 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.884 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.896 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.899 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:25:32 localhost nova_compute[280228]: 2025-12-05 10:25:32.899 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:25:33 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c8b95898-15a4-4c97-aa64-8387a2d050a5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:25:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c8b95898-15a4-4c97-aa64-8387a2d050a5, vol_name:cephfs) < "" Dec 5 05:25:33 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c8b95898-15a4-4c97-aa64-8387a2d050a5/.meta.tmp' Dec 5 05:25:33 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c8b95898-15a4-4c97-aa64-8387a2d050a5/.meta.tmp' to config b'/volumes/_nogroup/c8b95898-15a4-4c97-aa64-8387a2d050a5/.meta' Dec 5 05:25:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:c8b95898-15a4-4c97-aa64-8387a2d050a5, vol_name:cephfs) < "" Dec 5 05:25:33 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c8b95898-15a4-4c97-aa64-8387a2d050a5", "format": "json"}]: dispatch Dec 5 05:25:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c8b95898-15a4-4c97-aa64-8387a2d050a5, vol_name:cephfs) < "" Dec 5 05:25:33 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c8b95898-15a4-4c97-aa64-8387a2d050a5, vol_name:cephfs) < "" Dec 5 05:25:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:25:33 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 05:25:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v918: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 60 KiB/s wr, 2 op/s Dec 5 05:25:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:25:33 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:25:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:25:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:25:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:25:33 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:25:33 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev f6465514-248a-4770-95a7-d3c5a48c68b8 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:25:33 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev f6465514-248a-4770-95a7-d3c5a48c68b8 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:25:33 localhost ceph-mgr[286454]: [progress INFO root] Completed event f6465514-248a-4770-95a7-d3c5a48c68b8 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:25:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:25:33 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:25:34 localhost nova_compute[280228]: 2025-12-05 10:25:34.116 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:34 localhost ceph-mon[292820]: from='mgr.44372 
172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:25:34 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:25:35 localhost nova_compute[280228]: 2025-12-05 10:25:35.611 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v919: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 60 KiB/s wr, 2 op/s Dec 5 05:25:35 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:25:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:25:35 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:25:35 localhost nova_compute[280228]: 2025-12-05 10:25:35.899 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:25:36 localhost nova_compute[280228]: 2025-12-05 10:25:36.503 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:25:36 localhost nova_compute[280228]: 2025-12-05 10:25:36.532 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:25:36 localhost nova_compute[280228]: 2025-12-05 10:25:36.533 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:25:36 localhost nova_compute[280228]: 2025-12-05 10:25:36.533 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:25:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:25:36 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:25:36 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c8b95898-15a4-4c97-aa64-8387a2d050a5", "format": "json"}]: dispatch Dec 5 05:25:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c8b95898-15a4-4c97-aa64-8387a2d050a5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:25:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:c8b95898-15a4-4c97-aa64-8387a2d050a5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:25:36 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:25:36.859+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c8b95898-15a4-4c97-aa64-8387a2d050a5' of type subvolume Dec 5 05:25:36 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c8b95898-15a4-4c97-aa64-8387a2d050a5' of type subvolume Dec 5 05:25:36 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c8b95898-15a4-4c97-aa64-8387a2d050a5", "force": true, "format": "json"}]: dispatch Dec 5 05:25:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c8b95898-15a4-4c97-aa64-8387a2d050a5, vol_name:cephfs) < "" Dec 5 05:25:36 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c8b95898-15a4-4c97-aa64-8387a2d050a5'' moved to trashcan Dec 5 05:25:36 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:25:36 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c8b95898-15a4-4c97-aa64-8387a2d050a5, vol_name:cephfs) < "" Dec 5 05:25:36 localhost nova_compute[280228]: 2025-12-05 10:25:36.959 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:25:36 localhost nova_compute[280228]: 2025-12-05 10:25:36.960 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:25:36 localhost nova_compute[280228]: 2025-12-05 10:25:36.960 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:25:36 localhost nova_compute[280228]: 2025-12-05 10:25:36.961 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:25:37 localhost nova_compute[280228]: 2025-12-05 10:25:37.385 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 
4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:25:37 localhost nova_compute[280228]: 2025-12-05 10:25:37.404 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:25:37 localhost nova_compute[280228]: 2025-12-05 10:25:37.404 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:25:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v920: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 69 KiB/s wr, 3 op/s Dec 5 05:25:38 localhost nova_compute[280228]: 2025-12-05 10:25:38.403 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:25:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:25:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:25:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
Dec 5 05:25:39 localhost nova_compute[280228]: 2025-12-05 10:25:39.149 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:39 localhost podman[330998]: 2025-12-05 10:25:39.204739001 +0000 UTC m=+0.089234408 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:25:39 localhost systemd[1]: tmp-crun.e1uaf7.mount: Deactivated successfully. Dec 5 05:25:39 localhost podman[330999]: 2025-12-05 10:25:39.271646573 +0000 UTC m=+0.154910702 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:25:39 localhost podman[330999]: 2025-12-05 10:25:39.281611079 +0000 UTC m=+0.164875248 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:25:39 localhost podman[330998]: 2025-12-05 10:25:39.290470581 +0000 UTC m=+0.174965988 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:25:39 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:25:39 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
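
Each "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pair above is a systemd transient unit firing a single health probe: podman executes the container's configured healthcheck 'test' command (the /openstack/healthcheck script mounted into each container) and records the resulting health_status. A sketch of the same probe issued by container name rather than ID, assuming podman's convention that a zero exit status means the check passed:

    import subprocess

    # Re-issue the probe that the transient units above run by container ID;
    # podman executes the container's own healthcheck 'test' command.
    for name in ("podman_exporter", "ovn_metadata_agent",
                 "ceilometer_agent_compute"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
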
Dec 5 05:25:39 localhost podman[331000]: 2025-12-05 10:25:39.373500827 +0000 UTC m=+0.248866263 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team) Dec 5 05:25:39 localhost podman[331000]: 2025-12-05 10:25:39.38175031 +0000 UTC m=+0.257115736 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm) Dec 5 05:25:39 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:25:39 localhost nova_compute[280228]: 2025-12-05 10:25:39.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:25:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v921: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 48 KiB/s wr, 2 op/s Dec 5 05:25:40 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "snap_name": "2b579735-df10-471e-b27a-1c53c1d654a4_3bb1d1a9-1164-4b5f-8e39-c63568c7ef8a", "force": true, "format": "json"}]: dispatch Dec 5 05:25:40 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b579735-df10-471e-b27a-1c53c1d654a4_3bb1d1a9-1164-4b5f-8e39-c63568c7ef8a, sub_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, vol_name:cephfs) < "" Dec 5 05:25:40 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7f0594c6-2cd4-4a2a-8b79-49233c58923a/.meta.tmp' Dec 5 05:25:40 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7f0594c6-2cd4-4a2a-8b79-49233c58923a/.meta.tmp' to config b'/volumes/_nogroup/7f0594c6-2cd4-4a2a-8b79-49233c58923a/.meta' Dec 5 05:25:40 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b579735-df10-471e-b27a-1c53c1d654a4_3bb1d1a9-1164-4b5f-8e39-c63568c7ef8a, sub_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, vol_name:cephfs) < "" Dec 5 05:25:40 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "snap_name": "2b579735-df10-471e-b27a-1c53c1d654a4", "force": true, "format": "json"}]: dispatch Dec 5 05:25:40 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b579735-df10-471e-b27a-1c53c1d654a4, sub_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, vol_name:cephfs) < "" Dec 5 05:25:40 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7f0594c6-2cd4-4a2a-8b79-49233c58923a/.meta.tmp' Dec 5 05:25:40 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7f0594c6-2cd4-4a2a-8b79-49233c58923a/.meta.tmp' to config b'/volumes/_nogroup/7f0594c6-2cd4-4a2a-8b79-49233c58923a/.meta' Dec 5 05:25:40 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:2b579735-df10-471e-b27a-1c53c1d654a4, sub_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, vol_name:cephfs) < "" Dec 5 05:25:40 localhost nova_compute[280228]: 2025-12-05 10:25:40.613 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:41 localhost nova_compute[280228]: 2025-12-05 10:25:41.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:25:41 localhost nova_compute[280228]: 2025-12-05 10:25:41.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:25:41 localhost nova_compute[280228]: 2025-12-05 10:25:41.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:25:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:25:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v922: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 94 KiB/s wr, 5 op/s Dec 5 05:25:42 localhost nova_compute[280228]: 2025-12-05 10:25:42.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:25:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v923: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 55 KiB/s wr, 3 op/s Dec 5 05:25:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "format": "json"}]: dispatch Dec 5 05:25:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:25:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:25:44 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:25:44.097+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7f0594c6-2cd4-4a2a-8b79-49233c58923a' of type subvolume Dec 5 05:25:44 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7f0594c6-2cd4-4a2a-8b79-49233c58923a' of type subvolume Dec 5 05:25:44 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", 
"vol_name": "cephfs", "sub_name": "7f0594c6-2cd4-4a2a-8b79-49233c58923a", "force": true, "format": "json"}]: dispatch Dec 5 05:25:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, vol_name:cephfs) < "" Dec 5 05:25:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7f0594c6-2cd4-4a2a-8b79-49233c58923a'' moved to trashcan Dec 5 05:25:44 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:25:44 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7f0594c6-2cd4-4a2a-8b79-49233c58923a, vol_name:cephfs) < "" Dec 5 05:25:44 localhost nova_compute[280228]: 2025-12-05 10:25:44.168 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:25:45 Dec 5 05:25:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:25:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:25:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['volumes', 'images', 'vms', 'manila_metadata', 'backups', 'manila_data', '.mgr'] Dec 5 05:25:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:25:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:25:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:25:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 5 05:25:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:25:45 localhost nova_compute[280228]: 2025-12-05 10:25:45.617 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v924: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 55 KiB/s wr, 3 op/s Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32) Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:25:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0026914084310441094 of space, bias 4.0, pg target 2.142361111111111 quantized to 16 (current 16) Dec 5 05:25:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:25:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:25:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:25:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:25:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:25:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:25:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:25:45 
localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:25:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:25:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:25:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:25:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:25:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:25:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e292 do_prune osdmap full prune enabled Dec 5 05:25:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e293 e293: 6 total, 6 up, 6 in Dec 5 05:25:46 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e293: 6 total, 6 up, 6 in Dec 5 05:25:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:25:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:25:47 localhost podman[331056]: 2025-12-05 10:25:47.198209436 +0000 UTC m=+0.088850967 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible) Dec 5 05:25:47 localhost podman[331056]: 2025-12-05 10:25:47.281600943 +0000 UTC m=+0.172242524 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 5 05:25:47 localhost systemd[1]: tmp-crun.wJAOcH.mount: Deactivated successfully. Dec 5 05:25:47 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:25:47 localhost podman[331057]: 2025-12-05 10:25:47.30300528 +0000 UTC m=+0.191933328 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:25:47 localhost podman[331057]: 2025-12-05 10:25:47.312597963 +0000 UTC m=+0.201526011 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:25:47 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:25:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v926: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 78 KiB/s wr, 4 op/s Dec 5 05:25:49 localhost nova_compute[280228]: 2025-12-05 10:25:49.204 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v927: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 78 KiB/s wr, 4 op/s Dec 5 05:25:49 localhost ovn_metadata_agent[158815]: 2025-12-05 10:25:49.864 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:25:49 localhost nova_compute[280228]: 2025-12-05 10:25:49.865 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:49 localhost podman[239519]: time="2025-12-05T10:25:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:25:49 localhost ovn_metadata_agent[158815]: 2025-12-05 10:25:49.867 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:25:49 localhost podman[239519]: @ - - [05/Dec/2025:10:25:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:25:49 localhost podman[239519]: @ - - [05/Dec/2025:10:25:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19280 "" "Go-http-client/1.1" Dec 5 05:25:50 localhost nova_compute[280228]: 2025-12-05 10:25:50.621 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:25:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v928: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 47 KiB/s wr, 3 op/s Dec 5 05:25:52 localhost ovn_metadata_agent[158815]: 2025-12-05 10:25:52.870 158820 
DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:25:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v929: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 47 KiB/s wr, 3 op/s Dec 5 05:25:54 localhost nova_compute[280228]: 2025-12-05 10:25:54.248 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:55 localhost nova_compute[280228]: 2025-12-05 10:25:55.624 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v930: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 47 KiB/s wr, 3 op/s Dec 5 05:25:56 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Dec 5 05:25:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, vol_name:cephfs) < "" Dec 5 05:25:56 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4e488967-e5b1-41c0-8ac2-92a447a23b8e/.meta.tmp' Dec 5 05:25:56 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4e488967-e5b1-41c0-8ac2-92a447a23b8e/.meta.tmp' to config b'/volumes/_nogroup/4e488967-e5b1-41c0-8ac2-92a447a23b8e/.meta' Dec 5 05:25:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, vol_name:cephfs) < "" Dec 5 05:25:56 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "format": "json"}]: dispatch Dec 5 05:25:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, vol_name:cephfs) < "" Dec 5 05:25:56 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, vol_name:cephfs) < "" Dec 5 05:25:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 5 05:25:56 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 5 
05:25:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:25:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e293 do_prune osdmap full prune enabled Dec 5 05:25:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e294 e294: 6 total, 6 up, 6 in Dec 5 05:25:56 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e294: 6 total, 6 up, 6 in Dec 5 05:25:57 localhost openstack_network_exporter[241668]: ERROR 10:25:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:25:57 localhost openstack_network_exporter[241668]: ERROR 10:25:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:25:57 localhost openstack_network_exporter[241668]: ERROR 10:25:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:25:57 localhost openstack_network_exporter[241668]: ERROR 10:25:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:25:57 localhost openstack_network_exporter[241668]: Dec 5 05:25:57 localhost openstack_network_exporter[241668]: ERROR 10:25:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:25:57 localhost openstack_network_exporter[241668]: Dec 5 05:25:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v932: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 35 KiB/s wr, 2 op/s Dec 5 05:25:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:25:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
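
The create/getpath pair dispatched at 05:25:56 is how a new share gets provisioned: a size-capped, namespace-isolated subvolume is created, then its data path is resolved for export. The CLI equivalents of those two audited JSON commands, sketched with subprocess (names and size copied from the log):

    import subprocess

    VOL, SUB = "cephfs", "4e488967-e5b1-41c0-8ac2-92a447a23b8e"

    # fs subvolume create: 2 GiB quota, isolated RADOS namespace, mode 0755,
    # matching the dispatched {"size": 2147483648, "namespace_isolated": true}.
    subprocess.run(
        ["ceph", "fs", "subvolume", "create", VOL, SUB,
         "--size", str(2 * 1024**3), "--namespace-isolated", "--mode", "0755"],
        check=True,
    )

    # fs subvolume getpath: resolve the data path under
    # /volumes/_nogroup/<subvolume>/... that is exported to the share client.
    path = subprocess.run(["ceph", "fs", "subvolume", "getpath", VOL, SUB],
                          capture_output=True, text=True, check=True)
    print(path.stdout.strip())
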
Dec 5 05:25:58 localhost podman[331103]: 2025-12-05 10:25:58.203397832 +0000 UTC m=+0.084816592 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 5 05:25:58 localhost podman[331103]: 2025-12-05 10:25:58.21866208 +0000 UTC m=+0.100080830 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 5 05:25:58 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:25:58 localhost systemd[1]: tmp-crun.o1MKtT.mount: Deactivated successfully. Dec 5 05:25:58 localhost podman[331104]: 2025-12-05 10:25:58.310328581 +0000 UTC m=+0.187314726 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9) Dec 5 05:25:58 localhost podman[331104]: 2025-12-05 10:25:58.348684518 +0000 UTC m=+0.225670683 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, release=1755695350, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 5 05:25:58 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
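
In the node_exporter config above, --collector.systemd.unit-include restricts the systemd collector to the EDPM, Open vSwitch, virt and rsyslog units. A quick check of what that regex (copied verbatim from the command line) matches, using sample unit names purely for illustration; node_exporter anchors its include patterns, which fullmatch approximates here:

    import re

    # Pattern copied from --collector.systemd.unit-include above.
    pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    # Sample unit names for illustration only.
    for unit in ("edpm_nova_compute.service", "openvswitch.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"):
        print(unit, bool(pattern.fullmatch(unit)))
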
Dec 5 05:25:59 localhost nova_compute[280228]: 2025-12-05 10:25:59.285 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:25:59 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "snap_name": "eb5d7ece-ad76-479c-8c09-47bce11dd7a1", "format": "json"}]: dispatch Dec 5 05:25:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb5d7ece-ad76-479c-8c09-47bce11dd7a1, sub_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, vol_name:cephfs) < "" Dec 5 05:25:59 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb5d7ece-ad76-479c-8c09-47bce11dd7a1, sub_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, vol_name:cephfs) < "" Dec 5 05:25:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v933: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 35 KiB/s wr, 2 op/s Dec 5 05:26:00 localhost nova_compute[280228]: 2025-12-05 10:26:00.627 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:26:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v934: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s wr, 1 op/s Dec 5 05:26:02 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "snap_name": "eb5d7ece-ad76-479c-8c09-47bce11dd7a1_70a4bdba-2abe-427a-b144-b7d84bdf25a4", "force": true, "format": "json"}]: dispatch Dec 5 05:26:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb5d7ece-ad76-479c-8c09-47bce11dd7a1_70a4bdba-2abe-427a-b144-b7d84bdf25a4, sub_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, vol_name:cephfs) < "" Dec 5 05:26:02 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4e488967-e5b1-41c0-8ac2-92a447a23b8e/.meta.tmp' Dec 5 05:26:02 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4e488967-e5b1-41c0-8ac2-92a447a23b8e/.meta.tmp' to config b'/volumes/_nogroup/4e488967-e5b1-41c0-8ac2-92a447a23b8e/.meta' Dec 5 05:26:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb5d7ece-ad76-479c-8c09-47bce11dd7a1_70a4bdba-2abe-427a-b144-b7d84bdf25a4, sub_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, vol_name:cephfs) < "" Dec 5 05:26:02 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", 
"sub_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "snap_name": "eb5d7ece-ad76-479c-8c09-47bce11dd7a1", "force": true, "format": "json"}]: dispatch Dec 5 05:26:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb5d7ece-ad76-479c-8c09-47bce11dd7a1, sub_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, vol_name:cephfs) < "" Dec 5 05:26:02 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4e488967-e5b1-41c0-8ac2-92a447a23b8e/.meta.tmp' Dec 5 05:26:02 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4e488967-e5b1-41c0-8ac2-92a447a23b8e/.meta.tmp' to config b'/volumes/_nogroup/4e488967-e5b1-41c0-8ac2-92a447a23b8e/.meta' Dec 5 05:26:02 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb5d7ece-ad76-479c-8c09-47bce11dd7a1, sub_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, vol_name:cephfs) < "" Dec 5 05:26:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 5 05:26:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4003524543' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 5 05:26:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 5 05:26:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4003524543' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 5 05:26:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v935: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s wr, 1 op/s Dec 5 05:26:04 localhost nova_compute[280228]: 2025-12-05 10:26:04.331 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:26:04.582 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:26:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:26:04.582 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:26:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:26:04.583 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:26:05 localhost nova_compute[280228]: 2025-12-05 10:26:05.630 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:05 localhost ceph-mgr[286454]: 
log_channel(cluster) log [DBG] : pgmap v936: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s wr, 1 op/s Dec 5 05:26:06 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "format": "json"}]: dispatch Dec 5 05:26:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:26:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:26:06 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:26:06.037+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4e488967-e5b1-41c0-8ac2-92a447a23b8e' of type subvolume Dec 5 05:26:06 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4e488967-e5b1-41c0-8ac2-92a447a23b8e' of type subvolume Dec 5 05:26:06 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4e488967-e5b1-41c0-8ac2-92a447a23b8e", "force": true, "format": "json"}]: dispatch Dec 5 05:26:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, vol_name:cephfs) < "" Dec 5 05:26:06 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4e488967-e5b1-41c0-8ac2-92a447a23b8e'' moved to trashcan Dec 5 05:26:06 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:26:06 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4e488967-e5b1-41c0-8ac2-92a447a23b8e, vol_name:cephfs) < "" Dec 5 05:26:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:26:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e294 do_prune osdmap full prune enabled Dec 5 05:26:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e295 e295: 6 total, 6 up, 6 in Dec 5 05:26:06 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e295: 6 total, 6 up, 6 in Dec 5 05:26:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v938: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 64 KiB/s wr, 3 op/s Dec 5 05:26:09 localhost nova_compute[280228]: 2025-12-05 10:26:09.364 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v939: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 64 KiB/s wr, 3 op/s Dec 5 05:26:09 localhost ceph-mon[292820]: rocksdb: 
[db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 5 05:26:09 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 7447 writes, 45K keys, 7447 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.04 MB/s#012Cumulative WAL: 7447 writes, 7447 syncs, 1.00 writes per sync, written: 0.06 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2282 writes, 10K keys, 2282 commit groups, 1.0 writes per commit group, ingest: 9.83 MB, 0.02 MB/s#012Interval WAL: 2282 writes, 2282 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.1 0.1 0.0 1.0 0.0 131.5 0.39 0.15 24 0.016 0 0 0.0 0.0#012 L6 1/0 17.70 MB 0.0 0.4 0.1 0.4 0.4 0.0 0.0 7.7 168.4 155.1 2.53 1.03 23 0.110 297K 11K 0.0 0.0#012 Sum 1/0 17.70 MB 0.0 0.4 0.1 0.4 0.4 0.1 0.0 8.7 145.9 151.9 2.92 1.18 47 0.062 297K 11K 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 14.7 161.4 163.5 0.82 0.37 14 0.059 102K 3762 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.4 0.1 0.4 0.4 0.0 0.0 0.0 168.4 155.1 2.53 1.03 23 0.110 297K 11K 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.1 0.1 0.0 0.0 0.0 132.3 0.39 0.15 23 0.017 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.050, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.43 GB write, 0.25 MB/s write, 0.42 GB read, 0.24 MB/s read, 2.9 seconds#012Interval compaction: 0.13 GB write, 0.22 MB/s write, 0.13 GB read, 0.22 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56443b711350#2 capacity: 304.00 MB usage: 50.28 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000449 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3251,48.40 MB,15.9194%) FilterBlock(47,844.67 KB,0.27134%) IndexBlock(47,1.06 MB,0.348307%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 5 05:26:10 localhost 
Dec 5 05:26:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.
Dec 5 05:26:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.
Dec 5 05:26:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.
Dec 5 05:26:10 localhost podman[331143]: 2025-12-05 10:26:10.220892947 +0000 UTC m=+0.095326095 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 5 05:26:10 localhost podman[331143]: 2025-12-05 10:26:10.226063105 +0000 UTC m=+0.100496253 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 5 05:26:10 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully.
Dec 5 05:26:10 localhost podman[331144]: 2025-12-05 10:26:10.286736126 +0000 UTC m=+0.153169299 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 5 05:26:10 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 5 05:26:10 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, vol_name:cephfs) < ""
Dec 5 05:26:10 localhost podman[331144]: 2025-12-05 10:26:10.326814575 +0000 UTC m=+0.193247778 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2)
Dec 5 05:26:10 localhost systemd[1]: tmp-crun.HM3JgN.mount: Deactivated successfully.
Dec 5 05:26:10 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully.
Dec 5 05:26:10 localhost podman[331142]: 2025-12-05 10:26:10.331141538 +0000 UTC m=+0.209574069 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 5 05:26:10 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86a9fc66-b92c-4138-9ee0-4dc920520e26/.meta.tmp'
Dec 5 05:26:10 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86a9fc66-b92c-4138-9ee0-4dc920520e26/.meta.tmp' to config b'/volumes/_nogroup/86a9fc66-b92c-4138-9ee0-4dc920520e26/.meta'
Dec 5 05:26:10 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, vol_name:cephfs) < ""
Dec 5 05:26:10 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "format": "json"}]: dispatch
Dec 5 05:26:10 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, vol_name:cephfs) < ""
Dec 5 05:26:10 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, vol_name:cephfs) < ""
Dec 5 05:26:10 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 5 05:26:10 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.15726 172.18.0.34:0/300864662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 5 05:26:10 localhost podman[331142]: 2025-12-05 10:26:10.415814995 +0000 UTC m=+0.294247566 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 5 05:26:10 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully.
Dec 5 05:26:10 localhost nova_compute[280228]: 2025-12-05 10:26:10.633 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:26:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:26:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v940: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s
Dec 5 05:26:12 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "snap_name": "b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5", "format": "json"}]: dispatch
Dec 5 05:26:12 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5, sub_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, vol_name:cephfs) < ""
Dec 5 05:26:12 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5, sub_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, vol_name:cephfs) < ""
Dec 5 05:26:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:12.958 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'name': 'test', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005546419.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'hostId': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 5 05:26:12 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:12.958 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.000 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.002 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
_stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd250c92b-7660-4215-863f-d7392c70eebc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:26:12.959182', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd2dd93ba-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.133521597, 'message_signature': 'eb58baaeeefe77d3be406068334ea31e3d4f40ffe1b8fac73b6f6087758c4dd9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:26:12.959182', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd2ddaf08-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.133521597, 'message_signature': '0df1aed5c74d029aa2cd89cd6374717d13bc87161125c244be08ef255f49aad1'}]}, 'timestamp': '2025-12-05 10:26:13.002683', '_unique_id': '76226bcabee94a3e8a6ef5843ceb09f3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 
12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.005 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.006 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.006 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 3720587262 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.007 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.latency volume: 23909565 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:26:13.009 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9ddf0870-488a-4c75-a369-7ca836ff62a1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3720587262, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:26:13.006880', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd2de67d6-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.133521597, 'message_signature': 'c84f28e6251b0e6b8562cf0929d6fb427d40ec55da61459386c89d5654054dd7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 23909565, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:26:13.006880', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd2de7d84-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.133521597, 'message_signature': '31848baa2d5406dbbdf2256b6f633776d5903db8de27196c7b4bbf8618891859'}]}, 'timestamp': '2025-12-05 10:26:13.007940', '_unique_id': 'ef525325614349468ebc2181ec67934a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR 
oslo_messaging.notify.messaging yield Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:26:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.009 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.010 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.010 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 397312 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.011 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '90840248-076c-4965-b050-2166f7a8410a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 397312, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:26:13.010517', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd2def5ac-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.133521597, 'message_signature': 'cb5fb7ab6675f881d2216b9d498917bfbcc305bd90277cd7ed70bf108751ec2b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:26:13.010517', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd2df0e0c-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.133521597, 'message_signature': 'e0f1d107c3e5358dae8c04ecad1fc41df79e25b8159c245145096f2f8b6b5fb6'}]}, 'timestamp': '2025-12-05 10:26:13.011644', '_unique_id': '9f45a4ca4f944b2e885aac180d624699'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.012 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.014 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.020 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
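The two-part traceback above is ordinary Python exception chaining: kombu catches the socket-level ConnectionRefusedError inside its _reraise_as_library_errors context manager and re-raises it with `raise ... from exc` (the `ConnectionError` name at kombu/connection.py line 450 is kombu's alias for kombu.exceptions.OperationalError, the type the log finally reports). That is also why contextlib's `__exit__`/`gen.throw` frames appear in the chain. A minimal sketch of the pattern, with stand-in names rather than kombu's real definitions:

```python
import socket
from contextlib import contextmanager

class OperationalError(Exception):
    """Stand-in for kombu.exceptions.OperationalError."""

@contextmanager
def reraise_as_library_errors(*catch):
    # Same shape as kombu's _reraise_as_library_errors: translate
    # low-level transport errors into one library-level type while
    # keeping the original attached as __cause__ ("raise ... from").
    try:
        yield
    except catch as exc:
        raise OperationalError(str(exc)) from exc

def establish_connection(host, port):
    # Hypothetical connect helper; ConnectionRefusedError is an OSError.
    with reraise_as_library_errors(OSError):
        socket.create_connection((host, port), timeout=1).close()

# Against a closed port this raises OperationalError("[Errno 111] ...")
# whose __cause__ is the ConnectionRefusedError, which Python prints as
# "The above exception was the direct cause of the following exception:".
```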
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.022 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '72da1a25-f505-4da2-ba3d-e8f39047a20f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:26:13.014630', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'd2e083cc-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.18903475, 'message_signature': '77060502511b91964973d01dae378e9164b44f914ccd8f3763b78495c7ddc63a'}]}, 'timestamp': '2025-12-05 10:26:13.021326', '_unique_id': 'e8c91f8403ec4e5096f3d0d7816d9887'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
    [... identical two-part traceback repeated verbatim (ConnectionRefusedError: [Errno 111] Connection refused -> kombu.exceptions.OperationalError: [Errno 111] Connection refused); see the first occurrence above ...]
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.023 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.023 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
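Errno 111 (`Connection refused`) means the TCP handshake reached a host that actively rejected it, i.e. nothing is listening on the broker port; that points at rabbitmq being down or stopped rather than a routing or firewall problem (those usually surface as a timeout or `No route to host`). A quick reachability probe, assuming a default AMQP endpoint since this excerpt does not show ceilometer's transport_url:

```python
import socket

# Hypothetical endpoint: the real host/port come from ceilometer's
# transport_url, which this excerpt does not include.
BROKER = ("controller.example", 5672)

try:
    with socket.create_connection(BROKER, timeout=3):
        print("AMQP port is accepting connections")
except ConnectionRefusedError as exc:
    # Errno 111: host reachable, no listener -> broker down/stopped
    print(f"refused: {exc}")
except OSError as exc:
    print(f"other failure (DNS/route/firewall?): {exc}")
```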
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.025 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '830363ad-1b24-4c29-9bf8-aca49047b287', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:26:13.023822', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'd2e10072-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.18903475, 'message_signature': '3d46f132a88189553a4825c140c71e55d20c585e16bfa0777eb1b61438398e43'}]}, 'timestamp': '2025-12-05 10:26:13.024495', '_unique_id': '327aa9f4a1074e76868165201dc29acd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
    [... identical two-part traceback repeated verbatim (ConnectionRefusedError: [Errno 111] Connection refused -> kombu.exceptions.OperationalError: [Errno 111] Connection refused); see the first occurrence above ...]
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.026 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.027 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
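Because the agent re-logs the identical traceback once per failed sample, a journal like this grows by dozens of lines per polling cycle while carrying a single fact. A small stdlib-only sketch (the log path is illustrative) that reduces such a capture to a per-meter failure count:

```python
import re
from collections import Counter

# Path is illustrative; point it at a capture like this one.
LOG = "ceilometer_agent_compute.log"
meter = re.compile(r"'counter_name': '([^']+)'")

counts = Counter()
with open(LOG) as fh:
    for line in fh:
        if "Could not send notification" in line:
            m = meter.search(line)
            counts[m.group(1) if m else "<no payload on line>"] += 1

for name, n in counts.most_common():
    print(f"{n:6d}  {name}")
```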
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.028 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b9df7d30-4441-436d-9ee1-00a6e942a763', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:26:13.026970', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'd2e17c96-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.18903475, 'message_signature': '46978196c10fc2ba353194941a1e9b76a6373a0dcc2f0f55b8c962ebfb53c937'}]}, 'timestamp': '2025-12-05 10:26:13.027634', '_unique_id': '69023f45564d44f086bab223f0d4d4b4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
    [... identical two-part traceback repeated verbatim (ConnectionRefusedError: [Errno 111] Connection refused -> kombu.exceptions.OperationalError: [Errno 111] Connection refused); see the first occurrence above ...]
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.030 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.030 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
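For reference, the failing records are SAMPLE-priority notifications published to the `notifications` topic via oslo.messaging, matching the `publisher_id`, `event_type`, and `priority` fields in the payloads above. A minimal sketch of that publishing path; the broker URL is an assumption, everything else mirrors the log:

```python
from oslo_config import cfg
import oslo_messaging

# Broker URL is an assumption; the rest mirrors the fields in the log:
# publisher_id 'ceilometer.polling', event_type 'telemetry.polling',
# priority SAMPLE, topic 'notifications'.
transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://guest:guest@controller.example:5672/")
notifier = oslo_messaging.Notifier(
    transport, publisher_id="ceilometer.polling",
    driver="messagingv2", topics=["notifications"])

# .sample() is what stamps 'priority': 'SAMPLE' on the notification;
# with the broker down it raises the OperationalError seen above.
notifier.sample({}, event_type="telemetry.polling",
                payload={"samples": []})
```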
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.031 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '39022b84-931e-4992-b8e0-7d0b003224b5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:26:13.030378', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'd2e1ff54-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.18903475, 'message_signature': '35a7d090ac4458b0f20f2a121fe99159dac88086c9b4bf0fdaf31dfb61179489'}]}, 'timestamp': '2025-12-05 10:26:13.030942', '_unique_id': '2f86e167192c4ef1a0078eaaf3e67a75'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
    [... identical two-part traceback repeated verbatim (ConnectionRefusedError: [Errno 111] Connection refused -> kombu.exceptions.OperationalError: [Errno 111] Connection refused); see the first occurrence above ...]
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.033 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.033 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
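The agent survives each failure and simply tries again on the next sample because kombu funnels the connect attempt through retry_over_time (visible at kombu/utils/functional.py line 312 in the traceback). A simplified stand-in for that loop, not kombu's actual implementation:

```python
import time

def retry_over_time(fun, catch, max_retries=None,
                    interval_start=1.0, interval_step=1.0, interval_max=30.0):
    # Simplified model of kombu.utils.functional.retry_over_time:
    # call fun(); on any exception in `catch`, sleep with a linearly
    # growing (capped) interval, optionally giving up after max_retries.
    interval = interval_start
    retries = 0
    while True:
        try:
            return fun()
        except catch:
            retries += 1
            if max_retries is not None and retries > max_retries:
                raise
            time.sleep(min(interval, interval_max))
            interval += interval_step
```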
Payload={'message_id': '3f2d87d0-43c3-4b45-a42f-3c3bf2155b74', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:26:13.033467', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'd2e27916-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.18903475, 'message_signature': 'b7bfab4f72a8ff3637184697954313d49b57165350a7951da1054d2903e30c7b'}]}, 'timestamp': '2025-12-05 10:26:13.034122', '_unique_id': '4df9a829bf33489aa73b717327299d9a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:26:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.035 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.036 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.036 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.047 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.047 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.049 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
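The disk.device.capacity gauge of 1073741824 B in the two DEBUG samples above is exactly 1 GiB, which lines up with the m1.small flavor's 1 GB disk ('disk': 1 in the payload metadata below); one sample is emitted per block device (vda and vdb). As a quick illustrative check of the unit conversion only:

    # 1 GiB = 2**30 bytes; the logged counter_volume is the flavor's
    # 1 GB disk expressed in bytes.
    assert 1 * 1024**3 == 1073741824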
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.049 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '81214c07-bd90-4b58-b0f3-665aad212fc3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:26:13.036869', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd2e492f0-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.2112016, 'message_signature': '33fae54b90baef27b219cc1d2b15bd5c0b299021910403796cec8b4a96c1b159'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:26:13.036869', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd2e4a51a-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.2112016, 'message_signature': 'db0ff34bff7fc874a45338b4a85ca7bcfa119c0c5ecc833b5d645550dc9fc98a'}]}, 'timestamp': '2025-12-05 10:26:13.048282', '_unique_id': 'b45f9acbe4c6442690db6cee364b45ba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
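Every notification in this stretch fails the same way: the agent cannot reach its RabbitMQ broker, and kombu surfaces the socket-level [Errno 111] as kombu.exceptions.OperationalError via _reraise_as_library_errors, as the traceback above shows. A minimal sketch that reproduces just this failure mode, assuming nothing is listening on the placeholder URL's port:

    import kombu
    from kombu.exceptions import OperationalError

    # Placeholder broker URL; the point is only that the port is closed.
    conn = kombu.Connection('amqp://guest:guest@localhost:5672//')
    try:
        # ensure_connection() retries, then re-raises the underlying
        # ConnectionRefusedError wrapped as OperationalError.
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print('broker unreachable:', exc)  # [Errno 111] Connection refused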
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.050 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.065 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/cpu volume: 22140000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
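The cpu counter is cumulative guest CPU time in nanoseconds, so the 22140000000 ns above is about 22.1 s of CPU time since the instance started; a utilisation figure comes from differencing two successive polls. Illustrative arithmetic only, with the second sample value and the 300 s polling interval made up for the example:

    def cpu_util_percent(v1_ns, v2_ns, interval_s, vcpus=1):
        # Fraction of wall-clock time the instance's vCPUs were busy.
        return (v2_ns - v1_ns) / (interval_s * 1e9 * vcpus) * 100.0

    # 0.3 s of CPU time over a 300 s interval -> 0.1 %
    print(cpu_util_percent(22_140_000_000, 22_440_000_000, 300))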
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.067 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f4878f51-e66a-4590-9a80-a048d19c7660', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 22140000000, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:26:13.050913', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'd2e76b56-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.240028264, 'message_signature': '06937a5c2bf16f8d80a312f87318f20367eacf127048f606b6a5d96a7f15ffe1'}]}, 'timestamp': '2025-12-05 10:26:13.066554', '_unique_id': '40c28c2721c5403f9a60f62b1a4f8a97'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.069 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.069 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.071 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9bfe1b1c-3fc2-4842-a889-5be71cb21eca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:26:13.069415', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'd2e7f666-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.18903475, 'message_signature': 'b9f294e70818656f07090c699ae10f88fe00c4dec5446c6f221186dd73ef2f1d'}]}, 'timestamp': '2025-12-05 10:26:13.070119', '_unique_id': 'ba514692181c46aa851eb20b3b3cd0de'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
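Each failed notification is still logged in full, so the samples can be recovered from the log text itself. Note the Payload is a Python dict repr (single quotes, None), not JSON, so json.loads will not parse it, but ast.literal_eval will. A log-analysis sketch, with a truncated stand-in for the text between 'Payload=' and the trailing ': kombu.exceptions.OperationalError...' on an ERROR record above:

    import ast

    # Truncated stand-in for one logged Payload=... dict repr.
    payload_text = ("{'event_type': 'telemetry.polling', 'payload': {'samples': "
                    "[{'counter_name': 'network.outgoing.bytes.delta', "
                    "'counter_volume': 0, 'counter_unit': 'B'}]}}")
    notification = ast.literal_eval(payload_text)
    for sample in notification['payload']['samples']:
        print(sample['counter_name'], sample['counter_volume'], sample['counter_unit'])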
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.072 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.072 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.072 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c5097949-26d2-4e27-a5ca-5ac8b36624b7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:26:13.072920', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'd2e881c6-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.18903475, 'message_signature': '7e76c91ac0e43633238ad5d83f3690b9b9c4920eadc1eb783b994696738bf453'}]}, 'timestamp': '2025-12-05 10:26:13.073621', '_unique_id': '60a5190e902147039f734cab20ef42ed'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
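Errno 111 means the TCP handshake was actively refused: nothing is listening on the broker port on the target host, as opposed to a firewall drop or routing problem, which would time out instead. A quick probe to tell the two apart, assuming the default AMQP port 5672 and a placeholder host:

    import errno
    import socket

    try:
        socket.create_connection(('localhost', 5672), timeout=3).close()
        print('broker port open')
    except OSError as exc:
        if exc.errno == errno.ECONNREFUSED:
            print('connection refused: no listener on 5672 (service down)')
        else:
            print('other failure (timeout / unreachable?):', exc)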
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:26:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.074 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.075 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.076 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.076 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
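These records come from oslo.messaging's notifier: the polling agent publishes each batch of samples as a telemetry.polling notification at SAMPLE priority, and the driver catches the send failure and logs the "Could not send notification" ERROR record below rather than raising, which is why polling keeps running despite the dead broker. In outline, a hedged sketch of the public oslo.messaging API with a placeholder transport URL, not ceilometer's actual wiring:

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
    notifier = oslo_messaging.Notifier(
        transport, publisher_id='ceilometer.polling',
        driver='messaging', topics=['notifications'])
    # Emits priority 'SAMPLE'; with the broker down, the driver logs the
    # failure (as in the ERROR records here) instead of propagating it.
    notifier.sample({}, 'telemetry.polling', {'samples': []})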
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.075 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.076 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.076 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '09318f94-7f1c-451f-b536-a50a12f496ca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:26:13.076087', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd2e8fbc4-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.133521597, 'message_signature': '943b32175f7fc7bfc64b9c04c86fd1b4dd741900996be411a8bb14d5ffc54657'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:26:13.076087', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd2e91122-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.133521597, 'message_signature': 'cf4827a1b2be622a1c0d689dc749bb5aa4e6cfb5cb06a862ad32a8cba5f3c598'}]}, 'timestamp': '2025-12-05 10:26:13.077335', '_unique_id': 'a215e6eaf345495ba7d3aa07b1b93cfd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
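Each failed send logs the complete sample batch as a Python dict literal after Payload=, so the dropped measurements are recoverable from the journal after the fact. A hedged sketch of tallying them follows, assuming the records have been exported one per line to a file named agent.log; the file name and regular expression are illustrative, not an oslo.messaging format guarantee.

    # Illustrative only: recover and count the samples dropped with each
    # "Could not send notification" record. Assumes one journal record per
    # line in agent.log (both the name and the pattern are assumptions).
    import ast
    import re
    from collections import Counter

    PAYLOAD_RE = re.compile(r"Payload=(\{.*\}): kombu\.exceptions\.OperationalError")

    dropped = Counter()
    with open("agent.log", encoding="utf-8") as fh:
        for line in fh:
            m = PAYLOAD_RE.search(line)
            if not m:
                continue
            # The payload is printed as a Python literal (single quotes,
            # None), so ast.literal_eval can parse it safely.
            notification = ast.literal_eval(m.group(1))
            for sample in notification["payload"]["samples"]:
                dropped[sample["counter_name"]] += 1

    for counter_name, count in dropped.most_common():
        print(f"{counter_name}: {count} dropped sample(s)")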
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.078 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.080 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.080 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.080 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 1657873269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.080 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.latency volume: 112924751 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd255c905-1b33-4222-b5b9-d63c7e7cd574', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1657873269, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:26:13.080451', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd2e9a240-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.133521597, 'message_signature': '264afd5fc53a6be430d9bf05957a9c1aad1d82d929f4f70ec416db006eed8b78'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 112924751, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:26:13.080451', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd2e9b55a-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.133521597, 'message_signature': '32fdc51c777f716a69d4fa8c42e3a801a52eb065127ddff262f50ea2c433aa68'}]}, 'timestamp': '2025-12-05 10:26:13.081480', '_unique_id': '90147dac2931498495e4757a3304e48a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR 
oslo_messaging.notify.messaging yield Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:26:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.082 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.084 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.084 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '646f425f-cf0a-4d19-9c78-88e43793c5a7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:26:13.084483', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'd2ea3f34-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.18903475, 'message_signature': 'b0c91ed6088ffa85b4006d6697df84bddf0d5405296ee882e6e40022b6004e03'}]}, 'timestamp': '2025-12-05 10:26:13.085011', '_unique_id': '2f1244d081cd49d091bf756219284f61'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:26:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.086 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.087 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.087 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '571cd54d-4a50-4801-a2d3-41e82544016e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:26:13.087454', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'd2eab32e-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.18903475, 'message_signature': '8597575657d4e2eee401e497a9c8a195f4d9bbcd0781d11d4b91fc83bc0297a0'}]}, 'timestamp': '2025-12-05 10:26:13.088055', '_unique_id': '97757ee42f7c47b8808564bb893eb1f5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:26:13 
localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.089 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.090 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.090 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.091 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3bd628b9-e917-4b00-bbc8-3307849c0b12', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:26:13.090529', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd2eb2a34-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.2112016, 'message_signature': '938f56f7197151c79f4b02e5dd008da3c2b7f6571eb2a4fb02162795a78ab4be'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:26:13.090529', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd2eb42a8-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.2112016, 'message_signature': '5879f6c8195acf0f6d88115643e1ccd0297225fcb84c2ef1d20ae7b081354fc3'}]}, 'timestamp': '2025-12-05 10:26:13.091632', '_unique_id': '05d655379c344d5c9edc5a813b64ee1e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging yield
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.092 12 ERROR oslo_messaging.notify.messaging
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.093 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.093 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
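The traceback above bottoms out in a plain TCP connect that kombu re-raises as a library error. A minimal sketch of that behavior in Python, assuming a hypothetical broker URL (the deployment's real transport_url is not shown in this log):

from kombu import Connection
from kombu.exceptions import OperationalError

# Hypothetical URL; substitute the transport_url from the agent's config.
conn = Connection("amqp://guest:guest@localhost:5672//")
try:
    # ensure_connection() drives the retry_over_time() loop named in the
    # traceback and, per kombu/connection.py _reraise_as_library_errors,
    # re-raises socket errors as kombu.exceptions.OperationalError.
    conn.ensure_connection(max_retries=1)
except OperationalError as exc:
    print(f"broker unreachable: {exc}")  # e.g. "[Errno 111] Connection refused"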
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.094 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7fe36473-9b5a-41e9-9a08-586055cdf5d4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': 'instance-00000002-96a47a1c-57c7-4bb1-aecc-33db976db8c7-tapc2f95d81-23', 'timestamp': '2025-12-05T10:26:13.093840', 'resource_metadata': {'display_name': 'test', 'name': 'tapc2f95d81-23', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:04:e6:3a', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc2f95d81-23'}, 'message_id': 'd2eba7f2-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.18903475, 'message_signature': '49e039e230237aff8aedcec53e8d4fc387d1fa6da2c5ac7533fb0fe43aef66a7'}]}, 'timestamp': '2025-12-05 10:26:13.094158', '_unique_id': 'c742771a9ca24a29b4ec6554adac568f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.095 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.095 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.095 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
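The 'priority': 'SAMPLE' field in each payload corresponds to oslo.messaging's Notifier.sample() call. A sketch of the publish path the agent exercises, with a placeholder transport URL and an empty payload rather than values from this deployment:

from oslo_config import cfg
import oslo_messaging

# Placeholder URL; the real one comes from the service's messaging config.
transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://guest:guest@localhost:5672/")
notifier = oslo_messaging.Notifier(
    transport,
    driver="messagingv2",
    publisher_id="ceilometer.polling",
    topics=["notifications"],
)
# With the broker down, the send fails with the OperationalError chain above,
# which oslo.messaging logs as "Could not send notification".
notifier.sample({}, event_type="telemetry.polling", payload={"samples": []})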
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.096 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b3a4bb48-38ce-4ff3-8fa3-808d480ce173', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:26:13.095596', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd2ebec62-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.2112016, 'message_signature': '20efffccee1cdab090531b0a5f4346eb67266bb71b885de615dc113b3085f140'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:26:13.095596', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd2ebf64e-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.2112016, 'message_signature': 'a44440086bfc29553d39fa85f7684a3afcc01b4a1246831ed99a999f0be6097e'}]}, 'timestamp': '2025-12-05 10:26:13.096128', '_unique_id': 'db2c144ffe7447f4bd6a9b5ca073f63d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.097 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.097 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.097 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.097 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
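Every one of these tracebacks ends at self.sock.connect(sa) in amqp/transport.py, meaning nothing is accepting connections on the broker port. A quick reachability check from the compute host; the host and port are assumptions (5672 is RabbitMQ's default AMQP port):

import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connect to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:  # ConnectionRefusedError carries errno 111
        print(f"{host}:{port} -> {exc}")
        return False

port_open("localhost", 5672)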
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.098 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5f111250-bafd-4aae-93dd-62abdbcd65c6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vda', 'timestamp': '2025-12-05T10:26:13.097670', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd2ec3d3e-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.133521597, 'message_signature': 'b863d38462afd0a596b47b5a01f431049ad832ffc6fee181233ba09e38596af6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7-vdb', 'timestamp': '2025-12-05T10:26:13.097670', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd2ec4720-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.133521597, 'message_signature': 'cd4747166acf60f30b4e0f86414e8ffa267ea778ec729a622d0137db3711a9b0'}]}, 'timestamp': '2025-12-05 10:26:13.098209', '_unique_id': '27b7d4b071844ba18e1df4e8072fb225'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.099 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.099 12 DEBUG ceilometer.compute.pollsters [-] 96a47a1c-57c7-4bb1-aecc-33db976db8c7/memory.usage volume: 51.7421875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Payload={'message_id': 'f07ce69d-fe57-4819-a144-5a245f29f71b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.7421875, 'user_id': '52d0a54dc45b4c4caaba721ba3202150', 'user_name': None, 'project_id': 'e6ca8a92050741d3a93772e6c1b0d704', 'project_name': None, 'resource_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'timestamp': '2025-12-05T10:26:13.099661', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': '96a47a1c-57c7-4bb1-aecc-33db976db8c7', 'instance_type': 'm1.small', 'host': 'f7168875000626c29378060d138de6cd758fe4c0826d6cbcca291624', 'instance_host': 'np0005546419.localdomain', 'flavor': {'id': 'bb6181df-1ada-42c2-81f6-896f08302073', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2'}, 'image_ref': 'e7469c27-9043-4bd0-b0a4-5b489dcf3ae2', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'd2ec8b36-d1c4-11f0-8ba6-fa163e982365', 'monotonic_time': 13488.240028264, 'message_signature': '6c91c5c9d1518f27c3e775331866715db73e1c98d210b57330321568927ae212'}]}, 'timestamp': '2025-12-05 10:26:13.099942', '_unique_id': '9cc59290163e4e538afe350ef7718acf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging yield Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 
10:26:13.100 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 5 
05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 5 05:26:13 localhost 
ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 5 05:26:13 localhost ceilometer_agent_compute[236891]: 2025-12-05 10:26:13.100 12 ERROR oslo_messaging.notify.messaging Dec 5 05:26:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v941: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s Dec 5 05:26:14 localhost nova_compute[280228]: 2025-12-05 10:26:14.411 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:26:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:26:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:26:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:26:15 localhost nova_compute[280228]: 2025-12-05 10:26:15.636 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v942: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 4 op/s Dec 5 05:26:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
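The ceilometer traceback above walks the entire oslo.messaging publish path: notifier -> transport -> amqpdriver -> connection pool -> impl_rabbit -> kombu -> amqp socket connect, with the socket-level ConnectionRefusedError re-raised as kombu.exceptions.OperationalError in kombu's _reraise_as_library_errors. A minimal sketch of that failure mode, assuming an illustrative broker URL (the real transport_url lives in ceilometer's configuration, not in this capture):

    from kombu import Connection
    from kombu.exceptions import OperationalError

    # Illustrative URL; not this host's transport_url.
    conn = Connection("amqp://guest:guest@rabbitmq:5672//", connect_timeout=2)
    try:
        # Same call impl_rabbit.py makes; kombu re-raises the raw
        # ECONNREFUSED as OperationalError once retries are exhausted.
        conn.ensure_connection(max_retries=1)
    except OperationalError as exc:
        print(f"broker unreachable: {exc}")  # [Errno 111] Connection refused
    finally:
        conn.release()
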
Dec 5 05:26:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:26:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:26:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e295 do_prune osdmap full prune enabled Dec 5 05:26:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e296 e296: 6 total, 6 up, 6 in Dec 5 05:26:16 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e296: 6 total, 6 up, 6 in Dec 5 05:26:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "snap_name": "b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5_ce45a9ec-91dc-4f53-8a37-a14a18566ce2", "force": true, "format": "json"}]: dispatch Dec 5 05:26:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5_ce45a9ec-91dc-4f53-8a37-a14a18566ce2, sub_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, vol_name:cephfs) < "" Dec 5 05:26:16 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86a9fc66-b92c-4138-9ee0-4dc920520e26/.meta.tmp' Dec 5 05:26:16 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86a9fc66-b92c-4138-9ee0-4dc920520e26/.meta.tmp' to config b'/volumes/_nogroup/86a9fc66-b92c-4138-9ee0-4dc920520e26/.meta' Dec 5 05:26:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5_ce45a9ec-91dc-4f53-8a37-a14a18566ce2, sub_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, vol_name:cephfs) < "" Dec 5 05:26:16 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "snap_name": "b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5", "force": true, "format": "json"}]: dispatch Dec 5 05:26:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5, sub_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, vol_name:cephfs) < "" Dec 5 05:26:16 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86a9fc66-b92c-4138-9ee0-4dc920520e26/.meta.tmp' Dec 5 05:26:16 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86a9fc66-b92c-4138-9ee0-4dc920520e26/.meta.tmp' to config b'/volumes/_nogroup/86a9fc66-b92c-4138-9ee0-4dc920520e26/.meta' Dec 5 05:26:16 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5, sub_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, vol_name:cephfs) < "" Dec 5 05:26:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v944: 177 pgs: 
177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 48 KiB/s wr, 3 op/s Dec 5 05:26:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:26:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:26:18 localhost podman[331201]: 2025-12-05 10:26:18.190948435 +0000 UTC m=+0.070753381 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:26:18 localhost systemd[1]: tmp-crun.tNzDd1.mount: Deactivated successfully. 
Dec 5 05:26:18 localhost podman[331200]: 2025-12-05 10:26:18.225547116 +0000 UTC m=+0.105358232 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:26:18 localhost podman[331201]: 2025-12-05 10:26:18.274697814 +0000 UTC m=+0.154502730 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 5 05:26:18 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
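The node_exporter command line recorded in the health_status/exec_died events above filters systemd units with --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service. A small illustration of what that pattern admits; the collector anchors the pattern itself (done explicitly in this sketch), and the unit names are examples rather than an inventory of this host:

    import re

    # Pattern copied from the node_exporter command line above, anchored
    # the way the systemd collector applies it.
    unit_include = re.compile(
        r"^(?:(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service)$")

    for unit in ["edpm_nova_compute.service", "ovs-vswitchd.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"]:
        print(unit, "->",
              "collected" if unit_include.match(unit) else "skipped")
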
Dec 5 05:26:18 localhost podman[331200]: 2025-12-05 10:26:18.290664054 +0000 UTC m=+0.170475160 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 5 05:26:18 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:26:19 localhost nova_compute[280228]: 2025-12-05 10:26:19.443 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v945: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 48 KiB/s wr, 3 op/s Dec 5 05:26:19 localhost podman[239519]: time="2025-12-05T10:26:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:26:19 localhost podman[239519]: @ - - [05/Dec/2025:10:26:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:26:19 localhost podman[239519]: @ - - [05/Dec/2025:10:26:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19282 "" "Go-http-client/1.1" Dec 5 05:26:20 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "format": "json"}]: dispatch Dec 5 05:26:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:26:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, format:json, prefix:fs clone status, vol_name:cephfs) < "" Dec 5 05:26:20 localhost ceph-79feddb1-4bfc-557f-83b9-0d57c9f66c1b-mgr-np0005546419-zhsnqq[286450]: 2025-12-05T10:26:20.120+0000 7f996f03a640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86a9fc66-b92c-4138-9ee0-4dc920520e26' of type subvolume 
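Each ceph-mgr audit record above carries the dispatched command as JSON ({"prefix": "fs subvolume snapshot rm", ...}), and those payloads map one-to-one onto the ceph CLI. A sketch of the equivalent calls, assuming the client.openstack keyring referenced in the records is usable from the shell. Note that "fs clone status" is only valid for clones, so on a plain subvolume the mgr answers EOPNOTSUPP, which is exactly the "(95) Operation not supported" reply logged above:

    import subprocess

    def ceph(*args):
        # Identifiers below are copied from the audit records; the
        # --id/--conf values mirror the ones nova uses later in this log.
        return subprocess.run(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             *args, "--format", "json"],
            capture_output=True, text=True, check=False)

    # {"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", ...}
    ceph("fs", "subvolume", "snapshot", "rm", "cephfs",
         "86a9fc66-b92c-4138-9ee0-4dc920520e26",
         "b97a4514-c5c4-47c9-a0fb-6f5cbd85fac5", "--force")

    # {"prefix": "fs clone status", ...} -> EOPNOTSUPP on a non-clone subvolume
    res = ceph("fs", "clone", "status", "cephfs",
               "86a9fc66-b92c-4138-9ee0-4dc920520e26")
    print(res.returncode, res.stderr.strip())
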
Dec 5 05:26:20 localhost ceph-mgr[286454]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86a9fc66-b92c-4138-9ee0-4dc920520e26' of type subvolume Dec 5 05:26:20 localhost ceph-mgr[286454]: log_channel(audit) log [DBG] : from='client.15726 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "86a9fc66-b92c-4138-9ee0-4dc920520e26", "force": true, "format": "json"}]: dispatch Dec 5 05:26:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, vol_name:cephfs) < "" Dec 5 05:26:20 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/86a9fc66-b92c-4138-9ee0-4dc920520e26'' moved to trashcan Dec 5 05:26:20 localhost ceph-mgr[286454]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 5 05:26:20 localhost ceph-mgr[286454]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86a9fc66-b92c-4138-9ee0-4dc920520e26, vol_name:cephfs) < "" Dec 5 05:26:20 localhost nova_compute[280228]: 2025-12-05 10:26:20.639 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:26:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v946: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 79 KiB/s wr, 3 op/s Dec 5 05:26:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e296 do_prune osdmap full prune enabled Dec 5 05:26:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e297 e297: 6 total, 6 up, 6 in Dec 5 05:26:21 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e297: 6 total, 6 up, 6 in Dec 5 05:26:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v948: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 99 KiB/s wr, 4 op/s Dec 5 05:26:24 localhost nova_compute[280228]: 2025-12-05 10:26:24.491 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:25 localhost nova_compute[280228]: 2025-12-05 10:26:25.642 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v949: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 224 B/s rd, 50 KiB/s wr, 2 op/s Dec 5 05:26:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:26:27 localhost openstack_network_exporter[241668]: ERROR 10:26:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:26:27 localhost openstack_network_exporter[241668]: ERROR 10:26:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:26:27 localhost 
openstack_network_exporter[241668]: ERROR 10:26:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:26:27 localhost openstack_network_exporter[241668]: ERROR 10:26:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:26:27 localhost openstack_network_exporter[241668]: Dec 5 05:26:27 localhost openstack_network_exporter[241668]: ERROR 10:26:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:26:27 localhost openstack_network_exporter[241668]: Dec 5 05:26:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v950: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 49 KiB/s wr, 3 op/s Dec 5 05:26:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:26:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:26:29 localhost systemd[1]: tmp-crun.sgxxRe.mount: Deactivated successfully. Dec 5 05:26:29 localhost podman[331250]: 2025-12-05 10:26:29.225616793 +0000 UTC m=+0.095689036 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter) Dec 5 05:26:29 localhost podman[331249]: 2025-12-05 10:26:29.26598452 +0000 UTC m=+0.138115447 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible) Dec 5 05:26:29 localhost podman[331249]: 2025-12-05 10:26:29.27769727 +0000 UTC m=+0.149828187 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Dec 5 05:26:29 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:26:29 localhost podman[331250]: 2025-12-05 10:26:29.293775943 +0000 UTC m=+0.163848186 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 5 05:26:29 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
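The systemd units named after container IDs above ("Started /usr/bin/podman healthcheck run <id>" ... "Deactivated successfully") are the transient units podman creates to drive container healthchecks; each run executes the container's configured test (here /openstack/healthcheck) and records the result. The same check can be driven by hand; a sketch using the multipathd container name from the records, hedging on the inspect template path, which moved between podman releases:

    import subprocess

    # Runs the container's configured healthcheck test once, like the
    # transient units above; exit code 0 means healthy.
    run = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if run.returncode == 0 else "unhealthy")

    # Read back the recorded status. Older podman exposes it at
    # .State.Healthcheck.Status, newer at .State.Health.Status.
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "multipathd"],
        capture_output=True, text=True)
    print(status.stdout.strip())
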
Dec 5 05:26:29 localhost nova_compute[280228]: 2025-12-05 10:26:29.516 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v951: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 49 KiB/s wr, 3 op/s Dec 5 05:26:30 localhost nova_compute[280228]: 2025-12-05 10:26:30.644 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:26:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e297 do_prune osdmap full prune enabled Dec 5 05:26:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e298 e298: 6 total, 6 up, 6 in Dec 5 05:26:31 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e298: 6 total, 6 up, 6 in Dec 5 05:26:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v953: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 206 B/s rd, 27 KiB/s wr, 1 op/s Dec 5 05:26:32 localhost nova_compute[280228]: 2025-12-05 10:26:32.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:26:33 localhost nova_compute[280228]: 2025-12-05 10:26:33.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:26:33 localhost nova_compute[280228]: 2025-12-05 10:26:33.563 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:26:33 localhost nova_compute[280228]: 2025-12-05 10:26:33.563 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:26:33 localhost nova_compute[280228]: 2025-12-05 10:26:33.564 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:26:33 localhost nova_compute[280228]: 2025-12-05 10:26:33.564 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:26:33 localhost nova_compute[280228]: 2025-12-05 
10:26:33.565 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:26:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v954: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 26 KiB/s wr, 1 op/s Dec 5 05:26:33 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:26:33 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1052562904' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:26:33 localhost nova_compute[280228]: 2025-12-05 10:26:33.972 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:26:34 localhost nova_compute[280228]: 2025-12-05 10:26:34.106 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:26:34 localhost nova_compute[280228]: 2025-12-05 10:26:34.107 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:26:34 localhost nova_compute[280228]: 2025-12-05 10:26:34.326 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:26:34 localhost nova_compute[280228]: 2025-12-05 10:26:34.329 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11010MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:26:34 localhost nova_compute[280228]: 2025-12-05 10:26:34.329 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:26:34 localhost nova_compute[280228]: 2025-12-05 10:26:34.330 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:26:34 localhost nova_compute[280228]: 2025-12-05 10:26:34.559 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:34 localhost nova_compute[280228]: 2025-12-05 10:26:34.947 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 
96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:26:34 localhost nova_compute[280228]: 2025-12-05 10:26:34.948 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:26:34 localhost nova_compute[280228]: 2025-12-05 10:26:34.948 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:26:34 localhost nova_compute[280228]: 2025-12-05 10:26:34.996 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:26:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:26:35 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:26:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:26:35 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:26:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:26:35 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:26:35 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev 50d74862-17ef-4a56-8238-434cfeca76fb (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:26:35 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev 50d74862-17ef-4a56-8238-434cfeca76fb (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:26:35 localhost ceph-mgr[286454]: [progress INFO root] Completed event 50d74862-17ef-4a56-8238-434cfeca76fb (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:26:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:26:35 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:26:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:26:35 localhost ceph-mon[292820]: log_channel(audit) log 
[DBG] : from='client.? 172.18.0.106:0/1394077181' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:26:35 localhost nova_compute[280228]: 2025-12-05 10:26:35.454 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:26:35 localhost nova_compute[280228]: 2025-12-05 10:26:35.462 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:26:35 localhost nova_compute[280228]: 2025-12-05 10:26:35.647 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:35 localhost nova_compute[280228]: 2025-12-05 10:26:35.699 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:26:35 localhost nova_compute[280228]: 2025-12-05 10:26:35.702 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:26:35 localhost nova_compute[280228]: 2025-12-05 10:26:35.702 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:26:35 localhost ovn_metadata_agent[158815]: 2025-12-05 10:26:35.708 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '22:2e:e6', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'be:c4:19:82:f8:46'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:26:35 localhost ovn_metadata_agent[158815]: 2025-12-05 10:26:35.709 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 5 05:26:35 localhost nova_compute[280228]: 2025-12-05 10:26:35.710 280232 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v955: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 26 KiB/s wr, 1 op/s Dec 5 05:26:35 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:26:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:26:35 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:26:35 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:26:35 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:26:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0. Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.613811) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82 Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396613889, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 2096, "num_deletes": 255, "total_data_size": 2258223, "memory_usage": 2302816, "flush_reason": "Manual Compaction"} Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396627292, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 2193201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44140, "largest_seqno": 46235, "table_properties": {"data_size": 2184747, "index_size": 5087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18685, "raw_average_key_size": 20, "raw_value_size": 2167019, "raw_average_value_size": 2352, "num_data_blocks": 221, "num_entries": 921, "num_filter_entries": 921, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930241, "oldest_key_time": 1764930241, 
"file_creation_time": 1764930396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}} Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 13549 microseconds, and 6187 cpu microseconds. Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.627359) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 2193201 bytes OK Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.627394) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.629280) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.629304) EVENT_LOG_v1 {"time_micros": 1764930396629297, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.629328) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2249308, prev total WAL file size 2249308, number of live WAL files 2. Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.630158) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353330' seq:72057594037927935, type:22 .. 
'6B760031373832' seq:0, type:0; will stop at (end) Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(2141KB)], [81(17MB)] Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396630231, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 20751853, "oldest_snapshot_seqno": -1} Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 14907 keys, 19651331 bytes, temperature: kUnknown Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396738395, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 19651331, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19564335, "index_size": 48623, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 37317, "raw_key_size": 398540, "raw_average_key_size": 26, "raw_value_size": 19309522, "raw_average_value_size": 1295, "num_data_blocks": 1808, "num_entries": 14907, "num_filter_entries": 14907, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764930396, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}} Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
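Alongside the human-readable flush and compaction summaries, these ceph-mon rocksdb lines embed machine-readable EVENT_LOG_v1 JSON records (flush_started, table_file_creation, compaction_started, ...). Those payloads are easy to mine out of a captured log; a small sketch, exercised against a trimmed copy of the job-50 table_file_creation record above:

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1\s+(\{.*\})")

    def rocksdb_events(lines):
        # Yields the JSON payload of every EVENT_LOG_v1 record found.
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    # Trimmed copy of the job-50 "table_file_creation" record above.
    sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396738395, '
              '"job": 50, "event": "table_file_creation", '
              '"file_number": 84, "file_size": 19651331}')
    for ev in rocksdb_events([sample]):
        print(ev["event"], "job", ev["job"], ev["file_size"], "bytes")
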
Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.738822) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 19651331 bytes Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.740768) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.7 rd, 181.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 17.7 +0.0 blob) out(18.7 +0.0 blob), read-write-amplify(18.4) write-amplify(9.0) OK, records in: 15442, records dropped: 535 output_compression: NoCompression Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.740795) EVENT_LOG_v1 {"time_micros": 1764930396740783, "job": 50, "event": "compaction_finished", "compaction_time_micros": 108261, "compaction_time_cpu_micros": 51392, "output_level": 6, "num_output_files": 1, "total_output_size": 19651331, "num_input_records": 15442, "num_output_records": 14907, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396741520, "job": 50, "event": "table_file_deletion", "file_number": 83} Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930396744933, "job": 50, "event": "table_file_deletion", "file_number": 81} Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.630039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.745010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.745017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.745021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.745025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:26:36 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:36.745029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:26:36 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:26:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v956: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 23 KiB/s wr, 0 op/s Dec 5 05:26:38 localhost nova_compute[280228]: 2025-12-05 10:26:38.703 280232 DEBUG 
oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:26:38 localhost nova_compute[280228]: 2025-12-05 10:26:38.704 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:26:38 localhost nova_compute[280228]: 2025-12-05 10:26:38.704 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:26:38 localhost nova_compute[280228]: 2025-12-05 10:26:38.704 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:26:38 localhost nova_compute[280228]: 2025-12-05 10:26:38.983 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:26:38 localhost nova_compute[280228]: 2025-12-05 10:26:38.984 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:26:38 localhost nova_compute[280228]: 2025-12-05 10:26:38.984 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:26:38 localhost nova_compute[280228]: 2025-12-05 10:26:38.985 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:26:39 localhost nova_compute[280228]: 2025-12-05 10:26:39.590 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v957: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 23 KiB/s wr, 0 op/s Dec 5 05:26:39 localhost nova_compute[280228]: 2025-12-05 10:26:39.955 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:26:40 localhost nova_compute[280228]: 2025-12-05 10:26:40.135 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:26:40 localhost nova_compute[280228]: 2025-12-05 10:26:40.135 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:26:40 localhost nova_compute[280228]: 2025-12-05 10:26:40.135 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:26:40 localhost nova_compute[280228]: 2025-12-05 10:26:40.136 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:26:40 localhost nova_compute[280228]: 2025-12-05 10:26:40.697 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:26:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:26:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. 
Dec 5 05:26:41 localhost podman[331418]: 2025-12-05 10:26:41.267961236 +0000 UTC m=+0.146114583 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:26:41 localhost podman[331418]: 2025-12-05 10:26:41.277356014 +0000 UTC m=+0.155509321 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:26:41 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. 
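Each "Started /usr/bin/podman healthcheck run <id>" / "Deactivated successfully" pair above is systemd running a container's configured healthcheck through a transient unit, with the outcome surfacing as health_status=healthy in the podman event. A sketch of firing the same probe by hand via podman's healthcheck subcommand; the container name is taken from the log, and treating any non-zero exit as unhealthy is my reading of podman's exit convention (0 on a passing check), not something this log states:

import subprocess

# Fire the same probe the transient unit runs; a zero exit means the
# container's configured healthcheck command passed.
proc = subprocess.run(
    ["podman", "healthcheck", "run", "podman_exporter"],  # name from the log above
    capture_output=True, text=True,
)
status = "healthy" if proc.returncode == 0 else "unhealthy"
print(status, proc.stdout.strip() or proc.stderr.strip())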
Dec 5 05:26:41 localhost podman[331420]: 2025-12-05 10:26:41.32515413 +0000 UTC m=+0.195573699 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm) Dec 5 05:26:41 localhost podman[331420]: 2025-12-05 10:26:41.340612145 +0000 UTC m=+0.211031714 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Dec 5 05:26:41 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:26:41 localhost podman[331419]: 2025-12-05 10:26:41.240589097 +0000 UTC m=+0.113490453 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 5 05:26:41 localhost podman[331419]: 2025-12-05 10:26:41.422899948 +0000 UTC m=+0.295801314 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent) Dec 5 05:26:41 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:26:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:26:41 localhost ovn_metadata_agent[158815]: 2025-12-05 10:26:41.711 158820 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=22ecc443-b9ab-4c88-a730-5598bd07d403, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 5 05:26:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v958: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:26:42 localhost nova_compute[280228]: 2025-12-05 10:26:42.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:26:42 localhost nova_compute[280228]: 2025-12-05 10:26:42.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:26:43 localhost nova_compute[280228]: 2025-12-05 10:26:43.417 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:43 localhost nova_compute[280228]: 2025-12-05 10:26:43.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:26:43 localhost nova_compute[280228]: 2025-12-05 10:26:43.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:26:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v959: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:26:44 localhost nova_compute[280228]: 2025-12-05 10:26:44.592 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:26:45 Dec 5 05:26:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:26:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:26:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['vms', 'manila_metadata', 'backups', '.mgr', 'manila_data', 'volumes', 'images'] Dec 5 05:26:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:26:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:26:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:26:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
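The ceph-mgr pgmap lines above repeat every couple of seconds with the same shape (version, PG states, data/used/avail). A sketch of pulling the counters out of one; the regex is mine, written against the lines exactly as logged:

import re

# One pgmap line as logged by ceph-mgr above (message part only).
line = ("pgmap v958: 177 pgs: 177 active+clean; 232 MiB data, "
        "1.4 GiB used, 41 GiB / 42 GiB avail")
m = re.search(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs:.*?"
    r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
    r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail",
    line,
)
if m:
    print(m.groupdict())

On the line above this yields ver=958, pgs=177, data=232 MiB, used=1.4 GiB, avail=41 GiB of 42 GiB, which is the steady state the balancer's "prepared 0/10 changes" is consistent with.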
Dec 5 05:26:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:26:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v960: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:26:45 localhost nova_compute[280228]: 2025-12-05 10:26:45.738 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:26:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002851442542573981 of space, bias 4.0, pg target 2.269748263888889 quantized to 16 (current 16) Dec 5 05:26:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:26:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:26:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:26:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:26:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:26:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:26:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:26:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: 
volumes, start_after= Dec 5 05:26:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:26:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:26:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:26:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:26:46 localhost nova_compute[280228]: 2025-12-05 10:26:46.324 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:26:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v961: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:26:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:26:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:26:49 localhost podman[331477]: 2025-12-05 10:26:49.213077159 +0000 UTC m=+0.089468105 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:26:49 localhost podman[331477]: 2025-12-05 10:26:49.219789865 +0000 UTC m=+0.096180801 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', 
'--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 5 05:26:49 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:26:49 localhost podman[331476]: 2025-12-05 10:26:49.189671321 +0000 UTC m=+0.076123246 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:26:49 localhost podman[331476]: 2025-12-05 10:26:49.276647148 +0000 UTC m=+0.163099023 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 5 05:26:49 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:26:49 localhost nova_compute[280228]: 2025-12-05 10:26:49.594 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v962: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:26:49 localhost podman[239519]: time="2025-12-05T10:26:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:26:49 localhost podman[239519]: @ - - [05/Dec/2025:10:26:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:26:49 localhost podman[239519]: @ - - [05/Dec/2025:10:26:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19284 "" "Go-http-client/1.1" Dec 5 05:26:50 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:26:50.387 261902 INFO neutron.agent.linux.ip_lib [None req-d7e96750-501a-48f3-885b-2c199c70c6c9 - - - - - -] Device tap0cf75712-d3 cannot be used as it has no MAC address#033[00m Dec 5 05:26:50 localhost nova_compute[280228]: 2025-12-05 10:26:50.453 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:50 localhost kernel: device tap0cf75712-d3 entered promiscuous mode Dec 5 05:26:50 localhost nova_compute[280228]: 2025-12-05 10:26:50.463 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:50 localhost NetworkManager[5960]: [1764930410.4638] manager: (tap0cf75712-d3): new Generic device (/org/freedesktop/NetworkManager/Devices/65) Dec 5 05:26:50 localhost ovn_controller[153000]: 2025-12-05T10:26:50Z|00410|binding|INFO|Claiming lport 0cf75712-d34e-4386-a69f-090c6cb89784 for this chassis. Dec 5 05:26:50 localhost ovn_controller[153000]: 2025-12-05T10:26:50Z|00411|binding|INFO|0cf75712-d34e-4386-a69f-090c6cb89784: Claiming unknown Dec 5 05:26:50 localhost systemd-udevd[331531]: Network interface NamePolicy= disabled on kernel command line. 
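The ovn_controller lines above show the usual three-step claim for lport 0cf75712-d34e-4386-a69f-090c6cb89784: "Claiming ... for this chassis", then "ovn-installed in OVS", then "up in Southbound". A sketch of extracting those transitions from the log text; the sample lines are copied from above and the regex is mine:

import re

# ovn-controller binding messages, copied from the log above.
events = """\
2025-12-05T10:26:50Z|00410|binding|INFO|Claiming lport 0cf75712-d34e-4386-a69f-090c6cb89784 for this chassis.
2025-12-05T10:26:50Z|00412|binding|INFO|Setting lport 0cf75712-d34e-4386-a69f-090c6cb89784 ovn-installed in OVS
2025-12-05T10:26:50Z|00413|binding|INFO|Setting lport 0cf75712-d34e-4386-a69f-090c6cb89784 up in Southbound
"""
pattern = re.compile(r"\|binding\|INFO\|(?:Claiming|Setting) lport (\S+) (.+)$")
for line in events.splitlines():
    m = pattern.search(line)
    if m:
        print(m.group(1), "->", m.group(2))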
Dec 5 05:26:50 localhost ovn_metadata_agent[158815]: 2025-12-05 10:26:50.473 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-99e8910b-c5ca-4bb8-bd0f-52d544f817cf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e8910b-c5ca-4bb8-bd0f-52d544f817cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0be5dd7ec9b24465a8f2ecd5c831c9a3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a658a58c-1a17-449f-b8da-51bf1cafc094, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0cf75712-d34e-4386-a69f-090c6cb89784) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:26:50 localhost ovn_metadata_agent[158815]: 2025-12-05 10:26:50.475 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 0cf75712-d34e-4386-a69f-090c6cb89784 in datapath 99e8910b-c5ca-4bb8-bd0f-52d544f817cf bound to our chassis#033[00m Dec 5 05:26:50 localhost ovn_metadata_agent[158815]: 2025-12-05 10:26:50.478 158820 DEBUG neutron.agent.ovn.metadata.agent [-] Port ebe15674-3883-49e1-b1ad-485b718cbd38 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 5 05:26:50 localhost ovn_metadata_agent[158815]: 2025-12-05 10:26:50.478 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99e8910b-c5ca-4bb8-bd0f-52d544f817cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:26:50 localhost ovn_metadata_agent[158815]: 2025-12-05 10:26:50.479 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9a9d51-cb4a-4c3e-b152-865c8fb96fc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:26:50 localhost journal[228791]: ethtool ioctl error on tap0cf75712-d3: No such device Dec 5 05:26:50 localhost ovn_controller[153000]: 2025-12-05T10:26:50Z|00412|binding|INFO|Setting lport 0cf75712-d34e-4386-a69f-090c6cb89784 ovn-installed in OVS Dec 5 05:26:50 localhost ovn_controller[153000]: 2025-12-05T10:26:50Z|00413|binding|INFO|Setting lport 0cf75712-d34e-4386-a69f-090c6cb89784 up in Southbound Dec 5 05:26:50 localhost journal[228791]: ethtool ioctl error on tap0cf75712-d3: No such device Dec 5 05:26:50 localhost nova_compute[280228]: 2025-12-05 10:26:50.503 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:50 localhost journal[228791]: ethtool ioctl error on tap0cf75712-d3: No such device Dec 5 05:26:50 localhost journal[228791]: ethtool ioctl error on tap0cf75712-d3: No such device Dec 5 05:26:50 
localhost journal[228791]: ethtool ioctl error on tap0cf75712-d3: No such device Dec 5 05:26:50 localhost journal[228791]: ethtool ioctl error on tap0cf75712-d3: No such device Dec 5 05:26:50 localhost journal[228791]: ethtool ioctl error on tap0cf75712-d3: No such device Dec 5 05:26:50 localhost journal[228791]: ethtool ioctl error on tap0cf75712-d3: No such device Dec 5 05:26:50 localhost nova_compute[280228]: 2025-12-05 10:26:50.546 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:50 localhost nova_compute[280228]: 2025-12-05 10:26:50.576 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:50 localhost nova_compute[280228]: 2025-12-05 10:26:50.740 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:51 localhost nova_compute[280228]: 2025-12-05 10:26:51.210 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:51 localhost podman[331604]: Dec 5 05:26:51 localhost podman[331604]: 2025-12-05 10:26:51.521242971 +0000 UTC m=+0.085557315 container create 9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99e8910b-c5ca-4bb8-bd0f-52d544f817cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 5 05:26:51 localhost systemd[1]: Started libpod-conmon-9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7.scope. Dec 5 05:26:51 localhost systemd[1]: Started libcrun container. 
Dec 5 05:26:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da05db1f8761c0f531a5059922d0dd00627c871fdbbb390bcef05d0fb7079a10/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 5 05:26:51 localhost podman[331604]: 2025-12-05 10:26:51.478419347 +0000 UTC m=+0.042733721 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 5 05:26:51 localhost podman[331604]: 2025-12-05 10:26:51.584664136 +0000 UTC m=+0.148978470 container init 9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99e8910b-c5ca-4bb8-bd0f-52d544f817cf, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:26:51 localhost podman[331604]: 2025-12-05 10:26:51.591694622 +0000 UTC m=+0.156008936 container start 9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99e8910b-c5ca-4bb8-bd0f-52d544f817cf, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:26:51 localhost dnsmasq[331623]: started, version 2.85 cachesize 150 Dec 5 05:26:51 localhost dnsmasq[331623]: DNS service limited to local subnets Dec 5 05:26:51 localhost dnsmasq[331623]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 5 05:26:51 localhost dnsmasq[331623]: warning: no upstream servers configured Dec 5 05:26:51 localhost dnsmasq-dhcp[331623]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 5 05:26:51 localhost dnsmasq[331623]: read /var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf/addn_hosts - 0 addresses Dec 5 05:26:51 localhost dnsmasq-dhcp[331623]: read /var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf/host Dec 5 05:26:51 localhost dnsmasq-dhcp[331623]: read /var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf/opts Dec 5 05:26:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0. 
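The dnsmasq instance above runs in static-leases-only mode on 10.100.0.0 and re-reads the per-network addn_hosts/host/opts files under /var/lib/neutron/dhcp/<network-id>/ on every reload. A hedged sketch of the kind of entry the host file carries: the "MAC,hostname,IP" layout is dnsmasq's documented dhcp-hostsfile format, the MAC is the router port's from the log, but the hostname and lease address below are hypothetical, not taken from this log:

# Path copied from the dnsmasq lines above.
base = "/var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf"
# dnsmasq dhcp-hostsfile entries are "MAC,hostname,IP"; this lease is hypothetical.
entry = "fa:16:3e:2f:b5:20,host-10-100-0-2.openstacklocal,10.100.0.2"
print(f"{base}/host would gain a line like:\n{entry}")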
Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.622037) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85 Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411622121, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 402, "num_deletes": 251, "total_data_size": 150858, "memory_usage": 157936, "flush_reason": "Manual Compaction"} Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411626148, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 147442, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46236, "largest_seqno": 46637, "table_properties": {"data_size": 145144, "index_size": 409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6139, "raw_average_key_size": 19, "raw_value_size": 140474, "raw_average_value_size": 448, "num_data_blocks": 18, "num_entries": 313, "num_filter_entries": 313, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764930396, "oldest_key_time": 1764930396, "file_creation_time": 1764930411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}} Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 4241 microseconds, and 1740 cpu microseconds. Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
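Every rocksdb EVENT_LOG_v1 record above is a fixed marker followed by a JSON payload, so the structured fields are machine-readable as-is. A sketch decoding the flush_started event for JOB 51; the payload is copied from the log above:

import json

# EVENT_LOG_v1 record copied from the ceph-mon rocksdb output above.
record = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411622121, "job": 51, '
          '"event": "flush_started", "num_memtables": 1, "num_entries": 402, '
          '"num_deletes": 251, "total_data_size": 150858, '
          '"memory_usage": 157936, "flush_reason": "Manual Compaction"}')
_, payload = record.split("EVENT_LOG_v1 ", 1)
event = json.loads(payload)
print(event["job"], event["event"], event["flush_reason"])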
Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.626280) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 147442 bytes OK Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.626315) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.627919) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.627946) EVENT_LOG_v1 {"time_micros": 1764930411627939, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.627971) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 148309, prev total WAL file size 148309, number of live WAL files 2. Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.628486) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133353534' seq:72057594037927935, type:22 .. '7061786F73003133383036' seq:0, type:0; will stop at (end) Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(143KB)], [84(18MB)] Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411628540, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 19798773, "oldest_snapshot_seqno": -1} Dec 5 05:26:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v963: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 14705 keys, 18713277 bytes, temperature: kUnknown Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411726692, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 18713277, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18629178, "index_size": 46238, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36805, "raw_key_size": 394811, "raw_average_key_size": 26, "raw_value_size": 18379423, "raw_average_value_size": 1249, "num_data_blocks": 1704, "num_entries": 14705, "num_filter_entries": 14705, "num_deletions": 0, "num_merge_operands": 0, 
"num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764928569, "oldest_key_time": 0, "file_creation_time": 1764930411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a2adafb0-ab57-42d1-a300-991623c80f9c", "db_session_id": "CJN0L4043B6ORZQ3CW6Q", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}} Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.726930) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 18713277 bytes Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.728776) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.6 rd, 190.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 18.7 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(261.2) write-amplify(126.9) OK, records in: 15220, records dropped: 515 output_compression: NoCompression Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.728810) EVENT_LOG_v1 {"time_micros": 1764930411728795, "job": 52, "event": "compaction_finished", "compaction_time_micros": 98220, "compaction_time_cpu_micros": 50049, "output_level": 6, "num_output_files": 1, "total_output_size": 18713277, "num_input_records": 15220, "num_output_records": 14705, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411729101, "job": 52, "event": "table_file_deletion", "file_number": 86} Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005546419/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764930411731881, "job": 52, "event": "table_file_deletion", "file_number": 84} Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.628433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.731991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.731998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] 
Manual compaction starting Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.732003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.732007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:26:51 localhost ceph-mon[292820]: rocksdb: (Original Log Time 2025/12/05-10:26:51.732011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 5 05:26:51 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:26:51.789 261902 INFO neutron.agent.dhcp.agent [None req-1fca7ac2-99c7-4e64-954c-fc8e920db106 - - - - - -] DHCP configuration for ports {'ebc759a9-6506-4351-81aa-3992b70af34c'} is completed#033[00m Dec 5 05:26:52 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:26:52.098 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:26:51Z, description=, device_id=6b9cd3e2-81f5-496f-92ef-ccd184a85b8a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c707ed24-0d3a-4074-88a1-fc1f3f126182, ip_allocation=immediate, mac_address=fa:16:3e:2f:b5:20, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:26:47Z, description=, dns_domain=, id=99e8910b-c5ca-4bb8-bd0f-52d544f817cf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-1893127566-network, port_security_enabled=True, project_id=0be5dd7ec9b24465a8f2ecd5c831c9a3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40619, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3938, status=ACTIVE, subnets=['a8125b1a-b5d2-45f5-a854-a24a583c2763'], tags=[], tenant_id=0be5dd7ec9b24465a8f2ecd5c831c9a3, updated_at=2025-12-05T10:26:48Z, vlan_transparent=None, network_id=99e8910b-c5ca-4bb8-bd0f-52d544f817cf, port_security_enabled=False, project_id=0be5dd7ec9b24465a8f2ecd5c831c9a3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3946, status=DOWN, tags=[], tenant_id=0be5dd7ec9b24465a8f2ecd5c831c9a3, updated_at=2025-12-05T10:26:51Z on network 99e8910b-c5ca-4bb8-bd0f-52d544f817cf#033[00m Dec 5 05:26:52 localhost dnsmasq[331623]: read /var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf/addn_hosts - 1 addresses Dec 5 05:26:52 localhost dnsmasq-dhcp[331623]: read /var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf/host Dec 5 05:26:52 localhost dnsmasq-dhcp[331623]: read /var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf/opts Dec 5 05:26:52 localhost podman[331641]: 2025-12-05 10:26:52.304720161 +0000 UTC m=+0.047592581 container kill 9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99e8910b-c5ca-4bb8-bd0f-52d544f817cf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true) Dec 5 05:26:53 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:26:53.048 261902 INFO neutron.agent.dhcp.agent [None req-e1b29126-f2d1-4ff5-80f7-ce5b17b13ae5 - - - - - -] DHCP configuration for ports {'c707ed24-0d3a-4074-88a1-fc1f3f126182'} is completed#033[00m Dec 5 05:26:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v964: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:26:54 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:26:54.249 261902 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-05T10:26:51Z, description=, device_id=6b9cd3e2-81f5-496f-92ef-ccd184a85b8a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c707ed24-0d3a-4074-88a1-fc1f3f126182, ip_allocation=immediate, mac_address=fa:16:3e:2f:b5:20, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-05T10:26:47Z, description=, dns_domain=, id=99e8910b-c5ca-4bb8-bd0f-52d544f817cf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-1893127566-network, port_security_enabled=True, project_id=0be5dd7ec9b24465a8f2ecd5c831c9a3, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40619, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3938, status=ACTIVE, subnets=['a8125b1a-b5d2-45f5-a854-a24a583c2763'], tags=[], tenant_id=0be5dd7ec9b24465a8f2ecd5c831c9a3, updated_at=2025-12-05T10:26:48Z, vlan_transparent=None, network_id=99e8910b-c5ca-4bb8-bd0f-52d544f817cf, port_security_enabled=False, project_id=0be5dd7ec9b24465a8f2ecd5c831c9a3, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3946, status=DOWN, tags=[], tenant_id=0be5dd7ec9b24465a8f2ecd5c831c9a3, updated_at=2025-12-05T10:26:51Z on network 99e8910b-c5ca-4bb8-bd0f-52d544f817cf#033[00m Dec 5 05:26:54 localhost dnsmasq[331623]: read /var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf/addn_hosts - 1 addresses Dec 5 05:26:54 localhost dnsmasq-dhcp[331623]: read /var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf/host Dec 5 05:26:54 localhost podman[331679]: 2025-12-05 10:26:54.461012125 +0000 UTC m=+0.064336194 container kill 9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99e8910b-c5ca-4bb8-bd0f-52d544f817cf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 5 05:26:54 localhost dnsmasq-dhcp[331623]: read /var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf/opts Dec 5 05:26:54 localhost nova_compute[280228]: 2025-12-05 10:26:54.632 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 
05:26:54 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:26:54.690 261902 INFO neutron.agent.dhcp.agent [None req-93196cbb-fbe7-4d39-9300-90dcd322d582 - - - - - -] DHCP configuration for ports {'c707ed24-0d3a-4074-88a1-fc1f3f126182'} is completed#033[00m Dec 5 05:26:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v965: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:26:55 localhost nova_compute[280228]: 2025-12-05 10:26:55.790 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:26:57 localhost openstack_network_exporter[241668]: ERROR 10:26:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:26:57 localhost openstack_network_exporter[241668]: ERROR 10:26:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:26:57 localhost openstack_network_exporter[241668]: ERROR 10:26:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:26:57 localhost openstack_network_exporter[241668]: ERROR 10:26:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:26:57 localhost openstack_network_exporter[241668]: Dec 5 05:26:57 localhost openstack_network_exporter[241668]: ERROR 10:26:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:26:57 localhost openstack_network_exporter[241668]: Dec 5 05:26:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v966: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:26:59 localhost nova_compute[280228]: 2025-12-05 10:26:59.633 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:26:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v967: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:27:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:27:00 localhost podman[331702]: 2025-12-05 10:27:00.211620229 +0000 UTC m=+0.081221642 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9) Dec 5 05:27:00 localhost podman[331702]: 2025-12-05 10:27:00.25275029 +0000 UTC m=+0.122351663 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, distribution-scope=public, vendor=Red Hat, Inc.) Dec 5 05:27:00 localhost podman[331701]: 2025-12-05 10:27:00.26351032 +0000 UTC m=+0.131540446 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 5 05:27:00 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
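The transient `podman healthcheck run <id>` units above exit 0 when the container's configured healthcheck passes, which is what produces the `health_status=healthy` label followed by `Deactivated successfully`. A minimal sketch of driving the same check from Python (container name `multipathd` is taken from the log; the wrapper itself is illustrative):

    import subprocess

    def container_healthy(name_or_id: str) -> bool:
        # `podman healthcheck run` executes the container's configured
        # healthcheck command and exits 0 on "healthy".
        result = subprocess.run(
            ["podman", "healthcheck", "run", name_or_id],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        print("multipathd healthy:", container_healthy("multipathd"))
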
Dec 5 05:27:00 localhost podman[331701]: 2025-12-05 10:27:00.276502539 +0000 UTC m=+0.144532655 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125) Dec 5 05:27:00 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:27:00 localhost nova_compute[280228]: 2025-12-05 10:27:00.793 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:27:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v968: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 5 05:27:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2856942644' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 5 05:27:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 5 05:27:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2856942644' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 5 05:27:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v969: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:27:04.583 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:27:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:27:04.583 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:27:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:27:04.585 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:27:04 localhost nova_compute[280228]: 2025-12-05 10:27:04.695 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e298 do_prune osdmap full prune enabled Dec 5 05:27:05 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e299 e299: 6 total, 6 up, 6 in Dec 5 05:27:05 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e299: 6 total, 6 up, 6 in Dec 5 05:27:05 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v971: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:05 localhost nova_compute[280228]: 2025-12-05 10:27:05.816 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e299 do_prune osdmap full prune enabled Dec 5 05:27:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e300 e300: 6 total, 6 up, 6 in Dec 5 05:27:06 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e300: 6 total, 6 up, 6 in Dec 5 05:27:06 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:27:07 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v973: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 245 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 2.6 MiB/s wr, 46 op/s Dec 5 05:27:09 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v974: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 245 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 2.6 MiB/s wr, 46 op/s Dec 5 05:27:09 localhost nova_compute[280228]: 2025-12-05 10:27:09.732 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:10 localhost nova_compute[280228]: 2025-12-05 10:27:10.859 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:11 localhost ovn_controller[153000]: 2025-12-05T10:27:11Z|00414|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:27:11 localhost nova_compute[280228]: 2025-12-05 10:27:11.398 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:11 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:27:11 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v975: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 55 op/s Dec 5 05:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. Dec 5 05:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:27:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:27:12 localhost podman[331740]: 2025-12-05 10:27:12.20418956 +0000 UTC m=+0.075961401 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:27:12 localhost podman[331740]: 2025-12-05 10:27:12.213029161 +0000 UTC m=+0.084801062 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 
'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 5 05:27:12 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. Dec 5 05:27:12 localhost systemd[1]: tmp-crun.rdwhnd.mount: Deactivated successfully. Dec 5 05:27:12 localhost podman[331739]: 2025-12-05 10:27:12.266524041 +0000 UTC m=+0.144364438 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 5 05:27:12 localhost podman[331744]: 2025-12-05 10:27:12.281350576 +0000 UTC m=+0.148022110 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 5 05:27:12 localhost podman[331744]: 2025-12-05 10:27:12.296641665 +0000 UTC m=+0.163313219 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 5 05:27:12 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. 
Dec 5 05:27:12 localhost podman[331739]: 2025-12-05 10:27:12.332806005 +0000 UTC m=+0.210646462 container exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 5 05:27:12 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 05:27:13 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v976: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 2.4 MiB/s wr, 51 op/s Dec 5 05:27:14 localhost nova_compute[280228]: 2025-12-05 10:27:14.735 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:27:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:27:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 5 05:27:15 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:27:15 localhost dnsmasq[331623]: read /var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf/addn_hosts - 0 addresses Dec 5 05:27:15 localhost dnsmasq-dhcp[331623]: read /var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf/host Dec 5 05:27:15 localhost podman[331816]: 2025-12-05 10:27:15.268000729 +0000 UTC m=+0.060305350 container kill 9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99e8910b-c5ca-4bb8-bd0f-52d544f817cf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 5 05:27:15 localhost dnsmasq-dhcp[331623]: read /var/lib/neutron/dhcp/99e8910b-c5ca-4bb8-bd0f-52d544f817cf/opts Dec 5 05:27:15 localhost nova_compute[280228]: 2025-12-05 10:27:15.447 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:15 localhost kernel: device tap0cf75712-d3 left promiscuous mode Dec 5 05:27:15 localhost ovn_controller[153000]: 2025-12-05T10:27:15Z|00415|binding|INFO|Releasing lport 0cf75712-d34e-4386-a69f-090c6cb89784 from this chassis (sb_readonly=0) Dec 5 05:27:15 localhost ovn_controller[153000]: 2025-12-05T10:27:15Z|00416|binding|INFO|Setting lport 0cf75712-d34e-4386-a69f-090c6cb89784 down in Southbound Dec 5 05:27:15 localhost ovn_metadata_agent[158815]: 2025-12-05 10:27:15.457 158820 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005546419.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp70751a41-0f4a-5fcf-838b-93ea17129f30-99e8910b-c5ca-4bb8-bd0f-52d544f817cf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99e8910b-c5ca-4bb8-bd0f-52d544f817cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0be5dd7ec9b24465a8f2ecd5c831c9a3', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005546419.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a658a58c-1a17-449f-b8da-51bf1cafc094, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0cf75712-d34e-4386-a69f-090c6cb89784) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 5 05:27:15 localhost ovn_metadata_agent[158815]: 2025-12-05 10:27:15.459 158820 INFO neutron.agent.ovn.metadata.agent [-] Port 0cf75712-d34e-4386-a69f-090c6cb89784 in datapath 99e8910b-c5ca-4bb8-bd0f-52d544f817cf unbound from our chassis#033[00m Dec 5 05:27:15 localhost ovn_metadata_agent[158815]: 2025-12-05 
10:27:15.463 158820 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99e8910b-c5ca-4bb8-bd0f-52d544f817cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 5 05:27:15 localhost ovn_metadata_agent[158815]: 2025-12-05 10:27:15.464 158926 DEBUG oslo.privsep.daemon [-] privsep: reply[4700d21b-03bb-431e-ac27-9d29e3bf9b6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 5 05:27:15 localhost nova_compute[280228]: 2025-12-05 10:27:15.467 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:15 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v977: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 44 op/s Dec 5 05:27:15 localhost nova_compute[280228]: 2025-12-05 10:27:15.887 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:27:16 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:27:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:27:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e300 do_prune osdmap full prune enabled Dec 5 05:27:16 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e301 e301: 6 total, 6 up, 6 in Dec 5 05:27:16 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : osdmap e301: 6 total, 6 up, 6 in Dec 5 05:27:17 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v979: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 4.6 KiB/s rd, 614 B/s wr, 6 op/s Dec 5 05:27:18 localhost ovn_controller[153000]: 2025-12-05T10:27:18Z|00417|binding|INFO|Releasing lport 6eec4798-2413-4eda-86b7-a390f3150ec8 from this chassis (sb_readonly=0) Dec 5 05:27:18 localhost nova_compute[280228]: 2025-12-05 10:27:18.270 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:19 localhost dnsmasq[331623]: exiting on receipt of SIGTERM Dec 5 05:27:19 localhost systemd[1]: libpod-9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7.scope: Deactivated successfully. 
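The teardown decision logged above is simple: once a network has no VIF ports bound on this chassis, its per-network namespace can be removed (the `run-netns-qdhcp-*` mount deactivation later in the log is the visible effect). A rough sketch of that check, where the namespace prefix and the port-listing helper are assumptions standing in for the agent's own OVN southbound lookup:

    import subprocess

    def teardown_namespace_if_empty(network_id: str, vif_ports: list) -> None:
        # Mirrors the agent's logic above: no VIF ports left on this
        # chassis for the network -> remove its namespace.
        if vif_ports:
            return
        ns = f"qdhcp-{network_id}"  # prefix inferred from the netns mount name
        # `ip netns delete` fails if the namespace is already gone;
        # check=False keeps this sketch idempotent.
        subprocess.run(["ip", "netns", "delete", ns], check=False)
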
Dec 5 05:27:19 localhost podman[331856]: 2025-12-05 10:27:19.124375146 +0000 UTC m=+0.066678536 container kill 9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99e8910b-c5ca-4bb8-bd0f-52d544f817cf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 5 05:27:19 localhost podman[331868]: 2025-12-05 10:27:19.193351542 +0000 UTC m=+0.056977449 container died 9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99e8910b-c5ca-4bb8-bd0f-52d544f817cf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 05:27:19 localhost podman[331868]: 2025-12-05 10:27:19.228983924 +0000 UTC m=+0.092609801 container cleanup 9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99e8910b-c5ca-4bb8-bd0f-52d544f817cf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:27:19 localhost systemd[1]: libpod-conmon-9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7.scope: Deactivated successfully. Dec 5 05:27:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. 
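The kill/died/cleanup/remove sequence above is the tail of the containerized dnsmasq lifecycle: the DHCP agent signals the container rather than a bare process. A minimal sketch of both signals via podman (container name copied from the log; pairing HUP with reloads is inferred from the `read .../host` lines that follow each earlier `container kill` event, while TERM matches the `exiting on receipt of SIGTERM` line):

    import subprocess

    NAME = "neutron-dnsmasq-qdhcp-99e8910b-c5ca-4bb8-bd0f-52d544f817cf"

    def reload_dnsmasq():
        # SIGHUP makes dnsmasq re-read its addn_hosts/host/opts files.
        subprocess.run(["podman", "kill", "--signal", "HUP", NAME], check=True)

    def stop_dnsmasq():
        # SIGTERM stops dnsmasq; podman then emits died/cleanup/remove.
        subprocess.run(["podman", "kill", "--signal", "TERM", NAME], check=True)
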
Dec 5 05:27:19 localhost podman[331870]: 2025-12-05 10:27:19.277691288 +0000 UTC m=+0.133001090 container remove 9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-99e8910b-c5ca-4bb8-bd0f-52d544f817cf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 5 05:27:19 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:27:19.308 261902 INFO neutron.agent.dhcp.agent [None req-28c3d00b-544e-4293-807d-95a21fd6d0b9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:27:19 localhost neutron_dhcp_agent[261898]: 2025-12-05 10:27:19.309 261902 INFO neutron.agent.dhcp.agent [None req-28c3d00b-544e-4293-807d-95a21fd6d0b9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 5 05:27:19 localhost podman[331893]: 2025-12-05 10:27:19.351681137 +0000 UTC m=+0.083891464 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:27:19 localhost podman[331893]: 2025-12-05 10:27:19.361554741 +0000 UTC m=+0.093765068 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', 
'--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:27:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:27:19 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. Dec 5 05:27:19 localhost podman[331918]: 2025-12-05 10:27:19.462768865 +0000 UTC m=+0.075116765 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 5 05:27:19 localhost podman[331918]: 2025-12-05 10:27:19.553718564 +0000 UTC m=+0.166066464 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 5 05:27:19 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:27:19 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v980: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 4.6 KiB/s rd, 614 B/s wr, 6 op/s Dec 5 05:27:19 localhost nova_compute[280228]: 2025-12-05 10:27:19.737 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:19 localhost podman[239519]: time="2025-12-05T10:27:19Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:27:19 localhost podman[239519]: @ - - [05/Dec/2025:10:27:19 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:27:19 localhost podman[239519]: @ - - [05/Dec/2025:10:27:19 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19281 "" "Go-http-client/1.1" Dec 5 05:27:20 localhost systemd[1]: var-lib-containers-storage-overlay-da05db1f8761c0f531a5059922d0dd00627c871fdbbb390bcef05d0fb7079a10-merged.mount: Deactivated successfully. Dec 5 05:27:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9df927be2d93c4075caaceee9627b1ab0808bec8795d0ef065521f4797d1cdc7-userdata-shm.mount: Deactivated successfully. Dec 5 05:27:20 localhost systemd[1]: run-netns-qdhcp\x2d99e8910b\x2dc5ca\x2d4bb8\x2dbd0f\x2d52d544f817cf.mount: Deactivated successfully. 
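The node_exporter container above publishes on host port 9100 (per its `ports: ['9100:9100']` config) with `--collector.systemd` enabled and a unit-include filter. A quick way to verify that collector set from the same host, assuming the exporter's standard /metrics path:

    from urllib.request import urlopen

    # Port taken from the container's 'ports': ['9100:9100'] config above.
    with urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            # systemd unit metrics exist because --collector.systemd is set.
            if line.startswith("node_systemd_unit_state"):
                print(line)
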
Dec 5 05:27:20 localhost nova_compute[280228]: 2025-12-05 10:27:20.890 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:21 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:27:21 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v981: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:23 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v982: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:24 localhost nova_compute[280228]: 2025-12-05 10:27:24.739 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:25 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v983: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:25 localhost nova_compute[280228]: 2025-12-05 10:27:25.927 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:26 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:27:27 localhost openstack_network_exporter[241668]: ERROR 10:27:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:27:27 localhost openstack_network_exporter[241668]: ERROR 10:27:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:27:27 localhost openstack_network_exporter[241668]: ERROR 10:27:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:27:27 localhost openstack_network_exporter[241668]: ERROR 10:27:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:27:27 localhost openstack_network_exporter[241668]: Dec 5 05:27:27 localhost openstack_network_exporter[241668]: ERROR 10:27:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:27:27 localhost openstack_network_exporter[241668]: Dec 5 05:27:27 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v984: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:29 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v985: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:29 localhost nova_compute[280228]: 2025-12-05 10:27:29.742 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:30 localhost nova_compute[280228]: 2025-12-05 10:27:30.969 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:27:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. 
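The access-log lines from podman[239519] above are the libpod REST API answering the podman_exporter, which connects over the unix socket named in its `CONTAINER_HOST` setting. A small sketch of issuing the same GET over that socket (socket path and endpoint copied from the log; the adapter class is illustrative):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Minimal adapter: speak HTTP over the podman unix socket
        # (path from CONTAINER_HOST in the exporter config above).
        def __init__(self, path="/run/podman/podman.sock"):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection()
    # Same endpoint the exporter hits in the access-log lines above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().status)
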
Dec 5 05:27:31 localhost podman[331943]: 2025-12-05 10:27:31.204114798 +0000 UTC m=+0.090531147 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 5 05:27:31 localhost podman[331943]: 2025-12-05 10:27:31.21592562 +0000 UTC m=+0.102341979 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible) Dec 5 05:27:31 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully. Dec 5 05:27:31 localhost podman[331944]: 2025-12-05 10:27:31.310358507 +0000 UTC m=+0.190480333 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter) Dec 5 05:27:31 localhost podman[331944]: 2025-12-05 10:27:31.316734902 +0000 UTC m=+0.196856688 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 5 05:27:31 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
Dec 5 05:27:31 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:27:31 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v986: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:33 localhost nova_compute[280228]: 2025-12-05 10:27:33.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:27:33 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v987: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:34 localhost nova_compute[280228]: 2025-12-05 10:27:34.776 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:35 localhost nova_compute[280228]: 2025-12-05 10:27:35.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:27:35 localhost nova_compute[280228]: 2025-12-05 10:27:35.526 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:27:35 localhost nova_compute[280228]: 2025-12-05 10:27:35.527 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:27:35 localhost nova_compute[280228]: 2025-12-05 10:27:35.528 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:27:35 localhost nova_compute[280228]: 2025-12-05 10:27:35.528 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Auditing locally available compute resources for np0005546419.localdomain (node: np0005546419.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 5 05:27:35 localhost nova_compute[280228]: 2025-12-05 10:27:35.529 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:27:35 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v988: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 
05:27:35 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1313180631' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:27:35 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain.devices.0}] v 0) Dec 5 05:27:35 localhost nova_compute[280228]: 2025-12-05 10:27:35.976 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:27:35 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:27:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546419.localdomain}] v 0) Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.007 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:27:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain.devices.0}] v 0) Dec 5 05:27:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:27:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005546420.localdomain}] v 0) Dec 5 05:27:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.066 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.067 280232 DEBUG nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.246 280232 WARNING nova.virt.libvirt.driver [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.247 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Hypervisor/Node resource view: name=np0005546419.localdomain free_ram=11024MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.248 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.248 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.313 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Instance 96a47a1c-57c7-4bb1-aecc-33db976db8c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.313 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.313 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Final resource view: name=np0005546419.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.345 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 5 05:27:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:27:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 5 05:27:36 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2419476535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.832 280232 DEBUG oslo_concurrency.processutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 5 05:27:36 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:27:36 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:27:36 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:27:36 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.840 280232 DEBUG nova.compute.provider_tree [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed in ProviderTree for provider: 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.865 280232 DEBUG nova.scheduler.client.report [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Inventory has not changed for provider 6764eb33-a0ac-428c-a232-bb5bf7a96ee3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} 
set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.867 280232 DEBUG nova.compute.resource_tracker [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Compute_service record updated for np0005546419.localdomain:np0005546419.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 5 05:27:36 localhost nova_compute[280228]: 2025-12-05 10:27:36.868 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 5 05:27:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 5 05:27:36 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 5 05:27:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 5 05:27:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:27:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 5 05:27:36 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:27:36 localhost ceph-mgr[286454]: [progress INFO root] update: starting ev f42837e2-e0f8-45a6-9518-3e724367aa40 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:27:36 localhost ceph-mgr[286454]: [progress INFO root] complete: finished ev f42837e2-e0f8-45a6-9518-3e724367aa40 (Updating node-proxy deployment (+3 -> 3)) Dec 5 05:27:36 localhost ceph-mgr[286454]: [progress INFO root] Completed event f42837e2-e0f8-45a6-9518-3e724367aa40 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 5 05:27:36 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 5 05:27:36 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 5 05:27:37 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v989: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:37 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 5 05:27:37 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:27:38 localhost nova_compute[280228]: 2025-12-05 10:27:38.864 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:27:38 localhost nova_compute[280228]: 2025-12-05 10:27:38.865 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:27:39 localhost nova_compute[280228]: 2025-12-05 10:27:39.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:27:39 localhost nova_compute[280228]: 2025-12-05 10:27:39.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 5 05:27:39 localhost nova_compute[280228]: 2025-12-05 10:27:39.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 5 05:27:39 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v990: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:39 localhost nova_compute[280228]: 2025-12-05 10:27:39.807 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:39 localhost nova_compute[280228]: 2025-12-05 10:27:39.880 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquiring lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 5 05:27:39 localhost nova_compute[280228]: 2025-12-05 10:27:39.880 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Acquired lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 5 05:27:39 localhost nova_compute[280228]: 2025-12-05 10:27:39.880 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 5 05:27:39 localhost nova_compute[280228]: 2025-12-05 10:27:39.881 280232 DEBUG nova.objects.instance [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 96a47a1c-57c7-4bb1-aecc-33db976db8c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 5 05:27:40 localhost nova_compute[280228]: 2025-12-05 10:27:40.326 280232 DEBUG nova.network.neutron [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updating instance_info_cache with network_info: [{"id": "c2f95d81-2317-46b9-8146-596eac8f9acb", "address": "fa:16:3e:04:e6:3a", "network": {"id": "86f5c13f-3cf8-4808-86c3-060f6b38ab5b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": 
"192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e6ca8a92050741d3a93772e6c1b0d704", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2f95d81-23", "ovs_interfaceid": "c2f95d81-2317-46b9-8146-596eac8f9acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 5 05:27:40 localhost nova_compute[280228]: 2025-12-05 10:27:40.346 280232 DEBUG oslo_concurrency.lockutils [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Releasing lock "refresh_cache-96a47a1c-57c7-4bb1-aecc-33db976db8c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 5 05:27:40 localhost nova_compute[280228]: 2025-12-05 10:27:40.346 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] [instance: 96a47a1c-57c7-4bb1-aecc-33db976db8c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 5 05:27:40 localhost nova_compute[280228]: 2025-12-05 10:27:40.347 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:27:40 localhost ceph-mgr[286454]: [progress INFO root] Writing back 50 completed events Dec 5 05:27:40 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 5 05:27:40 localhost ceph-mon[292820]: log_channel(audit) log [INF] : from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:27:40 localhost ceph-mon[292820]: from='mgr.44372 172.18.0.106:0/3556973025' entity='mgr.np0005546419.zhsnqq' Dec 5 05:27:41 localhost nova_compute[280228]: 2025-12-05 10:27:41.045 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:41 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:27:41 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v991: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:42 localhost nova_compute[280228]: 2025-12-05 10:27:42.343 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6. 
Dec 5 05:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465. Dec 5 05:27:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a. Dec 5 05:27:43 localhost sshd[332193]: main: sshd: ssh-rsa algorithm is disabled Dec 5 05:27:43 localhost systemd[1]: tmp-crun.FYda5z.mount: Deactivated successfully. Dec 5 05:27:43 localhost podman[332170]: 2025-12-05 10:27:43.231953421 +0000 UTC m=+0.110949054 container health_status 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 5 05:27:43 localhost podman[332171]: 2025-12-05 10:27:43.271465393 +0000 UTC m=+0.149582570 container health_status 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 5 05:27:43 localhost podman[332170]: 2025-12-05 10:27:43.274527176 +0000 UTC m=+0.153522799 container 
exec_died 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 5 05:27:43 localhost systemd[1]: 192494c23ef66ea107a6e078b9e1581d9b5b9e1dbc23089d3d698ad02a3e76e6.service: Deactivated successfully. Dec 5 05:27:43 localhost systemd-logind[760]: New session 75 of user zuul. Dec 5 05:27:43 localhost podman[332171]: 2025-12-05 10:27:43.307493258 +0000 UTC m=+0.185610425 container exec_died 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent) Dec 5 05:27:43 localhost systemd[1]: Started Session 75 of User zuul. Dec 5 05:27:43 localhost systemd[1]: 1a95b7f3707410b316abd92b7e1c96e2efab7fcaa73194a793965abb71a52465.service: Deactivated successfully. 
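In the 10:27:35-10:27:36 audit entries earlier, the nova resource tracker shells out to `ceph df` twice through oslo.concurrency's processutils and parses the JSON to size the RBD-backed disk pool. A sketch of that call, assuming oslo.concurrency is available and the client.openstack keyring permits `ceph df`; the key layout follows the JSON that `ceph df --format=json` emits on recent releases:

    import json
    from oslo_concurrency import processutils

    # Same invocation the resource tracker logs; execute() returns
    # (stdout, stderr) and raises ProcessExecutionError on non-zero exit.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    stats = json.loads(out)
    print(stats['stats']['total_avail_bytes'] / 2**30, 'GiB available')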
Dec 5 05:27:43 localhost podman[332172]: 2025-12-05 10:27:43.396960912 +0000 UTC m=+0.269825918 container health_status 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 5 05:27:43 localhost podman[332172]: 2025-12-05 10:27:43.437602428 +0000 UTC m=+0.310467374 container exec_died 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, 
maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true) Dec 5 05:27:43 localhost systemd[1]: 6dbb66a1b92a231fe72211be3913f2247250e435cf839b71e2e1865af4605c4a.service: Deactivated successfully. Dec 5 05:27:43 localhost nova_compute[280228]: 2025-12-05 10:27:43.506 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:27:43 localhost nova_compute[280228]: 2025-12-05 10:27:43.507 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:27:43 localhost nova_compute[280228]: 2025-12-05 10:27:43.507 280232 DEBUG nova.compute.manager [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 5 05:27:43 localhost python3[332254]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-db09-90c9-00000000000c-1-overcloudnovacompute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 5 05:27:43 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v992: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:44 localhost systemd[1]: tmp-crun.XSlRW7.mount: Deactivated successfully. Dec 5 05:27:44 localhost nova_compute[280228]: 2025-12-05 10:27:44.846 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:27:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Dec 5 05:27:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 5 05:27:45 localhost ceph-mgr[286454]: [balancer INFO root] Optimize plan auto_2025-12-05_10:27:45 Dec 5 05:27:45 localhost ceph-mgr[286454]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 5 05:27:45 localhost ceph-mgr[286454]: [balancer INFO root] do_upmap Dec 5 05:27:45 localhost ceph-mgr[286454]: [balancer INFO root] pools ['manila_metadata', 'images', 'vms', 'backups', '.mgr', 'volumes', 'manila_data'] Dec 5 05:27:45 localhost ceph-mgr[286454]: [balancer INFO root] prepared 0/10 changes Dec 5 05:27:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. 
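The steady stream of `Running periodic task ComputeManager._*` lines comes from oslo.service's periodic task runner, which walks every method the manager class has decorated as a periodic task and logs each dispatch. A minimal sketch of how such tasks are declared; the manager class and spacing value here are illustrative, not nova's actual definitions:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        # run_immediately=True so the demo fires on the first pass;
        # otherwise the first run waits out the spacing interval.
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _poll_rescued_instances(self, context):
            pass  # periodic work goes here

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)  # emits the DEBUG lines seen above

Tasks can also short-circuit themselves, as `_reclaim_queued_deletes` does above when CONF.reclaim_instance_interval <= 0.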
Dec 5 05:27:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Dec 5 05:27:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 5 05:27:45 localhost ceph-mgr[286454]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Dec 5 05:27:45 localhost ceph-mon[292820]: log_channel(cluster) log [DBG] : mgrmap e55: np0005546419.zhsnqq(active, since 26m), standbys: np0005546420.aoeylc, np0005546421.sukfea Dec 5 05:27:45 localhost nova_compute[280228]: 2025-12-05 10:27:45.508 280232 DEBUG oslo_service.periodic_task [None req-71aaba99-9dda-4eb6-a2df-2037ad61e011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 5 05:27:45 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v993: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] _maybe_adjust Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32) Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 5 05:27:45 localhost ceph-mgr[286454]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002851442542573981 of space, bias 4.0, pg target 2.269748263888889 quantized to 16 (current 16) Dec 5 05:27:45 localhost ceph-mgr[286454]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 5 05:27:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:27:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: 
volumes, start_after= Dec 5 05:27:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:27:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:27:45 localhost ceph-mgr[286454]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 5 05:27:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 5 05:27:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 5 05:27:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: images, start_after= Dec 5 05:27:45 localhost ceph-mgr[286454]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 5 05:27:46 localhost nova_compute[280228]: 2025-12-05 10:27:46.094 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] scanning for idle connections.. Dec 5 05:27:46 localhost ceph-mgr[286454]: [volumes INFO mgr_util] cleaning up connections: [] Dec 5 05:27:46 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:27:47 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v994: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Dec 5 05:27:49 localhost systemd[1]: session-75.scope: Deactivated successfully. Dec 5 05:27:49 localhost systemd-logind[760]: Session 75 logged out. Waiting for processes to exit. Dec 5 05:27:49 localhost systemd-logind[760]: Removed session 75. Dec 5 05:27:49 localhost ovn_controller[153000]: 2025-12-05T10:27:49Z|00418|memory_trim|INFO|Detected inactivity (last active 30023 ms ago): trimming memory Dec 5 05:27:49 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v995: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Dec 5 05:27:49 localhost nova_compute[280228]: 2025-12-05 10:27:49.849 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:49 localhost podman[239519]: time="2025-12-05T10:27:49Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 5 05:27:49 localhost podman[239519]: @ - - [05/Dec/2025:10:27:49 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 5 05:27:49 localhost podman[239519]: @ - - [05/Dec/2025:10:27:49 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19282 "" "Go-http-client/1.1" Dec 5 05:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857. Dec 5 05:27:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749. Dec 5 05:27:50 localhost systemd[1]: tmp-crun.fY5w3T.mount: Deactivated successfully. 
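The `podman[239519]: @ - - "GET /v4.9.3/libpod/containers/json..."` access-log entries above are the libpod REST API being queried over /run/podman/podman.sock, the same socket the podman_exporter container mounts via CONTAINER_HOST. A stdlib-only sketch of that request; the socket path is taken from the exporter's config above and root privileges are assumed:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')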
Dec 5 05:27:50 localhost podman[332257]: 2025-12-05 10:27:50.189515863 +0000 UTC m=+0.075865328 container health_status 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Dec 5 05:27:50 localhost podman[332258]: 2025-12-05 10:27:50.245969764 +0000 UTC m=+0.127486140 container health_status eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 5 05:27:50 localhost podman[332257]: 2025-12-05 10:27:50.254562798 +0000 UTC m=+0.140912293 container exec_died 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 5 05:27:50 localhost systemd[1]: 6818c81339a322591e659b40eb82e9440c151dc82b654eab45542ccf1f850857.service: Deactivated successfully. Dec 5 05:27:50 localhost podman[332258]: 2025-12-05 10:27:50.31166552 +0000 UTC m=+0.193181906 container exec_died eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 5 05:27:50 localhost systemd[1]: eb79f12f53593e1aec8d53ace15d828b44cbef71db2c875838fcdd76f779a749.service: Deactivated successfully. 
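The node_exporter above is started with `--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service`, so only matching systemd units are exported as metrics. A quick check of which of this host's units the filter keeps; to my reading node_exporter anchors include patterns to the whole unit name, so re.fullmatch is the analogue (unit names below are examples from this log):

    import re

    unit_include = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')

    for unit in ['openvswitch.service', 'rsyslog.service',
                 'virtqemud.service', 'dnf-makecache.service', 'sshd.service']:
        print(unit, bool(unit_include.fullmatch(unit)))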
Dec 5 05:27:51 localhost nova_compute[280228]: 2025-12-05 10:27:51.098 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:51 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:27:51 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v996: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Dec 5 05:27:53 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v997: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Dec 5 05:27:54 localhost nova_compute[280228]: 2025-12-05 10:27:54.858 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:55 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v998: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Dec 5 05:27:56 localhost nova_compute[280228]: 2025-12-05 10:27:56.142 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:56 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 5 05:27:57 localhost openstack_network_exporter[241668]: ERROR 10:27:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 5 05:27:57 localhost openstack_network_exporter[241668]: ERROR 10:27:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:27:57 localhost openstack_network_exporter[241668]: ERROR 10:27:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 5 05:27:57 localhost openstack_network_exporter[241668]: ERROR 10:27:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 5 05:27:57 localhost openstack_network_exporter[241668]: Dec 5 05:27:57 localhost openstack_network_exporter[241668]: ERROR 10:27:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 5 05:27:57 localhost openstack_network_exporter[241668]: Dec 5 05:27:57 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v999: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s Dec 5 05:27:59 localhost systemd[1]: Starting dnf makecache... Dec 5 05:27:59 localhost dnf[332306]: Updating Subscription Management repositories. Dec 5 05:27:59 localhost dnf[332306]: Unable to read consumer identity Dec 5 05:27:59 localhost dnf[332306]: This system is not registered with an entitlement server. You can use subscription-manager to register. 
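The repeated openstack_network_exporter errors above ("no control socket files found for the ovs db server" / "for ovn-northd") mean the exporter found no appctl control sockets where it looked: this compute node runs ovn-controller but no ovn-northd, and what the container sees under /run depends on the volume mounts in its config_data. A sketch of the same existence check; the glob patterns use the conventional rundir naming (<daemon>.<pid>.ctl) and are an assumption about where this exporter looks, not taken from its source:

    import glob

    for pattern in ('/run/openvswitch/ovsdb-server.*.ctl',
                    '/run/ovn/ovn-northd.*.ctl'):
        hits = glob.glob(pattern)
        print(pattern, '->', hits or 'no control socket files found')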
Dec 5 05:27:59 localhost dnf[332306]: delorean-openstack-barbican-42b4c41831408a8e323 66 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost dnf[332306]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 80 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost dnf[332306]: delorean-openstack-cinder-1c00d6490d88e436f26ef 78 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost dnf[332306]: delorean-python-stevedore-c4acc5639fd2329372142 81 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost dnf[332306]: delorean-python-cloudkitty-tests-tempest-2c80f8 83 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost dnf[332306]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 79 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost dnf[332306]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 82 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost dnf[332306]: delorean-python-designate-tests-tempest-347fdbc 85 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v1000: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s Dec 5 05:27:59 localhost dnf[332306]: delorean-openstack-glance-1fd12c29b339f30fe823e 77 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost dnf[332306]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 84 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost dnf[332306]: delorean-openstack-manila-3c01b7181572c95dac462 84 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost dnf[332306]: delorean-python-whitebox-neutron-tests-tempest- 79 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost nova_compute[280228]: 2025-12-05 10:27:59.898 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:27:59 localhost dnf[332306]: delorean-openstack-octavia-ba397f07a7331190208c 78 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost dnf[332306]: delorean-openstack-watcher-c014f81a8647287f6dcc 80 kB/s | 3.0 kB 00:00 Dec 5 05:27:59 localhost dnf[332306]: delorean-ansible-config_template-5ccaa22121a7ff 79 kB/s | 3.0 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 80 kB/s | 3.0 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: delorean-openstack-swift-dc98a8463506ac520c469a 78 kB/s | 3.0 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: delorean-python-tempestconf-8515371b7cceebd4282 74 kB/s | 3.0 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: delorean-openstack-heat-ui-013accbfd179753bc3f0 79 kB/s | 3.0 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: dlrn-antelope-testing 78 kB/s | 3.0 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: dlrn-antelope-build-deps 79 kB/s | 3.0 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: centos9-rabbitmq 23 kB/s | 3.0 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: centos9-storage 52 kB/s | 3.0 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: centos9-opstools 62 kB/s | 3.0 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: NFV SIG OpenvSwitch 74 kB/s | 3.0 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: repo-setup-centos-appstream 107 kB/s | 4.4 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: repo-setup-centos-baseos 89 kB/s | 3.9 kB 00:00 Dec 5 05:28:00 localhost dnf[332306]: repo-setup-centos-highavailability 95 kB/s | 3.9 kB 00:00 Dec 5 05:28:01 localhost dnf[332306]: repo-setup-centos-powertools 14 kB/s | 4.3 kB 00:00 Dec 5 05:28:01 localhost nova_compute[280228]: 2025-12-05 10:28:01.169 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 5 05:28:01 localhost 
dnf[332306]: Extra Packages for Enterprise Linux 9 - x86_64 188 kB/s | 30 kB 00:00 Dec 5 05:28:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173. Dec 5 05:28:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0. Dec 5 05:28:01 localhost podman[332346]: 2025-12-05 10:28:01.452823963 +0000 UTC m=+0.085771561 container health_status d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible) Dec 5 05:28:01 localhost podman[332346]: 2025-12-05 10:28:01.501793886 +0000 UTC m=+0.134741554 container exec_died d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1755695350, version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible) Dec 5 05:28:01 localhost systemd[1]: tmp-crun.y76umH.mount: Deactivated successfully. Dec 5 05:28:01 localhost systemd[1]: d2ba0be8d82d9c5a5ee9cbb3e42c0c11ee33dcce79fb99d7fd27e26835810bc0.service: Deactivated successfully. 
Dec 5 05:28:01 localhost podman[332345]: 2025-12-05 10:28:01.52358575 +0000 UTC m=+0.156821557 container health_status 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Dec 5 05:28:01 localhost podman[332345]: 2025-12-05 10:28:01.538967065 +0000 UTC m=+0.172202942 container exec_died 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec 5 05:28:01 localhost systemd[1]: 8029d8237656f61b9e36dd992204b42b73429b8bd51c9f15cf612a8824308173.service: Deactivated successfully.
Dec 5 05:28:01 localhost ceph-mon[292820]: mon.np0005546419@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 5 05:28:01 localhost dnf[332306]: Metadata cache created.
Dec 5 05:28:01 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v1001: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 5 05:28:01 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 5 05:28:01 localhost systemd[1]: Finished dnf makecache.
Dec 5 05:28:01 localhost systemd[1]: dnf-makecache.service: Consumed 1.975s CPU time.
Dec 5 05:28:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 5 05:28:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3132035787' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 5 05:28:03 localhost ceph-mon[292820]: mon.np0005546419@0(leader) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 5 05:28:03 localhost ceph-mon[292820]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3132035787' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 5 05:28:03 localhost ceph-mgr[286454]: log_channel(cluster) log [DBG] : pgmap v1002: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 5 05:28:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:28:04.583 158820 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 5 05:28:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:28:04.584 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 5 05:28:04 localhost ovn_metadata_agent[158815]: 2025-12-05 10:28:04.585 158820 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 5 05:28:04 localhost nova_compute[280228]: 2025-12-05 10:28:04.942 280232 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 5 05:28:05 localhost sshd[332383]: main: sshd: ssh-rsa algorithm is disabled
Dec 5 05:28:05 localhost systemd-logind[760]: New session 76 of user zuul.
Dec 5 05:28:05 localhost systemd[1]: Started Session 76 of User zuul.